Category: Startups

HyperTrack: Managed Service for Live Location Tracking at Scale

By Alexander Kishinevsky, VP of Engineering, HyperTrack

We live in a world where it is easier than ever to build software, but harder than ever to operate it. Let me explain.

Suppose you are building location-aware applications to make the mobile workforce more productive and logistics networks more efficient. To do this, you would start by getting the live location from a bunch of devices and then show it to the customer on a map in real-time. But in order to do that successfully, you also need to operate complex infrastructure to ingest, process, store, provision, and manage this data. The process gets even more complex as you go from lab to production, scale up devices in production, and add more use cases to service various teams that want to use this data. Before you know it, you are spending valuable engineering resources managing the infrastructure that you wish just worked.

Enter HyperTrack. When we first started the company in late 2015, we spun up Heroku instances for a Django-Python app built as a monolith. We made the mistake of building an API product as an application, and later moved the party to Amazon EC2 and Amazon RDS instances and deconstructed the monolith into a microservices architecture. The bills racked up, and our engineers were spending time managing servers, databases, deployments, and migrations. We made the mistake of operating our own infrastructure instead of focusing on building the product.

In early 2019, our Architect Thomas Raffetseder and I started building a serverless architecture with AWS managed services. The idea was to build a platform that scaled up and down while we were asleep, even as customers from across the globe ramped their usage up and down at whim. This involved high-precision craftsmanship to wire up hundreds of resources across a dozen AWS managed services – Amazon API Gateway, Amazon Kinesis, AWS Lambda, Amazon DynamoDB, Amazon DynamoDB Streams, Amazon SNS, Amazon RDS, AWS Glue, AWS Step Functions, Amazon S3, Amazon Cognito, Amazon Athena, Amazon CloudWatch, and AWS AppSync. This complexity is only manageable with infrastructure-as-code that team members in remote offices can automatically deploy and build on top of. Therefore, we built IaC and continuous integration/deployment in parallel.

Read more about our serverless architecture in this blog post that trended on Hacker News earlier this year.

The complexities of the device-to-server infrastructure were known, but the complexities of server-to-frontend were elusive.

We knew that millions of devices would stream time-series data from our SDK up to servers that ingest this data reliably, process the peculiar data set for accuracy, store the processed data in real-time data stores for applications to consume, and archive it in a data lake for analytics. Applications would use this real-time data through REST APIs, webhooks, and embeddable views.

We operated under the assumption that getting the data into the platform is the hard part and discovered in the last days of each release that getting data out to the front-end was more complex than we gave it credit for.

We could use sockets to push data to the front-end, and private APIs for the front-end to query at load. Or we could manage our own GraphQL servers to streamline the queries and subscriptions between the platform and front-end. The front-ends could be native or hybrid mobile apps. They could be web apps used on mobile or desktop. Widgets built by HyperTrack might be embedded by customers as-is, or used as open source libraries that are customized by customers to build their own app experiences. There were more questions than answers and our heads got a little dizzy. It was not fun to run into this architecture challenge later in the cycle with deadlines staring us in the face.

To tackle this, I took a bite of my falafel wrap with my left hand and searched Google for “connect dynamodb with react graphql” with my right. Out came AWS AppSync. The next ten minutes at the lunch table swung between excitement, fear, doubt, relief, and curiosity. Fortunately, curiosity dominated over the other emotions, and after a few hours of playing around, we knew that AWS AppSync would be the answer, at least for now. The data was sitting there in Amazon DynamoDB and the React components were sitting there in the front-end, looking for GraphQL schemas to drink from. A managed service to do just that was sitting in a different room of the same house that we lived in. We flipped that door open.
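For a sense of what this looks like in practice: an AppSync API exposes a GraphQL endpoint that clients query over HTTPS. Here is a minimal sketch of assembling such a request, assuming a hypothetical device-location schema and API-key authentication; the field names and types are our illustration, not HyperTrack's actual API.

```python
import json

# Hypothetical GraphQL query for the kind of schema AWS AppSync can
# expose over a DynamoDB table; field and type names are illustrative.
DEVICE_LOCATION_QUERY = """
query GetDeviceLocation($deviceId: ID!) {
  getDeviceLocation(deviceId: $deviceId) {
    deviceId
    latitude
    longitude
    recordedAt
  }
}
"""

def build_appsync_request(endpoint: str, api_key: str, device_id: str) -> dict:
    """Assemble the HTTPS POST an AppSync client sends: the GraphQL
    document plus variables, authenticated here with an API key."""
    return {
        "url": endpoint,
        "headers": {
            "Content-Type": "application/json",
            "x-api-key": api_key,  # one of several AppSync auth modes
        },
        "body": json.dumps({
            "query": DEVICE_LOCATION_QUERY,
            "variables": {"deviceId": device_id},
        }),
    }
```

In a real front-end, the Amplify or Apollo client would issue this request and manage subscriptions; the point is that the payload is plain GraphQL over HTTPS, with no GraphQL server for the team to operate.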

Six months later, and with our fair share of teething pains, we have a dozen front-ends integrated into a diverse set of applications in the hands of millions of users. Initially, AWS AppSync was difficult to learn, adopt, and troubleshoot. We spent countless late nights figuring out how to set up Terraform as our primary IaC framework to help create and maintain schemas, various data source types, and resolvers. However, the challenge of operating our own GraphQL servers at scale to achieve the same would have been much greater and would have consumed much of our precious time.

Overall, going serverless is 25-30% cheaper per unit than when we managed our own Amazon EC2/RDS instances. In hindsight, the extra cost came from over-provisioning that left us paying for capacity we weren’t utilizing, or under-provisioning that led to reactive logging and sudden scale-ups when servers got choked. The bigger win is that our customers are happier, even as the 50+ person-hours per month we spent operating servers has now shrunk to near zero. There are 99 problems that keep us up at night, but operating servers ain’t one.

We live in a world where it is easier than ever to build software, but harder than ever to operate it. Building with AWS managed services helps us focus on building our product while leaving the compute, store, and real-time messaging infrastructure to them—just as building with HyperTrack helps our customers focus on building their product while leaving the live location to us.

from AWS Startups Blog

How Quantico Energy Solution is Using AI—and AWS—to Reimagine the Oil and Gas Industry

According to the U.S. Energy Department, oil production in America increased by less than 1% during the first half of 2019, a decrease of 7% year over year compared with the same period in 2018. Given that gains in onshore oil drilling yield and efficiency are beginning to flatten, the oil and gas industry needs to dive into another deep engineering phase to once again increase efficiency.

Houston-based oil and gas startup Quantico Energy Solutions thinks they’ve found one solution to this problem: artificial intelligence (AI).

Founded in 2012, Quantico focuses on applying AI to key subsurface oil and gas challenges. Their core competency is using AI to increase resolution and lower cycle times for reservoir characterization. They do this by constraining the AI with physics to achieve accurate results despite sparse subsurface data, and are well positioned to use AI-driven subsurface prediction to lower the cost of energy exploration, development, and production.

Quantico’s AI has already been applied on hundreds of wells on U.S. land and in deepwater, and according to Nathan Chang, Quantico’s director of operations, the company is now positioning itself for growth—“growth of our services into SaaS, growth in the market place of the energy value chain and growth in value for our shareholders.” Their next major advancement will be the launch of QEarth, the industry’s first real-time, high-resolution earth model.

We recently spoke to Nathan Chang, who oversees recruiting, business development, operational production, product management, and marketing for Quantico, about his experiences at the company.

What’s one unique thing that most people don’t know about what your company does?

Quantico works with more major oil companies around the world than any other pure-play AI company. Our customers include Shell, Equinor, Exxon, ConocoPhillips, and Nabors Industries.

How do you differ from your competitors?

Unlike other startups in the space, Quantico strictly focuses on the subsurface, or engineering challenges where understanding the heterogeneity of geology is most important.

How are you looking to deliver exceptional experiences for your customers? How has AWS helped you achieve that?

In multiple instances we are looking to extend the reach of Quantico’s deep learning neural networks (NN) into the client environments with seamless integration.  AWS allows us to extend our models as microservices with the ability to bring the full breadth of our data science workflows into the client desktop environment through cloud integration and deployment.  An easy reference is our work with Shell.

Could you elaborate some more on how AWS has helped you achieve this? What AWS services are you using?

AWS is the clear leader in terms of both cost and performance for Quantico’s demanding workloads.  Big data requires massive amounts of storage, database throughput and compute power.  AWS’s reliability and scalability continues to meet the demands of Quantico’s proprietary machine-learning algorithms. Among others, we are using Amazon S3, Amazon API Gateway, Amazon SageMaker, Amazon Cognito, Amazon DynamoDB, Amazon Aurora, and AWS Lambda.

What’s on your roadmap for the rest of this year and the next few years? What is the most critical initiative you’re working on now? 

Specifically, we’re looking to (1) build direct plugins into client desktop software environments that leverage cloud-native NN microservices, (2) integrate multiple service offerings into our QEarth platform, and (3) automate all of our service offerings and ensure decoupled earth modeling services are available at any endpoint.

from AWS Startups Blog

In a Male-Dominated VC World, The Vinetta Project’s Upcoming Event Aims To Give Female Founders Funding

This October, more than 200 investors and entrepreneurs will gather in New York City for a pitch and panel event where four seasoned founders, whittled down from a list of 200 contenders, will compete for a $20,000 prize. Just one winner will snag the cash, but no one will walk away empty-handed, since the event is designed to provide female founders with the contacts, funding, networking opportunities, and resources that male founders have already been receiving for decades. It’s all part of the NYC Showcase Series hosted by The Vinetta Project, a company built with the mission to close the gender-based funding gap in the VC world.

Just 2.2 percent of all venture dollars went to female-founded businesses in 2018. That’s a number that 80 percent of investors seem okay with—when asked, they said they thought multicultural and women business owners receive the right amount of capital, or more than they deserve.

Other numbers tell a different story. It’s one where more women in charge wouldn’t simply close a funding gap, but would help investors get more bang for their buck. Women-operated venture-backed companies have on average a 12 percent higher annual revenue than those operated by men, and organizations inclusive of women in top management achieve 34 percent better returns for investors. And the companies with a female founder at the helm? They perform 63 percent better than all-male teams.

It was those numbers that convinced LA-based founder Vanessa Dawson to start Vinetta six years ago. Part of her success has come from her belief in growing VC backing right in the founders’ own backyard. Vinetta has created hubs in seven cities across the US, each with a hyper-local focus.

“It prevents it from just being in Silicon Valley, and it allows the founders that are in different areas that may not be the classic cities that investors are going to to actually get that same support and interest from an investment perspective,” says Madelaine Czufin, director of platform and co-director of the New York Board at The Vinetta Project.

It also helps founders to actually sit down and connect with potential investors, receive hands-on feedback from advisory committees who understand the ecosystem, and have face-to-face meetings through events, including hacking courses and private dinners with small groups of like-minded founders and investors.

“It’s meant to be a way to formulate real relationships and really move the needle on getting these founders in front of strategic partners and investors,” says Czufin.

The needle has been moved. Since its inception six years ago, Vinetta has helped more than 2,300 female founders secure more than $180 million to grow their companies. This year’s NYC event at the AWS loft in SoHo will showcase four of those founders pitching to female leaders and investors from companies including First Aid Beauty and Radian Capital. They’ve been narrowed down from a list of 200 female founders, all of whom were tough competitors.

“It’s an incredible value add for founders to even apply,” Czufin explains. “They’re getting feedback that they want from a partner at a VC, so they’re getting real, tangible value adds from just being part of the Vinetta community.”

Two of the four finalists have used new tech to redesign outdated medical equipment: Adriana Catalina Vasquez Ortiz is an MIT grad who has designed the Lilu breast pump that massages the breast in order to make pumping more efficient, while Sanna Gaspard will be pitching for Rubitection, a technology that will help modernize early bedsore detection and improve patient care, particularly for currently underserved patients. Stacy Flynn will present her solution to minimizing the waste that fast fashion creates with her company, EverNu, which makes engineered fiber from discarded clothing. Finally, Lisa Guerrera will discuss how she plans to transform the beauty industry by reinventing the ingredients list with her company See Thru.

Interested in learning more? You can check out The Vinetta Project’s site here. It could be the first step on your path to gaining the networking opportunity available during the NYC showcase, learning how you can apply to be a part of Vinetta’s growing group of female founders, or putting some of your investment money toward the latest in female-founded innovation.

from AWS Startups Blog

Caching the Uncacheable: How Speed Kit Accelerates E-Commerce Websites

Industry wisdom holds that three mega trends have changed how e-commerce works today. First, e-commerce exhibits double-digit growth rates, as more and more users shop online. Second, attention spans are declining, due to information overload and the number of screens that surround us. Third, usability expectations are at a maximum, since fast market leaders like Amazon set the benchmark extremely high. As a consequence of these trends, users leave when page loads take too long. This simple fact has complex implications for today’s online shopping, because building fast websites is not as simple as it used to be.

While traditional web caching plays a critical role in content delivery across all industries, the high degree of personalization in modern e-commerce has seemingly outgrown its capabilities. As the HTTP caching model has been designed for distributing static and generic assets under fixed caching times, standard caching does not account for content that changes unexpectedly or is unique for every user. Immutable and generic resources like images or stylesheets are thus usually accelerated with globally distributed content delivery networks (CDNs). Since modern e-commerce relies on product recommendations and other means of personalization, though, the performance-critical website itself (i.e. the HTML) is typically considered uncacheable and therefore extremely difficult to deliver fast.

To fill this gap in the current state of the art, German software startup Baqend has developed Speed Kit as a SaaS solution for accelerating e-commerce websites. Speed Kit is designed for websites of all sizes and trusted by customers ranging from medium-scale online vendors like Stylefile to billion-revenue retailers such as the OTTO subsidiary Baur or sports retailer Decathlon.

Diagram: Speed Kit’s architecture

To make adoption as easy as possible, Speed Kit is integrated as a single-line JavaScript code snippet that runs in the browser. Through smart processing within the user device, Speed Kit can thus leverage a unique caching scheme that is impossible to apply for traditional CDNs. In more detail, Speed Kit loads the personalized website for the logged-in user like the browser normally would, but in addition also loads the page as an anonymous user from its caches. Since anonymous users are typically shown a generic version of the page, the anonymous HTML is easily cacheable and can therefore be delivered faster. As a result, the anonymous version of the page (from Speed Kit’s caches) is displayed almost instantly and the personalized elements (from the origin server) are merged as soon as they arrive. To make this work, Speed Kit is built on top of Service Workers, a brand-new browser technology for manipulating browser requests at the network level.
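The two-track load described above can be sketched conceptually: render the cached anonymous HTML immediately, then splice in personalized fragments once the origin responds. The placeholder mechanism below is purely our illustration of the idea, not Speed Kit's actual implementation.

```python
# Conceptual sketch of Speed Kit's two-track page load (our illustration,
# not the real implementation): the cached anonymous page renders right
# away, and personalized fragments fill slots when the origin answers.

CACHED_ANONYMOUS_HTML = (
    "<header>{{greeting}}</header>"
    "<main>Product catalog</main>"
    "<aside>{{recommendations}}</aside>"
)

def merge_personalized(cached_html: str, fragments: dict) -> str:
    """Swap each placeholder slot for its personalized fragment,
    falling back to an empty string if the origin omitted one."""
    for slot in ("greeting", "recommendations"):
        cached_html = cached_html.replace(
            "{{%s}}" % slot, fragments.get(slot, "")
        )
    return cached_html

# The generic page ships first from the cache; this merge runs once the
# slower, personalized origin response arrives.
page = merge_personalized(
    CACHED_ANONYMOUS_HTML,
    {"greeting": "Hi, Alex", "recommendations": "Running shoes"},
)
```

The key property is that the slow, uncacheable part of the response no longer blocks first paint; only the personalized slots wait on the origin.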

Since it is running as a Service Worker process within the browser, Speed Kit can intercept HTTP requests against slow origins and redirect them to Speed Kit’s fast caching infrastructure instead. For example, third-party resources like the Google Analytics script do not have to be served from Google’s domain over a cold TCP connection, but can be delivered from Speed Kit’s own CDN or even the browser cache. This kind of optimization is impossible for CDNs, because they do not optimize the last mile between the edge server and the user device.

Since measuring the uplift is just as important as achieving it, real-user monitoring (RUM) for statistically sound A/B testing is already built into Speed Kit. To capture both performance- and business-related KPIs, data from every page load is tracked and sent to a scalable analytics pipeline hosted on AWS. The tracking data is ingested through a Docker-based Amazon EC2 cluster, stored in Amazon S3, continuously imported into an SQL warehouse based on Amazon Athena, and ultimately fed into Amazon QuickSight dashboards for visualization. Since the entire analytics stack is built on top of AWS services, it combines automated reporting and complex analyses with low end-to-end latency at massive scale.

Speed Kit promises a new era of web performance and AWS provides the perfect infrastructure to make sure it delivers.

from AWS Startups Blog

In Case You Missed It – An Evening with Female Startup Founders

In support of the 2019 Grace Hopper Celebration, AWS partnered with revolutionary accelerator Y Combinator and Elpha, a startup professional network for women in tech, to host an evening reception for female startup founders. Over 80 female leaders were in attendance for an evening of networking and idea sharing. Jory Des Jardins, AWS’ Head of Global Startup Marketing, kicked off the evening with a formal welcome to guests on behalf of AWS. Kat Manalac, Partner at Y Combinator, then talked about the accelerator’s funding model for startups and introduced Cadran Cowansage, CEO and Co-Founder of Elpha, who spoke about the startup’s journey in development with Y Combinator. The evening concluded with more time for guests to network and build relationships with other inspirational female leaders in the startup community.

To stay up to date with all upcoming events for startup leaders, be sure to follow @AWSstartups on Twitter, as well as the AWS Startups Blog!

from AWS Startups Blog

AgTech: How House of Crops is Digitizing the Least Digital Industry

House of Crops founders are digitizing the agriculture industry in Germany

Guest post by Maximilian Commandeur, COO, House of Crops

Berlin-based House of Crops is an agtech startup that’s digitizing the world’s most traditional industry: agriculture. Its digital trading platform is based on three pillars—the marketplace, logistics organization, and contract management—and among other innovative approaches, employs a machine learning-supported matching algorithm for evaluating potential trading partners and a predictive model for logistics price forecasting. The platform users – farmers, traders, cooperatives, and processing companies like flour mills and compound feed producers – interact via a dynamic negotiation engine which allows efficient negotiation of over 50 contract parameters.

The idea for House of Crops originated at a casual summer BBQ with friends four years ago. “My friends, mainly farmers and traders, were complaining about how increased administrative work was taking up a significant portion of their daily work,” says Max Wedel, my House of Crops Co-Founder and CEO, adding that 40 million tons of crops in Germany are still traded via phone every year.

As a computer scientist, Max saw an opportunity for technology to help the agri-trading business, and in 2015, he and I set out to investigate the idea further. Building on our experience in agile transformations in the financial industry, we began to methodically test our hypothesis that a neutral digital broker offers enough benefit to the target group to be adopted into their everyday trading processes. Via multiple interviews and workshops, we gained an understanding of our customers’ needs, continuously developing our value proposition. The agricultural industry is known for its weak margins and constant price pressure. Consequently, market participants often aim to increase the efficiency of their processes, improve their reach to find the right supplier or buyer for the product at hand and, of course, get the best possible price. With this in mind, it was clear our digital marketplace would have to enable users to reduce effort on their side, provide a liquid market with access to new trading partners, and offer the best possible price at any given time. After receiving positive feedback, we started forming strategic partnerships and built our first non-functional prototype in order to ensure product-market fit as well as market liquidity from the get-go. In January 2019, we decided to officially launch the company; however, several essential questions remained. Which tech stack should we use? Which architecture was best suited to our needs? Which cloud provider should we opt for?

Finding the Optimal, Future-Oriented Architecture: Comfort Zone vs. What Your Business Needs

Initially, we screened possible product development and sourcing strategies while also comparing different cloud providers. We evaluated cloud providers based on three main factors: customer service, service offerings, and cost structure. At the end of the detailed evaluation process, we chose AWS as our preferred cloud provider. In addition to our previous positive experiences with the platform, it was easy to talk to AWS representatives who were responsible for and interested in early stage startups. Additionally, following a thorough search process for a development service provider, we opted for our current provider Tech Alchemy, a young UK company, which proposed building the application using the MEAN stack. Our initial architecture setup is depicted in Figure 1. It is a rather traditional setup with which the team had plenty of experience. However, with the unique opportunity to build our application from scratch, we re-evaluated whether the setup optimally served our business needs. Ultimately, our desire to iterate quickly, use an agile development approach, and adopt fast time-to-market goals led us to aim for a more future-oriented, flexible, and responsive setup.

Figure 1: Initial (more traditional) Architecture Setup

Therefore, we set out to search for a solution that would better suit our needs. Not being experts in agricultural product trading, we needed to be able to ship features quickly, collect feedback from our partners, and then adjust if necessary. Moreover, while we followed a lean Minimum Viable Product (MVP) approach, the application would need to be easily scalable for both international expansion and the integration of additional value-added services. Of course, being an early stage, bootstrapped company, cost played an important role in our decision as well. This led us to evaluate the possibility of adopting a serverless architecture.

After talking to the AWS Startup Team during the AWS Summit in Berlin in February, we were quickly offered the opportunity to discuss our challenge with a solutions architect (SA). We were positively surprised about the engagement we received from AWS, even though we were, and are, a very early stage startup. In preparation for the meeting, we openly discussed our company’s business goals and the concerns that each of our team members had regarding a serverless architecture. As a result, we provided Mat (“our” SA) and Marius (“our” Account Manager) with detailed information about the challenges, tech stack, and planned architecture.

In a one-hour architecture discussion, Mat and Marius were able to answer all of our questions and quickly identified possible ways to build on the team’s expertise while ensuring a gradual evolution toward a serverless setup. As a follow-up, Mat quickly provided us with thorough and highly insightful material related to our questions and made additional suggestions to further support our goals. After a couple of days of R&D, we were able to re-design our architecture (see Figure 2) in a way that sets us up for future challenges while ensuring the team feels comfortable with it (we’d like to use this chance to say a big thank you to Mat and Marius). The focus was on using as many serverless and fully managed AWS services as possible, to reduce our operational exposure as far as possible. The only component we have to manage directly now is the SonarQube installation on EC2. The public perimeter is based on CloudFront as a CDN, which hosts the AWS WAF and Shield services to protect our front door and give our customers a better, faster experience. This fronts a deployment of Amazon API Gateway, and both of these services have certificates updated by AWS Certificate Manager. The API Gateway uses Amazon Cognito for authentication, and then invokes a number of AWS Lambda functions to handle the business logic, talking to Amazon SNS, DocumentDB, etc. For administration, we use the fully managed Client VPN service.

Figure 2: Re-designed Architecture Setup after AWS Discussion
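In the setup just described, each piece of business logic lives in a Lambda function behind API Gateway, with Cognito authorization already enforced before the function runs. The following is a minimal sketch of such a handler; the payload shape and field names are hypothetical, not House of Crops' real API.

```python
import json

# Hypothetical Lambda handler behind Amazon API Gateway with a Cognito
# authorizer: the proxy event carries the verified user claims, and the
# function returns a JSON response. Names and fields are illustrative.

def lambda_handler(event, context):
    # Claims placed on the event by the Cognito authorizer
    claims = (
        event.get("requestContext", {})
        .get("authorizer", {})
        .get("claims", {})
    )
    body = json.loads(event.get("body") or "{}")

    offer = {
        "user": claims.get("email", "unknown"),
        "crop": body.get("crop", "wheat"),
        "tons": body.get("tons", 0),
    }
    # In production, this is where the function would publish to Amazon
    # SNS or persist the offer to Amazon DocumentDB.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(offer),
    }
```

Because API Gateway and Cognito handle routing, TLS, and authentication, the function body stays focused on domain logic, which is exactly the operational-exposure reduction described above.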

Overall, adopting the new technologies was not as tough as we initially thought. The experience taught us not to shy away from difficult decisions – i.e. traditional infrastructure vs. serverless – for short-term reasons. This is not to say that we did not run into any complications. The R&D took a bit longer than we expected, and in the first days of development, we had a steep learning curve. Additionally, we had some initial challenges with the technical setup of the new services. However, both the online documentation from AWS and the business technical support we received (which we accessed for free thanks to the AWS Activate program) enabled us to quickly resolve the issues. Since starting development at the beginning of April, three months ago, we have not run into any more issues in the last two months. This lets us focus completely on creating new features while decreasing our AWS infrastructure costs.

We scheduled an architecture review for July to make sure we continuously re-evaluate our setup and learn from our experiences. The team was overall content with the architecture we have chosen and eagerly discussed potential improvements. As a result, we agreed on an approach to become (close to) fully serverless by integrating AWS Client VPN and only spinning up an EC2 instance for the time our static code analysis tool is running during build and deployment.

Up and Running: Exciting Times Ahead

Looking back at the past months, we have managed to successfully discuss challenging topics that address the foundation of our future application. By talking to AWS solutions architects and opting for AWS services, we were able to objectively discuss the team’s concerns and arrive at a cheap, reliable, and scalable infrastructure solution that allowed us to build on our previously acquired skills while setting the application up for future innovations.

At House of Crops, we believe in open communication and collaboration as the basis for constant progress.  We actively promote discussions with all market participants to understand their detailed requirements and identify possible benefits. If you are a farmer, trader or mill, an agtech company, or simply someone wanting to exchange ideas and experiences, feel free to contact us at [email protected] or visit our website www.houseofcrops.de.

We are hiring! If you are interested in joining the House of Crops team, check out our open positions on our website or apply directly at [email protected].

from AWS Startups Blog

Building Trust with Complete Auto Reports

Complete Auto Reports (CAR) is the brainchild of New Jersey-based car mechanic Ricardo Da Cruz. He founded the auto repair software company after spending hours each day wading through paper records and building trust with customers of his father’s auto repair company, Joman Auto Service, which Ricardo now owns. Ricardo shares how these challenges inspired him to build his own enterprise software, how that software works, and where he sees the industry going next.

from AWS Startups Blog

How FinTech Startup NIRA Leveraged AWS’ Cloud Solutions to Enable Financial Inclusion

The NIRA Finance team in India

Guest post by NIRA Finance

Less than 10% of India’s population is able to get loans from banks. The majority of India’s population of 1.3 billion doesn’t have assets to put up as collateral or a credit rating, so banks are unable to underwrite their loan requests. It’s also too expensive for banks to use the traditional lending model to disburse loans of small ticket sizes (under INR 1 lakh), given the high costs of processing loan requests. Technology can address this problem by reducing the costs of both processing and distribution. A growing group of Indian fintech companies is now addressing this market for small-ticket loans.

NIRA is a consumer finance company leading that charge. NIRA believes there are good borrowers across income levels. Through its mobile app, NIRA provides small loans of up to INR 1 lakh to qualifying borrowers.

NIRA’s target group consists of blue and white-collar salaried individuals, earning between INR 15k and INR 40k per month, across Tier 1 and Tier 2 cities, i.e. Indian cities with populations above 100,000 and above 50,000 respectively. This group falls into an underserved section of the market: individuals who find it challenging to get loans for urgent personal or family needs.

How Does NIRA Do It?

NIRA saves costs by taking traditional banking processes online, drastically reducing investments in real estate to set up branches and staff to service customers at those branches. This helps ensure the cost of the loan to the customer is substantially lower than the alternatives available today.

NIRA’s major differentiation is not the loan itself, but the pre-approved credit limit, which is available to the customer at their time of need, free of any fees. Customers pay nothing upfront; they simply withdraw up to their credit limit, which increases as they successfully repay. NIRA offers loans to customers who do not have a credit score, a deal-breaker for other, more traditional lenders, and asks for much less documentation than other players in the market.

It is feasible for NIRA to extend credit to this group in an economically viable way because it can use new kinds of data generated on borrowers’ mobile phones. This allows NIRA to build a reliable, if unconventional, credit assessment and collection mechanism. NIRA also uses risk-based pricing: borrowers pay an interest rate of between 1.5% and 2.5% per month, depending on their score on NIRA’s proprietary credit model.
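NIRA’s scoring model is proprietary, but the risk-based pricing idea can be sketched in a few lines. The 1.5%–2.5% per-month band comes from the figures above; the score scale and the linear mapping are illustrative assumptions, not NIRA’s model:

```python
def monthly_rate(credit_score: float) -> float:
    """Map a model score in [0, 1] to a monthly interest rate.

    The 1.5%-2.5% per-month band is from the article; the score scale
    and the linear mapping are illustrative assumptions.
    """
    if not 0.0 <= credit_score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    # Better scores earn lower rates: 1.0 -> 1.5%, 0.0 -> 2.5%.
    return 0.025 - credit_score * (0.025 - 0.015)

# A borrower scoring 0.8 pays 1.7% per month on the drawn amount:
interest = 20_000 * monthly_rate(0.8)  # INR 340 for one month on INR 20k
```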

Valuable Partnerships

NIRA has partnered with several large banks, including Federal Bank, along with some key Non-Banking Finance Corporations (NBFCs) to serve more niche segments in the group. This is a mutually beneficial partnership; NIRA is able to get loans approved for otherwise unserviceable customers, and its partners are able to grow their customer base.

Once a borrower’s credit limit is activated, they can draw down loans from as small as INR 5k all the way up to their full limit.

They only pay interest on the amount drawn, and their limit replenishes as they repay. Further, as borrowers demonstrate a good payment history with NIRA, their limit starts increasing. For example, if a borrower who took an INR 20k loan for a child’s education repays it on time, their limit might increase to INR 50k the next year, which they could use to cover a major family expense such as a wedding.
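A revolving limit like the one described above can be sketched as a small class. The numbers and the replenishment rule are illustrative, not NIRA’s actual policy:

```python
class CreditLine:
    """Minimal sketch of a revolving credit limit."""

    def __init__(self, limit: int):
        self.limit = limit   # pre-approved ceiling, in INR
        self.drawn = 0       # amount currently outstanding

    def draw(self, amount: int) -> None:
        if amount > self.limit - self.drawn:
            raise ValueError("amount exceeds available credit")
        self.drawn += amount

    def repay(self, amount: int) -> None:
        # Repayment replenishes the available limit.
        self.drawn = max(0, self.drawn - amount)

    def monthly_interest(self, rate: float) -> float:
        # Interest accrues only on the drawn amount, not the full limit.
        return self.drawn * rate


line = CreditLine(limit=50_000)
line.draw(20_000)                       # e.g. a child-education expense
interest = line.monthly_interest(0.02)  # INR 400 at 2% per month
line.repay(20_000)                      # full limit available again
```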

The Technology Behind NIRA

NIRA’s user journey can be broken down into two parts. First, an in-principle decision is given to the customer within one minute based on the information they provide. Second, multiple verifications are performed, including employment and bank account checks, after which the customer receives a final loan decision.

NIRA projects its mobile application could be serving 10 million customers a year in the next 5-6 years. To meet these ambitious growth plans, NIRA was looking for a technology enabler that could help it scale seamlessly. It also needed infrastructure that was cost-effective, robust, and secure to support every aspect of its startup journey. Being in the financial services domain, it required a secure and reliable infrastructure to comply with a range of financial data regulations. More importantly, handling customers’ sensitive financial data leaves no room for error.

In 2016, NIRA was looking for a cloud partner in Mumbai, but few fit the bill. Since AWS already had plans to set up a data centre in the city and provided a highly robust and scalable solution, it was NIRA’s first choice.

AWS Solutions Deployed

NIRA chose the AWS platform for its scalability and reliability. When it became mandatory for companies to do e-KYC (Know Your Customer) verification linked to the Aadhaar card (an identity card with a unique identification number issued to Indian residents), Amazon Rekognition helped NIRA verify and validate images of customers’ Aadhaar cards. The mandate also required NIRA to store data on highly secure servers, a need met by Amazon S3 storage.

“During scale-up, as we encountered new data related regulations or partner lender’s requests concerning data processing, we always found that AWS had a ready solution. It reflects AWS’ forward thinking to pre-empt challenges and build client solutions well in advance,” said Nupur Gupta, Co-Founder, NIRA.

[Architecture diagram: how NIRA Finance uses AWS Lambda to enable its lending solution]

AWS Lambda

To meet its scaling requirements, NIRA implemented AWS Lambda, a pay-per-use, serverless compute service that scales automatically. Essentially, if you have one user, you pay only for that one user’s requests, rather than provisioning servers in advance and waiting for the customer base to grow. With AWS Lambda, NIRA was able to scale from a few thousand customers to 85,000 monthly active customers without any major infrastructure or configuration changes. The pay-per-use model also enabled the company to keep its spending in check.

“We were able to optimize AWS Lambda and scale up to 10X capacity. We deliver a good end-user experience today, though we may have dedicated server and containerization requirements in the future. AWS Lambda ensures that minimal time is spent going from one scale to another,” said Gupta.

These AWS Lambda functions are also invoked by Amazon API Gateway, which handles API management and user authentication and authorization through the Amazon Cognito service.
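In outline, such a function might look like the following. The location of the claims matches API Gateway’s REST-API event shape when a Cognito authorizer is attached; the response body and field names are placeholders, not NIRA’s actual API:

```python
import json


def handler(event, context):
    """Sketch of a Lambda behind API Gateway with a Cognito authorizer.

    API Gateway validates the caller's Cognito token and passes the
    decoded claims to the function; the response shape below is the
    standard proxy-integration format.
    """
    claims = event["requestContext"]["authorizer"]["claims"]
    user_id = claims["sub"]  # Cognito's unique user identifier

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"userId": user_id, "status": "ok"}),
    }
```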

Amazon CloudWatch

NIRA used Amazon CloudWatch to log user events such as app registration, verification, uploads, and checks. Based on these logs, it can troubleshoot customer queries and determine the point of failure. For instance, for a third-party API used for an electronic loan agreement, it could log the responses in Amazon CloudWatch and find out whether the process resulted in successful loan disbursement or failed for a given request.
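Inside a Lambda function, anything written through the standard logger lands in CloudWatch Logs automatically, so one structured JSON line per journey step is enough to make failures searchable. A minimal sketch, with field names that are assumptions rather than NIRA’s schema:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def log_event(step: str, user_id: str, success: bool, detail: str = "") -> str:
    """Emit one structured log line per user-journey step.

    In Lambda these lines flow into CloudWatch Logs without extra
    setup; structured JSON makes them queryable later.
    """
    line = json.dumps({
        "step": step,          # e.g. "registration" or "e_agreement"
        "userId": user_id,
        "success": success,
        "detail": detail,
    })
    logger.info(line)
    return line


log_event("e_agreement", "user-123", False, "third-party API timeout")
```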

Amazon CloudWatch provided data and actionable insights to monitor NIRA’s applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of the operational health of the company’s infrastructure. [1]

Amazon DynamoDB

NIRA benefits from DynamoDB’s fast response to customer queries and from a fully managed NoSQL database that auto-scales for performance based on query volume. [2]
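A point read by partition key is DynamoDB’s fastest access pattern and a likely fit for per-customer lookups. A minimal sketch follows; the table and key names are assumptions, and the boto3 calls are shown in comments because they require AWS credentials:

```python
def get_application(table, user_id: str):
    """Point read by the partition key, DynamoDB's fastest access
    pattern. The table and key names here are assumptions, not
    NIRA's actual schema.
    """
    resp = table.get_item(Key={"user_id": user_id})
    return resp.get("Item")  # None when no record exists

# With AWS credentials (boto3 is the AWS SDK for Python):
#   import boto3
#   table = boto3.resource("dynamodb").Table("loan_applications")
#   print(get_application(table, "user-123"))
```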

Other AWS Solutions Deployed

NIRA got access to AWS Activate through the Nasscom Startup and Techstars accelerator programs, which support startups with mentoring and business community development. The AWS Activate framework helped the company directly through credits as well as through invites to industry events. “We have an ongoing relationship with AWS for over two years now. They are very forthcoming with providing support, through tech solutions, knowledge sharing, and where required connecting us with external entities in the ecosystem,” added Gupta.

NIRA also implemented AWS Glue, an ETL tool used to extract data from the database, transform it, and load it into the required applications. It is also used for reporting.

Lastly, the finance company uses Amazon SES (Simple Email Service) to handle all major email communication with customers.
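Sending transactional mail through SES boils down to a single `send_email` call. A sketch of building its arguments follows; the addresses and wording are placeholders, and SES requires the sender address (or its domain) to be verified before any mail can be sent:

```python
def build_email(to_addr: str, subject: str, body: str) -> dict:
    """Build the argument dict for SES's send_email call.

    The sender address is a placeholder; SES will reject mail from
    an unverified Source address.
    """
    return {
        "Source": "noreply@example.com",
        "Destination": {"ToAddresses": [to_addr]},
        "Message": {
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body}},
        },
    }

# With AWS credentials (boto3 is the AWS SDK for Python):
#   import boto3
#   boto3.client("ses").send_email(**build_email(
#       "customer@example.com", "Loan approved", "Your limit is active."))
```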

The Road Ahead

NIRA is planning to introduce multiple products and will scale its data infrastructure to support a higher level of reporting and analytics. Additionally, it wants to focus on controlling costs. The pay-as-you-go model of AWS gives the company flexibility to only incur costs for resources utilized, and Amazon CloudWatch enables monitoring of these resources — allowing the company to discover opportunities for cost-saving.

Going forward, NIRA is looking at automating its resources to address the needs of the fast-growing customer base.

References:

[1] https://www.youtube.com/watch?v=a4dhoTQCyRA

[2] https://aws.amazon.com/dynamodb/

from AWS Startups Blog

Democratizing Legal Services with Hong Kong-based Startup Zegal

Karen Ng, Co-founder of legaltech startup Zegal, chats about how the Hong Kong-based company hopes to democratize access to legal services by building a SaaS platform through which users can discover and communicate with local legal professionals. Ng also shares Zegal’s international expansion history, who its main customers are, and how her role as a co-founder has changed along Zegal’s journey.


from AWS Startups Blog