Tag: DZone Cloud Zone

What I’ve Learned While Building a To-Do App Using Quarkus

Recently I was working on a small project to build a simple browser-based app using Quarkus, jQuery, and PostgreSQL. The focus was on learning Quarkus; the other technologies were complementary. I wanted first-hand experience with Quarkus and GraalVM. Some readers might be new to these terms, so let me start with a small introduction to the two projects.

Quarkus is an open source project with a container-first philosophy for building cloud-native, Java-based applications using a microservice architecture. It brings significant improvements in application boot time (a notorious weak spot for Java-based applications) through build-time metadata processing and, most importantly, by building standalone native images with GraalVM/SubstrateVM. GraalVM statically analyzes the classes to determine which classes and methods are reachable and will be used during application execution. It then passes all this reachable code as input to the GraalVM compiler, which ahead-of-time (AOT) compiles it into a native binary. The native binary runs on a different VM (SubstrateVM) that manages runtime components like the de-optimizer, garbage collector, thread scheduling, etc.

I was eager to check whether the native image really makes booting faster, and I am impressed. But the ride was not entirely smooth, mostly because of my lack of knowledge (no doubt about that) and partly because the technology has yet to mature. Unfortunately, Stack Overflow is not overflowing with the required information!

A few more things to note:

  1. The project is quite resource-intensive to build (to be specific, the native-image build). I first tried to build on Google Cloud Shell, followed by a VirtualBox VM (2 CPUs, 4 GB RAM), and I failed both times! That's surprising considering I was building no more than 10 source files. Finally, I got going on a GCP n1-standard-2 instance (2 vCPUs, 7.5 GB memory), which worked smoothly.
  2. The Maven plugin probably needs some improvement. While the plugin does quite well at creating a scaffold and adding extensions, a few more features would be appreciated, like listing all extensions available to a project and removing extensions from a project. Finally, after creating the native build, mvn clean fails because there are directories created under target/ with root permissions; you'll have to delete them manually.
  3. The extension and guide lists are quite long. At the time of this writing, there are around 13 categories of extensions. Some of the extensions are quite interesting, like Kubernetes and lambda extensions, worth trying!
  4. Setting up GraalVM was another tricky area. Quarkus supports the Community Edition 1.0 RC16 of GraalVM at the time of this writing. Care also has to be taken to install the required OS packages mentioned in the Quarkus guide, which can be found here, and to set JAVA_HOME and GRAALVM_HOME.
  5. The native binary build failed multiple times due to a Jandex index failure. I found some interesting solutions on Stack Overflow, which can be found here; however, they did not work for me. An alternate solution that did work, found in GitHub issue logs, is to execute mvn clean install -DskipTests -DskipITs before building the native binary. I hope this will be fixed in the next version.
  6. I was not able to externalize the Hibernate properties, and the app's Docker container will not be able to talk to the back-end PostgreSQL container on localhost. The solution is straightforward and can be found here (a rough sketch of the datasource configuration follows this list).
  7. Deploying to Kubernetes has been made super easy. After adding the Kubernetes extension, you can run mvn package and the Kubernetes deployment manifest will be generated at target/wiring-classes/META-INF/kubernetes/kubernetes.yml. Apply changes if required and deploy it using kubectl.
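For reference, the workaround, the native build, and the datasource configuration described above look roughly like the following. Treat this as a sketch: the database name, credentials, and host name are illustrative, not taken verbatim from my project.

# Work around the Jandex index failure before building the native binary
mvn clean install -DskipTests -DskipITs

# Build the native executable (GRAALVM_HOME must point at GraalVM CE 1.0 RC16)
mvn package -Pnative

# application.properties: point the datasource at the PostgreSQL container by its
# service name instead of localhost (names below are made up)
quarkus.datasource.url=jdbc:postgresql://todo-postgres:5432/tododb
quarkus.datasource.username=todo
quarkus.datasource.password=todo

# Deploy the generated manifest after adding the Kubernetes extension
kubectl apply -f target/wiring-classes/META-INF/kubernetes/kubernetes.yml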

Quarkus is built on other popular open source projects like MicroProfile, Hibernate, Vert.x, RESTEasy, and many others, and it is backed by tech majors like Oracle and Red Hat. Personally, I find it an interesting tool to work with; it has not matured enough to be production-ready yet, but it has the potential to become a good alternative to Spring Boot.

from DZone Cloud Zone

Routing external traffic into your Kubernetes services

There are several methods to route internet traffic to your Kubernetes cluster. However, when choosing the right approach, we need to consider factors such as cost, security, and maintainability. This article guides you in choosing an approach to route external traffic to your Kubernetes cluster by considering those factors.

Before routing external traffic, let’s get some knowledge of the routing mechanism inside the cluster. In Kubernetes, all the applications run inside pods. A pod wraps one or more containers and gives more advantages over static instances.

To access an application running inside a pod, there should be a dedicated service for it. The mapping between the service and the pod is determined by a “label selector” mechanism. Below is a sample YAML that can be used to create a Hello World application; from it you can get a clear idea of the label selector mapping.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - image: gcr.io/hello-minikube-zero-install/hello-node
          imagePullPolicy: Always
          name: helloworld
          ports:
            - containerPort: 80

Let’s see how we can create a Kubernetes service for the above Hello World application. In this example, I have used the app=helloworld label to define my application. Now you need to use this “helloworld” label as the selector of your service; that is how the service identifies which pods it should look after. Below is the sample service corresponding to the above application:

apiVersion: v1
kind: Service
metadata:
  name: "service-helloworld"
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: "helloworld"
  type: ClusterIP

This specification will create a new Service named “service-helloworld” which targets TCP port 8080 on any Pod with the "app=helloworld" label.

Here you can see the service type is “ClusterIP.” It is the default type of a Kubernetes service. Besides this, there are two other service types, “NodePort” and “LoadBalancer.” The mechanism for routing traffic to a Kubernetes cluster depends on the service type you use when defining a service. Let’s dig into the details; a sketch showing how the type field changes follows the list below.

  1. LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. In AWS, an ELB is created for each service whose type is set to “LoadBalancer.” You can then access the service using the dedicated DNS name of the ELB.
  2. NodePort: Exposes the service on each Node’s IP at a static port. You can connect to the NodePort service outside the cluster by requesting <NodeIP>:<NodePort>. This is a fixed port to a service and it is in the range of 30000–32767.
  3. ClusterIP: The ClusterIP service is the default Kubernetes service type. It exposes the service on a cluster-internal IP, which makes the service only reachable from within the cluster. To expose such services to the outside, you need an ingress controller inside your cluster.
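To make the comparison concrete, here is a hedged sketch of the same hello-world service exposed with a different type; apart from the optional nodePort, only the type field changes (the port value 30080 is an arbitrary example):

apiVersion: v1
kind: Service
metadata:
  name: "service-helloworld"
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 30080   # only meaningful for NodePort; must fall in 30000-32767
  selector:
    app: "helloworld"
  type: NodePort        # or LoadBalancer to have the cloud provider create a load balancer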

Considering these service types, the easiest way of exposing a service outside the cluster is the “LoadBalancer” service type. But these cloud load balancers cost money, and every LoadBalancer Kubernetes service creates a separate cloud load balancer by default, so this service type quickly becomes very expensive. Can you bear the cost of a deployment that creates a separate ELB (if the cluster is in AWS) for every single service you create inside the Kubernetes cluster?

The next choice we have is the “NodePort” service type. But NodePort has several drawbacks. By design it bypasses almost all the network security provided by the Kubernetes cluster, and it allocates a port dynamically from the 30000–32767 range, so standard ports such as 80, 443, or 8443 cannot be used. Because of this dynamic allocation, you do not know the assigned port in advance; you need to look it up after creating the service, and on most hosts you also need to open the relevant port in the firewall after the service is created.

The final and most recommended approach to routing traffic to your Kubernetes services is the “ClusterIP” service type. The one and only drawback of using ClusterIP is that you cannot call the services from outside the cluster without using a proxy, because by default ClusterIP services are only accessible from inside their own Kubernetes cluster. Let’s talk about how the Kubernetes ingress controller can help expose ClusterIP services outside the network.

The following diagram illustrates the basic architecture of the traffic flow to your ClusterIP services through the Kubernetes ingress controller.

If you have multiple services deployed in your Kubernetes cluster, I recommend the above approach due to several advantages:

  • Ingress enables you to configure rules that control the routing of external traffic to the services.
  • You can handle SSL/TLS termination at the Nginx Ingress Controller level.
  • You can get the support for URI rewrites.

When you need to provide external access to your Kubernetes services, you need to create an Ingress resource that defines the connectivity rules, including the URI path and backing service name. The Ingress controller then automatically configures a frontend load balancer to implement the Ingress rules.
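As a hedged example, an Ingress resource for the service-helloworld service created earlier might look like the following (this assumes an NGINX ingress controller is already installed in the cluster; the path and API version are illustrative and vary with the Kubernetes version):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: helloworld-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: service-helloworld   # the ClusterIP service defined above
              servicePort: 80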

from DZone Cloud Zone

Beyond Serverless: Why We Need A Stateful Data Fabric

The first iPhone was released on June 29, 2007. And while the advent of the iPhone was hardly the only catalyst of the smartphone revolution, I consider this to be as good a birthdate as any for one of humankind’s most consequential innovations. Since then, smartphones have been adopted faster than any other disruptive technology in modern history. But I’m not actually here to write about smartphones, because I think there was an even more important development that day in 2007. The development that changed the world? It was the announcement of the iOS operating system.

In my opinion, iOS changed how humans fundamentally interact with technology, in ways that will far outlast the smartphone era. What I mean is that iOS brought apps into the mainstream. Don’t get me wrong, we’ve been calling application software “apps” since at least 1981. And this didn’t just happen overnight. Until 2010, Symbian was the world’s most widely used smartphone operating system. But iOS crystallized the modern notion of how users engage with apps, and made them accessible to users with even the most limited technical ability. Like written language, the printing press, and telecommunications before them, apps have changed how we communicate with the world.

From Microservices to Serverless

*That seems like an awfully long lead-in for an article about data fabrics…*

I know, I know. Thanks for sticking with me. The reason all this is important is that, while iOS changed users’ relationships with apps, it also changed our relationships with application infrastructures. Instead of shipping bytes of static data from one machine to another, apps now needed to interact with dynamic, continuously changing datasets. Whether data was being generated by mobile users, sensors, or devices, traditional SQL database architectures were soon stretched to their limits by this new generation of mobile apps. Apps were now expected to be reactive, real-time, and highly available while dealing with unprecedented volumes of newly created data. A new generation of specialized data processing and networking software would have to be created as the foundation for this new generation of apps.

Around this time, we saw the rise of microservices architectures and actor-based systems like Akka. We also saw the dawn of AWS and public cloud services. A new class of social media apps created the need for real-time databases like Apache Cassandra and performant message brokers like Apache Kafka. Today, microservices have become a ubiquitous part of enterprise architectures. And we’re starting to see even newer paradigms like serverless.

While serverless seeks to decouple an app’s operations from its infrastructure, this is only a first step. Mike Roberts defines two primary implementations of serverless, Backend-as-a-Service (BaaS) and Functions-as-a-Service (FaaS). In the former, Roberts explains BaaS applications “significantly or fully incorporate third-party, cloud-hosted applications and services, to manage server-side logic and state.” Most of these are rich front-end applications, where a relatively inflexible server-side architecture is perfectly cromulent and can be outsourced to multiple vendors. For the latter, FaaS apps are “run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a third party.”

This model works great for ephemeral tasks, but what about continuous processes with continuously variable event streams? These streaming apps would be better served by a stateful network of serverless functions.

One way to accomplish this is by creating a stateful data fabric.

So What Exactly is a Data Fabric?

According to Isabelle Nuage, a data fabric is “a single, unified platform for data integration and management that enables you to manage all data within a single environment and accelerate digital transformation.” Data fabrics must integrate with external solutions, manage data across all environments (multi-cloud and on-premises), support batch, real-time, and big data use cases, and provide APIs for new data transformations.

Data fabrics weave a cohesive whole from multiple disparate apps and microservices, many of which are hosted by third-party vendors. As such, a data fabric should reduce the effort required to integrate new microservices or platforms, while ensuring existing systems continue to operate as normal. In this regard, data fabrics are the ultimate middleware which all other systems orbit around. Simultaneously, data fabrics are the medium for communication across complex app architectures. Whether communication via stateless REST APIs or stateful streams, data fabrics serve to ensure all microservices have the latest relevant state.

For Stateful Apps, It’s All About the APIs

Stateful microservices alone don’t make a stateful data fabric. That’s because there’s a key difference between a collection of independent stateful microservices and a cohesive stateful system. Many serverless architectures today can claim to be stateful, in the sense that stateful data processors, such as Apache Flink, can be deployed via stateless containers. However, if the primary exchange of data is stateless, such as via REST APIs, then the application itself is primarily stateless.

Deploying stateful microservices via stateless containers is not the same as a data fabric being stateful. A stateful data fabric weaves a network of stateful, streaming links between multiple persistent microservices. This allows for the creation of truly stateful apps; microservices can continuously observe data streams for critical events and peers are continuously subscribed to receive relevant updates. In other words, a stateful data fabric enables real-time, peer-to-peer (P2P) computation instead of a stateless hub-and-spoke architecture oriented around a central datastore.

The open source Swim platform is an example of a fully stateful data fabric, which leverages stateful streaming APIs to communicate across microservices. Stateful microservices (called Web Agents) can perform real-time transformations as data flows or be combined with others to create real-time aggregations. Due to the real-time nature of streaming data, each Swim microservice is guaranteed eventual state consistency with peers. However, Swim can also be configured for other delivery guarantees. Because state consistency is automatically managed by the fabric, developers are no longer on the hook for managing state across isolated microservices via REST API.

Learn More

Share your thoughts about stateful applications below and let us know what you’re building using the open-source Swim platform.

You can get started with Swim here, and make sure to star us on GitHub.

from DZone Cloud Zone

What the Dock is Docker?: A Simplified Explanation of Containers and Docker

The Problem

“Did you get the application I sent you?”

“Yeah, it doesn’t work.”

“It runs fine on my machine.”

I am sure we have all had multiple conversations that can be summed up with these three lines. When running applications in different environments, there are bound to be issues: differing configurations and other dependencies can cause a program to work in one environment but not in another. Problems tend to appear when your application requires a certain configuration or file that is not matched or found in a new environment. Obviously, when you move your application from your development environment to staging and from staging to production, you want those transitions to occur without hiccups due to a multitude of possible issues such as differing language versions, differing configurations, or any other dependencies that your application may have.

Great, we have identified the problem that we are having; now how do we go about solving it? The answer is to use Docker to create Docker images that have everything necessary for the application to run properly. This includes the code, libraries, and any other dependencies that are required for the application to execute. We can then take these Docker images and deploy them to other environments and not have to worry about the dependencies of the application causing problems since all those dependencies are packaged up and included within the Docker image.

All of that may have sounded a little confusing. Containers? Docker? Docker images? If these words are new to you, that is completely fine. Let’s go over some terminology so we can wrap our heads around what was just said and what it can mean for you.

Containers

When you hear the word “container,” you probably think about something like a Tupperware container that you put your lunch in before going to work. Hopefully, when you put your burrito in that Tupperware container and take it to work, it does not go flying outside somewhere along the way. This could make everything outside of the container messy, which would be a problem and defeat the purpose of using the container. The burrito should be isolated in its container and have everything in it that is necessary to fulfill your hunger come lunchtime, and that container should be accomplishing its task whether it is still at home, work, or any other place you can think of.

In programming, containers are very much like the Tupperware container that our burrito is stored in. Usually, nothing should be able to get out of the container and ruin the environment outside of it. Within the container, there should be an environment that the burrito is happy to be in. If our burrito is a web application, it may need plugins, a certain version of PHP that can run those plugins, and any number of other things that are necessary to make the web application run the way it was developed to run. These can all be included in the container. Packaging our applications with all the code and dependencies required to make them run smoothly in any environment makes it easy to develop, move, and deploy applications.

Docker Images

A Docker image is something that we build that can be deployed as containers. Inside the Docker image, we have the application, the dependencies required to run the application, and so on. Put simply, the Docker image is an inactive version of a container. It is analogous to a class in Java, while containers are analogous to the objects.
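As a rough illustration (the base image tag and paths are assumptions, not from a real project), a minimal Dockerfile for the PHP web app example might look like this; building it produces the image, and running the image produces a container:

# Dockerfile: start from an official PHP image that already bundles Apache
FROM php:7.3-apache

# Copy the application code into the web server's document root
COPY src/ /var/www/html/

# The container will listen on port 80
EXPOSE 80

You would then build it with docker build -t my-php-app . (the image name is arbitrary).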

Docker images are uploaded to hosting services called registries that host and distribute images. Registries include image repositories that you upload new versions of an image to. Registry services can be hosted publicly, like on Docker Hub, or on privately owned servers on-prem or in the cloud. This sounds confusing, but it essentially means that you can pull current or previous versions of a Docker image from a repository. If I wanted an image with PHP, for example, I could go to a registry with a PHP repository and pull whatever version of PHP I need to make my application work.
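For example, sharing an image through a registry and pulling a specific version back down could look like this (the repository and tag names are made up):

# Tag the locally built image for a Docker Hub repository
docker tag my-php-app myuser/my-php-app:1.0

# Push it to the registry...
docker push myuser/my-php-app:1.0

# ...and pull that exact version on any other machine that runs Docker
docker pull myuser/my-php-app:1.0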

Docker

Docker is the program that is going to allow you to build and share your images and then run your containers. It does this by setting up the Docker daemon which connects to the kernel of your host OS, and it is what allows you to create and manage your Docker images and containers. The other main piece of Docker is the Docker client, which is where you execute commands to communicate with the daemon.

Hopefully, these explanations cleared up some questions you may have had with Docker and the terminology used in the example solution. If there are still questions about anything, feel free to leave a comment and we will make sure to clear up any confusion.

What Are the Advantages of Using Docker Over Using Virtual Machines?

Now that you know the purpose of containers and Docker, a good question to ask is why use containers as opposed to VMs for isolating applications, since in the big picture they can play similar roles.

Containers are much more lightweight than Virtual Machines. If you think about it, this makes a lot of sense. When you run a Virtual Machine, you have the host OS, hypervisor, and multiple guest OSs for every VM you run, which, as you could imagine, takes up a lot of memory even if you are not utilizing everything that comes along with a whole operating system. This is a lot of wasted memory that could be used for running more applications and processes.

When you run a Docker container, you rid yourself of the hypervisor and replace it with the Docker daemon, and all your applications run on top of the host OS. Your containers have lightweight OS layers that only include the files necessary to execute the application properly, as opposed to the whole OS running on a VM. Sometimes the OSs running on VMs are gigabytes in size, while containers are usually around 100 MB or less. This is what makes containers much more lightweight than VMs, allowing you to run many more containers on one machine than you could VMs, as they take up comparatively fewer resources.

Additionally, the increased file sizes of the OS for a VM make it take minutes to boot and execute the application, while it takes containers only seconds to be up and running. This prevents a lot of waiting around and allows developers to test everything much quicker. All of this is just to say that the overhead for a VM is much higher than the overhead for a container, which causes things to be slower and take up more space. Using containers and Docker solves a lot of these problems. This is not to say that containers should replace VMs; containers and VMs are often used together and have different use cases. That, however, is for another blog post.

It is also important to note that while there are many advantages to using Docker containers over VMs, there are also disadvantages. The main disadvantage of using Docker containers over VMs are the security concerns that can arise due to the container sharing the kernel of the host OS. If multiple containers are running on one system and the security of the host kernel is compromised, it could mean bad news for all the other containers running on the kernel.

What Are the Benefits to Using Docker in a Broad Sense?

As mentioned before, containers are only as big as they need to be, causing low overhead. Sometimes when using a VM, we take a bus to cross the street by loading an entire OS to execute the same processes that can be done with containers. With containers, you can be sure that you will only be using the resources necessary to execute your processes.

As long as a machine has Docker installed, your images and containers will work perfectly fine as the dependencies are held inside the container. Our earlier example of an application working on a development environment but not a staging environment or production environment should no longer ever be the case. This is a huge benefit as you no longer need to figure out which dependencies are causing problems between environments each time you move the application.

The fact that Docker containers are supported by multi-cloud platforms is vital because it increases the portability and scalability of applications. Had these multi-cloud platforms not supported containers, all of the benefits from using containers would not be utilized; luckily, that is not the case as most if not all multi-cloud platforms include support for Docker and containers.

With Docker images, it is easy to integrate your SDLC with your CI/CD platforms (assuming they support Docker, which they should). This means you no longer need to worry that the environments will be incompatible with the Docker image.

In Summary

With Docker, you can create your containers that have everything you need to execute your application, making life easier in transferring your product between different environments. This allows you to save money in the meantime as you can cut down on your VM usage, which will save a lot of memory. Docker containers are supported by multi-cloud services and support CI/CD implementation, making it easier to transition to using containers with your current cloud services and CI/CD tools. Hopefully, from this post, you were able to gain a very basic understanding of what containers are, what Docker is, and how they can benefit you. If you have any unanswered questions, please leave them in the comments so we can clear up any confusion.

from DZone Cloud Zone

Azure’s Infrastructure-As-Code: ARM Templates, Validation, and Deployment Using Azure DevOps

What is ARM? 

An ARM template is a JSON file used to configure and deploy various Azure resources like VMs, AKS clusters, web apps, VNets, functions, and more to the Azure cloud. The basic idea behind Infrastructure-as-Code (IaC) is to provision infrastructure through automation rather than manual processes. In this Agile development world, even infrastructure code changes, so it needs to be committed to version control repositories and built/deployed using repeatable processes. IaC fits well into the Agile development process, removing manual intervention through automated validation and redeployment of resources using continuous integration and continuous deployment (CI/CD).

A sample ARM template is shown here:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "adminUsername": {
            "type": "string",
            "defaultValue": "ashuser",
            "metadata": {
                "description": "User name for the Virtual Machine."
            }
        },
        "adminPassword": {
            "type": "securestring",
            "defaultValue": "Ashpassword123",
            "metadata": {
                "description": "Password for the Virtual Machine."
            }
        },
        "osDiskSize": {
            "type": "int",
            "defaultValue": 1024
        }
    },
    "variables": {
        "location": "[resourceGroup().location]",
        "addressPrefix": "10.0.0.0/16",
        "subnetName": "Subnet",
        "subnetPrefix": "10.0.0.0/24",
        "storageAccountType": "Standard_LRS",
        "publicIPAddressType": "Dynamic",
        "publicIPAddressName": "[concat('my-',variables('uniqString'),'-pip')]",
        "nsgName": "[concat('my-',variables('uniqString'),'-nsg')]",
        "nicName": "[concat('my-',variables('uniqString'),'-nic')]",
        "vmName": "[concat('my-',variables('uniqString'),'-vm')]",
        "vmSize": "Standard_DS1_v2",
        "virtualNetworkName": "[concat('my-',variables('uniqString'),'-vnet')]",
        "uniqString": "[toLower(substring(uniqueString(resourceGroup().id), 0,5))]",
        "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
        "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
        "windowsImage": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2012-R2-Datacenter",
            "version": "latest"
        }
    },
    "resources": [
        {
            "apiVersion": "2017-04-01",
            "type": "Microsoft.Network/virtualNetworks",
            "name": "[variables('virtualNetworkName')]",
            "location": "[variables('location')]",
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "[variables('addressPrefix')]"
                    ]
                },
                "subnets": [
                    {
                        "name": "[variables('subnetName')]",
                        "properties": {
                            "addressPrefix": "[variables('subnetPrefix')]"
                        }
                    }
                ]
            }
        }
    ]
}
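Before wiring validation into a pipeline, you can also sanity-check a template locally with the Azure CLI. This is just a sketch; the resource group name and file names are placeholders:

# Create a throwaway resource group and validate the template against it
az group create --name arm-validation-rg --location eastus
az group deployment validate \
  --resource-group arm-validation-rg \
  --template-file main-template.json \
  --parameters @main-template.parameters.json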

Use the azure-pipelines.yml shown in full at the end of this post at the root of your repository and adjust the variables to work for your situation. This YAML file will create a build pipeline for your project. Modify this file and commit it back to your repository so that your build/deploy processes are also treated as code (Process-as-Code).

Follow the instructions to create a build pipeline to validate the ARM templates.

Steps

  • Select an Azure subscription to validate ARM templates in a resource group.
  • Validate an Azure Resource Manager (ARM) template for the Basic, Standard, and Premium solutions against a resource group. You can also start, stop, delete, or deallocate all Virtual Machines (VMs) in a resource group.
  • The build pipeline references secret variables such as the resource group name, passwords, tenant ID, etc.
  • Pass template parameters in “Override template parameters.”
  • The deployment mode should be “Validation” for validating ARM templates.
  • Validation mode lets you find problems with the template before creating actual resources.

pool:
  name: Hosted Ubuntu 1604

variables:
  rgname: 'arm-resource-group'

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'standard-template-validation'
  inputs:
    azureSubscription: 'subscription name(XXXX-XXXX-XXXX-XXXX-XXXX)'
    resourceGroupName: '$(rgname)'
    location: 'East US'
    csmFile: 'main-template.json'
    csmParametersFile: 'main-template.parameters.json'
    overrideParameters: '-solutionType "Project-Edison" -deploymenttype Standard -geo-paired-region "EastUS2" -signalRlocation eastus2 -acrDeploymentLocation "CanadaCentral" -omsWorkspaceRegion "southeastasia" -appInsightsLocation "eastus" -appInsightsLocationDr "southcentralus" -tenantId $(tid) -botAdClientId "XXXXXXXXXX" -adObjectId "XXXX-XXXX-XXX-XXX-XXXX" -adClientSecret "$(adsp)" -azureAccountName "[email protected]" -azurePassword "[email protected]" -adminName "adminuser" -sessionId "3791180c-24c5-4290-8459-a454feee90ab" -vmUsername "adminuser" -vmPassword "[email protected]" -aksServicePrincipalClientId "XXX-XXXXa63c-XXXX" -aksServicePrincipalClientSecret $(ksp) -signalrCapacity 1 -dockerVM Yes -githuburl "$giturl" -azureAdPreviewModuleUri "https://github.com/raw/dev/code/Azu.zip" -cosmosdbModuleUri "https://github.com/raw/dev/code.zip" -siteName "test"'
    deploymentMode: Validation
  condition: always()

- task: AzureCLI@1
  displayName: 'Deleting-validation-RG'
  inputs:
    azureSubscription: 'Subscription name (XXXX-XXXX-XXXX-XXXX-XXX)'
    scriptLocation: inlineScript
    inlineScript: 'az group delete -n $(rgname) --yes'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: ARM_templates'
  inputs:
    PathtoPublish: /home/arm
    ArtifactName: 'ARM_templates'

  • After everything is done, save and queue the build, and get the logs for the success or failure of the build.

  • Each task will generate a build log for the corresponding job.

Deploy an ARM Template to a Resource Group (CD)

In this example, we show three different solutions (basic, standard, and premium) being deployed to the Azure cloud. First, a description of the solutions will be given, then the actual deployment pipeline will be discussed.

  • The Basic solution has all core components in an ARM template. The Basic pricing tier works well for development/test.

  • The Standard solution has all the Basic solution features plus monitoring and HA (High Availability) features (availability sets, availability zones, paired zones) to make the application redundant at every level of failure, from an individual VM to the entire region.

  • The Premium solution has all the Standard solution features plus automated disaster recovery in another region. Each Azure region is paired with another region within the same geography, together making a regional pair. To protect an application against a regional outage, deploy the application across multiple regions, using Traffic Manager to distribute internet traffic to the different regions.

Deployment Pipeline (CD)

Add artifacts from the source (the build) to deploy a release pipeline through multiple stages. Choose the build created above for validation as the source type.

Various stages can be added using the graphical tools in Azure DevOps. Note the parallel nature of these pipelines for each SKU.

ARM Template deployment (Basic, Standard, Premium):

  • Select an Azure subscription to deploy ARM templates (Basic, Standard, Premium) in a resource group.

  • The deployment mode should be Incremental for deploying ARM templates to a resource group.

  • Incremental mode handles deployments as incremental updates to the resource group. It leaves unchanged any resources that exist in the resource group but are not specified in the template.

variables:
  basic_rg: 's_basic1'
  location: 'westus'

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'Basic Solution   Deployment'
  inputs:
    azureSubscription: 'CICD (XXXX-XXXX-XXXX)'
    resourceGroupName: '$(basic_rg)'
    location: '$(location)'
    csmFile: '$(System.DefaultWorkingDirectory)/_SnS_Build_CI/ARM_drop/maintemplate.json'
    overrideParameters: '-solutionType Basic-Solution -deploymentPrefix tere -cognitiveServicesLocation "eastus" -omsLocation "eastus" -appInsightsLocation "westus2" -locationDr westcentralus -trafficManagerName "NA" -b2cApplicationId XXXX-XXXX-XXX -b2cApplicationIdDR "NA" -b2cPolicy B2C_1_SignUp-In -b2cTenant onmicrosoft.com -b2cScope https://onmicrosoft.com/ -b2cScopeDR "NA" -videoIndexerKey "XXXX-XXXX-XXXX" -keyVaultName "NA" -keyVaultwebAppSecretName "NA" -keyVaultResourceGroup "NA" -webAppCertificatethumbPrint "NA"'
    deploymentOutputs: 'output_variables'
variables:
  basic_rg: 'basic1'

steps:
- task: AzureCLI@1
  displayName: 'Azure CLI '
  inputs:
    azureSubscription: 'CICD (XXXX-XXXX-XXXX-XXXX-XXXX)'
    scriptLocation: inlineScript
    inlineScript: 'az group delete -n $(basic_rg) --yes'

variables:
  standard_rg: 'ns_standard'

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'Standard solution '
  inputs:
    azureSubscription: 'CICD (XXX-XXXX-XXXX-XXX-XXX)'
    resourceGroupName: '$(standard_rg)'
    location: 'East US 2'
    csmFile: '$(System.DefaultWorkingDirectory)/__Build_CI/ARM_drop/main-template.json'
    overrideParameters: '-solutionType Standard-Solution -deploymentPrefix tere -cognitiveServicesLocation "eastus" -omsLocation "eastus" -appInsightsLocation "westus2" -locationDr westcentralus -trafficManagerName "traficmanagersns" -b2cApplicationId "XXXXXX" -b2cApplicationIdDR "NA" -b2cPolicy "B2C_1_b2csignup_in" -b2cTenant "snsiot.onmicrosoft.com" -b2cScope "https://abac.com " -b2cScopeDR "NA" -videoIndexerKey "XXXX-XXX-XXXX-XXX" -keyVaultName "NA" -keyVaultwebAppSecretName "NA" -keyVaultResourceGroup "NA" -webAppCertificatethumbPrint "NA"'

The standard solution will have two regions

  • Primary Region(Deployment)
  • Secondary Region (Re-Deployment)
  • After deploying the ARM templates, delete the resource group by using the following Azure CLI task.
variables:
  standard_rg: 'ns_standard'

steps:
- task: AzureCLI@1
  displayName: 'Azure CLI '
  inputs:
    azureSubscription: 'CICD (XXX-XXXX-XXX-XXX)'
    scriptLocation: inlineScript
    inlineScript: 'az group delete -n $(standard_rg) --yes'

variables:
  premium_rg: 'ns_premium'

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'premium solution'
  inputs:
    azureSubscription: 'CICD (XXXX-XXXX-XXXX-XXXX)'
    resourceGroupName: '$(premium_rg)'
    location: '$(location)'
    csmFile: '$(System.DefaultWorkingDirectory)/_SnS_Build_CI/ARM_drop/main-template.json'
    overrideParameters: '-solutionType Premium-Solution -deploymentPrefix "security" -cognitiveServicesLocation "eastus" -omsLocation "eastus" -appInsightsLocation "westus2" -locationDr westcentralus -trafficManagerName traficmanager -b2cApplicationId XXXX-XXXX-XXXX-XXX -b2cApplicationIdDR https://abac.com -b2cPolicy B2C_1_b2csignup_in -b2cTenant abc.onmicrosoft.com -b2cScope https://abc.com   -b2cScopeDR https://demo1  -videoIndexerKey XXXXXXX -keyVaultName "NA" -keyVaultwebAppSecretName "NA" -keyVaultResourceGroup "NA" -webAppCertificatethumbPrint "NA"'

After deploying the ARM templates, delete the resource group by using the following Azure CLI task.

steps:
- task: AzureCLI@1
  displayName: 'Azure CLI '
  inputs:
    azureSubscription: 'CICD (XXX-XXXX-XXX-XXX)'
    scriptLocation: inlineScript
    inlineScript: 'az group delete -n $(premium_rg) --yes'

After everything is done, save and queue the release, and get the logs for success or failure of the release.

Each task will generate a deployment log for the corresponding job.

An azure-pipelines.yml is shown here:

# Build and release (CI/CD) the pipelines using a yaml editor.
# The build pipeline (CI) validates the ARM templates:
pool:
  name: Hosted Ubuntu 1604
variables:
  rgname: 'abc'
steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'Basic solution-template-validation'
  inputs:
    azureSubscription: 'subscription name(XXXX-XXXX-XXXX-XXXX-XXXX)'
    resourceGroupName: '$(rgname)'
    location: 'East US'
    csmFile: 'main-template.json'
    csmParametersFile: 'main-template.parameters.json'
    overrideParameters: '-solutionType "Project-Edison" -deploymenttype Standard -geo-paired-region "EastUS2" -signalRlocation eastus2 -acrDeploymentLocation "CanadaCentral" -omsWorkspaceRegion "southeastasia" -appInsightsLocation "eastus" -appInsightsLocationDr "southcentralus" -tenantId $(tid) -botAdClientId "XXXXXXXXXX" -adObjectId "XXXX-XXXX-XXX-XXX-XXXX" -adClientSecret "$(adsp)" -azureAccountName "[email protected]" -azurePassword "[email protected]" -adminName "adminuser" -sessionId "3791180c-24c5-4290-8459-a454feee90ab" -vmUsername "adminuser" -vmPassword "[email protected]" -aksServicePrincipalClientId "XXX-XXXXa63c-XXXX" -aksServicePrincipalClientSecret $(ksp) -signalrCapacity 1 -dockerVM Yes -githuburl "$giturl" -azureAdPreviewModuleUri "https://github.com/raw/dev/code/Azu.zip" -cosmosdbModuleUri "https://github.com/raw/dev/code.zip" -siteName "test"'
    deploymentMode: Validation
  condition: always()

- task: AzureCLI@1
  displayName: 'Deleting-validation-RG'
  inputs:
    azureSubscription: 'Subscription name (XXXX-XXXX-XXXX-XXXX-XXX)'
    scriptLocation: inlineScript
    inlineScript: 'az group delete -n $(rgname) --yes'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: ARM_templates'
  inputs:
    PathtoPublish: /home/arm
    ArtifactName: 'ARM_templates'

# The release (CD) pipeline deploys resources like VMs, AKS clusters, and web apps into the Azure cloud.
variables:
  basic_rg: 's_basic1'
  location: 'westus'

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'Basic Solution   Deployment'
  inputs:
    azureSubscription: 'CICD (XXXX-XXXX-XXXX)'
    resourceGroupName: '$(basic_rg)'
    location: '$(location)'
    csmFile: '$(System.DefaultWorkingDirectory)/_SnS_Build_CI/ARM_drop/maintemplate.json'
    overrideParameters: '-solutionType Basic-Solution -deploymentPrefix tere -cognitiveServicesLocation "eastus" -omsLocation "eastus" -appInsightsLocation "westus2" -locationDr westcentralus -trafficManagerName "NA" -b2cApplicationId XXXX-XXXX-XXX -b2cApplicationIdDR "NA" -b2cPolicy B2C_1_SignUp-In -b2cTenant onmicrosoft.com -b2cScope https://onmicrosoft.com/ -b2cScopeDR "NA" -videoIndexerKey "XXXX-XXXX-XXXX" -keyVaultName "NA" -keyVaultwebAppSecretName "NA" -keyVaultResourceGroup "NA" -webAppCertificatethumbPrint "NA"'
    deploymentOutputs: 'output_variables'
variables:
  basic_rg: 'basic1'

steps:
- task: AzureCLI@1
  displayName: 'Azure CLI '
  inputs:
    azureSubscription: 'CICD (XXXX-XXXX-XXXX-XXXX-XXXX)'
    scriptLocation: inlineScript
    inlineScript: 'az group delete -n $(basic_rg) --yes'

from DZone Cloud Zone

The Key to Multi-Cloud Success

In the era of cloud-based architectures, companies have implemented multiple cloud platforms but have yet to reap the full benefits. Whether it’s Amazon Web Services (AWS), Google Cloud, or Microsoft Azure (or some combination thereof), a recent Forrester study found that nearly 86 percent of enterprises have incorporated a multi-cloud strategy. Not only does this strategy take companies out of the business of hosting their own applications, it leads to benefits like avoiding vendor lock-in, reduced costs and optimized performance.

Photo by chuttersnap on Unsplash

While it’s clear that a multi-cloud strategy offers many benefits and flexibility for an organization, there are also more moving parts to track (with the added challenge of hybrid, multi-generational infrastructure), making it all the more critical that businesses have a solid monitoring strategy in place. Luckily, the collection of monitoring data is essentially a solved problem; instead, businesses today are faced with an endless cycle of “day two” operational challenges, such as avoiding downtime and maintaining visibility.

As Andreessen Horowitz so aptly put it, software is eating the world; every company is becoming (or has become) a software company. Software is not only ubiquitous, it’s powerful, enabling us to solve a wide range of problems. This emphasis on software within companies has led to the emergence of multiple cloud providers such as Amazon, Google, and Microsoft — who all recognized the trend and seized the opportunity by building cloud computing platforms.

Because companies no longer have to build their own data centers, they can focus on their core business of delivering value to their customers. With the public cloud, they’re able to achieve far greater time-to-value than they would if left to build their own data centers and cloud platforms.

Now, companies can build a portable software stack that is DevOps-driven, free from vendor lock-in and capable of delivering a superior set of capabilities than can be gained from a single provider. Although we’re now consuming infrastructure from cloud providers (which has its own inherent risks), we have better tooling to enable multi-cloud strategies (e.g., from companies like HashiCorp), minimizing the risk of being reliant on any one provider for cloud services.

The Missing Piece: Multi-Cloud Monitoring

In this software-dependent world, availability is critical, and downtime is not only expensive but damaging to business reputation. As a result, monitoring systems and applications has become a core competency, crucial to business operations. To fully reap the rewards of a multi-cloud strategy and thrive in this cloud-based world, implementing a unified monitoring solution is critical for success. In addition to the existing benefits multi-cloud offers, a unified solution gives operators constant and complete visibility into their infrastructure, applications, and operations.

Surviving as A Modern Enterprise

Improved operational visibility through monitoring is often cited as a top priority among Chief Information Officers (CIOs) and senior operations leadership, and good monitoring is a staple of high-performing teams. Yet too often it’s implemented as an afterthought, in reaction to changes in the mission-critical systems that power businesses. When this happens, organizations can struggle to reap the benefits of multi-cloud because they lack sufficient visibility to detect and avoid problems, or to recover from expensive downtime.

Further complicating this underlying challenge is the fact that ephemeral infrastructure platforms such as Kubernetes are the new normal, while digital transformation, cloud migration, DevOps, containerization, and other initiatives are compelling movements in the modern enterprise. Although they vary in scope and overlap or intersect in practice, they are unified in purpose: to deliver increased organizational velocity, empowering organizations to ship more changes, faster. While a boon to business initiatives and developer productivity, these practices can exponentially increase the number and duration of “day two” operational challenges. Delaying adoption of the solution to these challenges only increases risk exposure and cost.

Future Proof Your Monitoring

According to Gartner, the number of cloud-managed service providers is expected to triple by 2020. While this is good news for analysts, investors, and operators alike — everyone (except Amazon?) benefits from a competitive market — it suggests that the multi-cloud trend will only become more diverse moving forward. Given the already complex landscape and this forecast, it’s impractical to expect turn-key monitoring solutions to provide sufficient coverage — a different approach is needed.

The good news is that the solution is surprisingly simple: treat monitoring and observability like we do the rest of our DevOps toolchain — as a workflow. When containerization gained in popularity and we incorporated Docker and Kubernetes into our multi-cloud strategy, we didn’t have to replace our CI pipelines, we simply shipped containers instead of RPMs, essentially making our CI tools future proof.

For monitoring and observability, that future-proof solution is the monitoring event pipeline. At the end of the day, there are only so many mechanisms for observing systems (APM and observability client libraries, Prometheus-style /metrics or /healthz endpoints, logs, and good old-fashioned service health checks are a few great examples); once we start to think about these as workflows that can be automated via monitoring pipelines, we’re empowered to continuously adapt and thrive (maintaining visibility and avoiding downtime) in the ever-evolving and increasingly multitudinous cloud world of IT infrastructure.

For more on monitoring workflows for multi-cloud environments, don’t miss Caleb’s Sensu Summit 2019 talk. Details + registration info here.

from DZone Cloud Zone

Migrating to the Cloud? Here’s Why Automation is Essential

Nearly 60 percent of North American enterprises now rely on cloud platforms, according to a recent Forrester survey. The benefits of cloud are well known, but the question marks that remain have left a significant number of organizations hesitant to make the jump.

Several factors can slow down a company’s decision to adopt the cloud. Many organizations have preconceived notions that cloud adoption comes with security risks and threats of data leakage. In reality, on-premises is probably just as risky on those fronts.

Plus, after a company has come to the decision to adopt the cloud, it can take a lot of time using traditional methods to design, develop and deploy the data infrastructure needed and migrate the desired data. Luckily, automation software can lessen the work.

Fast-Tracking an Organization’s Journey to the Cloud

With automation software, moving to the cloud is easier than ever before. Companies leveraging automation software can increase the success of data infrastructure cloud migration projects and shorten the time to value of that investment. Here’s how automation can help enterprises make the move to the cloud:

1. Helps Provide the Framework for An Agile Environment

As time evolves, so do data and analytics requirements. Data warehousing teams moving to the cloud have a fresh opportunity to re-examine current processes and improve how they will work given this new landscape. The ability to address changing requirements and new business needs more quickly without disrupting functions is desirable to every organization. This requires agility in terms of multiple iteration support. Data infrastructure automation enables IT teams to be more efficient and effective, creating an agile data warehousing environment.

2. Shrinks the Timeline to The Cloud

Before automation, migrating to the cloud was a long and grueling manual process that required a high-level of precision and was prone to human error. The conventional way has been overcomplicated and inefficient. Automation software provides data warehousing teams with full life cycle management from design and development through deployment, operations and even documentation – resulting in accelerated time to production. Leveraging automation within migration efforts can also ensure you are able to preserve the structure and integrity of migrated data. With automation, data warehousing processes become streamlined and more efficient and migration to the cloud can be accomplished much more quickly.

3. Decreases Costs by Getting Time Back

Moving to the cloud can be a costly investment both financially and otherwise. According to a recent Forrester report, labor costs can make up half of cloud migration projects. Since automation better facilitates and quickens the process, organizations can eliminate some of the more repetitive tasks needed to move to the cloud and reduce these costs. Existing data warehousing teams embarking on cloud migration projects are often able to then tackle the effort without the burden of hiring additional staff. In addition, automation delivers consistency and universal team guidelines, resulting in less chance of human error or cost-accruing delays down the line in understanding and/or troubleshooting a previous developer’s approach. Automation frees up staff time to focus on the higher-value, more strategic aspects of cloud data warehousing.

Adding Value to Your Cloud Journey

Automation is helping IT teams globally to speed up the data infrastructure lifecycle by up to 80 percent to better support data analytics needs. With automation, the data infrastructure feeding analytics becomes more consistent and reliable, giving the business improved access to timely enterprise data to better operate. In turn, analytics can power more efficient workflows and higher satisfaction rates for an organization’s customer base. Furthermore, being able to evolve the organization’s data infrastructure to better respond to new business needs becomes less painful and easier to achieve.

With the automation benefits of faster project delivery, lower risks and costs, and an increase in team productivity, those who are ready to pursue a move to the cloud no longer have to fear not being successful in a cloud migration project. Additionally, teams can rely on metadata-based automation to provide extra peace of mind that they will also be able to easily move, regenerate and optimize their cloud data warehouse for new data platforms if needed in the future. 

Whether organizations continue to rely on on-premises data warehouse platforms, migrate to the cloud, or manage a hybrid environment of both for a while, teams can use automation to deliver more to the business faster. Data infrastructure automation allows data warehousing teams to be more efficient, add more value, and innovate.

from DZone Cloud Zone

13 Reasons Why You Should Use Heroku in Your Next Project

Heroku is a Platform-as-a-Service (PaaS) that exists on the cloud, allowing software developers to build and run complex web applications without having to worry about the underlying hardware or the networking aspects of it.

I’ve been using Heroku for more than three years for personal projects (small and big ones), and I truly recommend it for several reasons that I will explain below. Of course, this tool isn’t a silver bullet for cloud deployment; you always need to know your project’s needs before choosing a tool to meet them. Also, this is not a promoted article by Salesforce; I’m doing it for free because this tool really helps me almost every day to get things done in an easy way.

Check out below the 13 reasons why:

1 .  User-Friendly Tool

Since its first use, Heroku has proven to be an easy tool to adopt, even for those who haven’t fully mastered cloud tooling and configuration. With a well-defined dashboard, you can perform tasks such as managing, deploying, and tracking metrics without major difficulty. The UX of the tool works for both technical people and end users without problems.

2 .  No Infrastructure Needed

As a container-based tool in the cloud, it supports many programming languages, offers a great variety of add-ons, and handles deployment provisioning for you, leaving developers free to focus on their project without having to worry about infrastructure details such as which OS version to use, which libs to install, or even how to configure the firewall or hardware. However, this abstraction comes with a price: the average cost of compute is just above the big players like AWS and Google.

3 .  Useful DevCenter

I’m particularly a big fan of the DevCenter that Heroku provides. During my first few uses of Heroku, I ran into small problems, whether with deployment, configuration, or even how to navigate the UI of the tool. This is quite common when you’re trying a new tool, but I’ve always been able to find an answer in the platform’s DevCenter, where the configuration steps are always well-written and objective, and always explain what was done in the background. In fact, there is nothing like good documentation to keep us using a tool, because we must admit that when we have problems (and we will have them), we will certainly find some solution there, giving the tool an extra boost of confidence. If you want to look further, you can access it here.

4 .  Great Community

The Heroku community is still small, compared to other giants like Azure and AWS. However, it is not the quantity that makes the tool, it is the quality and community, and all the people who contribute to the tool are highly active in it, whether it is divulging and sharing information, creating tutorials, or just using it in their day to day. Especially on Twitter, the Heroku community is very active, like Chris Castle (@crc) who will always be willing to help and talk about the tool. You can also always enjoy the tool’s new features here and here.

5 .  Heroku Is Not a Big Fish, and That’s Good

When we come to cloud providers, the default these days is to use AWS, Azure, or Google Cloud. When a team starts thinking about the cloud, one of these big players always appears in the suggestions. But why not consider another tool? Will you really need that entire AWS infrastructure for your project? Do you need to use GCP with Kubernetes for your small application? You really must weigh questions like these against the time your engineering team will need to learn a new tool, or even the cost of hiring an AWS expert or Google specialist for a simple production infrastructure. Sometimes the big names aren’t the best solution for your problem.

6. The CLI

Heroku has an incredible command-line interface to help us manage and control our applications. Commands like heroku logs and heroku ps will become your best friends once you start using them. It’s really simple to install and start playing with it. Go ahead and create your first application using the Heroku CLI.
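
As a minimal sketch of that first deployment (assuming the CLI is installed and the project is already a Git repository; the app name below is made up), the whole flow fits in a handful of commands:

    heroku login                  # authenticate against your Heroku account
    heroku create my-todo-app     # create the app; the name is hypothetical
    git push heroku master        # deploy the current branch
    heroku open                   # open the running app in the browser
    heroku logs --tail            # stream the application logs
    heroku ps                     # list the dynos currently running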

7. Multi-Language Support

Heroku is really agnostic about languages. The platform supports more than eight languages out of the box, including Node, Java, and Python.

“But since I don’t use any of those languages, I will choose another tool.”

Wait. If you use, for instance, C, Heroku has what it calls a “buildpack,” which allows you to build and deploy virtually any other language on the platform. Even if you don’t find a buildpack for your purpose, you can customize and extend an existing one.
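
Switching or adding a buildpack is a one-line operation; in the sketch below, the GitHub URL is only a placeholder for whichever community buildpack you pick:

    heroku buildpacks                                        # list the buildpacks configured for the app
    heroku buildpacks:set heroku/nodejs                      # use an officially supported buildpack
    heroku buildpacks:add https://github.com/user/buildpack  # add a third-party buildpack (placeholder URL)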

8. Supports Several Databases and Data Stores

Heroku allows developers to choose from several databases and data stores according to the precise needs of each application. Developers can take advantage of the PostgreSQL database-as-a-service to let the application access data quickly and keep the data secure. At the same time, they can take advantage of specific add-ons to work with widely used databases and data stores like MySQL, MongoDB, and Redis. The add-ons make it easier for developers to store data, manage data stores, and monitor data usage.
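
As a rough illustration (plan names change over time, so treat the one below as an example), provisioning Postgres is a single command, and Heroku exposes the connection string to your app through the DATABASE_URL config var:

    heroku addons:create heroku-postgresql:hobby-dev   # provision a free-tier Postgres instance
    heroku config:get DATABASE_URL                     # the connection string injected for the app
    heroku pg:info                                     # inspect the database status and plan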

9. Huge Toolbelt of Add-Ons

As I said above, we can take advantage of the amazing add-ons and plugins that Heroku provides out of the box. You simply select the add-on and link it to your application, and the platform does the rest. Of course, this doesn’t work for all apps and plugins, but it is far easier than configuring everything by hand.
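
The workflow is the same for most add-ons; for example, attaching Redis (again, the plan name is just an example) looks like this:

    heroku addons                                  # list the add-ons attached to the app
    heroku addons:create heroku-redis:hobby-dev    # attach a Redis instance
    heroku addons:open heroku-redis                # jump to the add-on's dashboard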

10. Scale-Ready

The tool has some built-in commands to help you scale easily and be ready for eventual traffic and resource changes. Here are some basic strategies for scaling on Heroku (see the Procfile sketch after this list):

  • Horizontal Scaling: If you see increased request queue times in Scout or New Relic, you need to make your app faster or add more dynos. As soon as you’re using more than one, automate it.
  • Vertical Scaling: Because of Heroku’s random routing, you need concurrency within a single dyno. This means running more web processes, which consume more memory and may require a larger dyno type.
  • Process Types: You’re not limited to just “web” and “worker” process types in your Procfile. Consider multiple worker process types that pull from different job queues.
  • App Instances: Heroku Pipelines make it relatively easy to deploy a single codebase to multiple production apps. This can be helpful to isolate your main app traffic from traffic to your API or admin endpoints, for example. Heroku will route traffic to the correct app based on the requested subdomain and the custom domains configured for each app.
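
To make the process-type idea concrete, a Procfile might declare several process types (the names, jar paths, and queue names below are made up for illustration):

    web: java -jar target/app.jar
    worker: java -jar target/worker.jar emails
    reports: java -jar target/worker.jar reports

Each type can then be scaled independently, horizontally with ps:scale and vertically with ps:resize:

    heroku ps:scale web=3 worker=2 reports=1   # horizontal scaling per process type
    heroku ps:resize web=standard-2x           # vertical scaling: move web dynos to a larger size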

11. Good Analytics/Reports Module

When you use the Hobby plan, you have access to a user-friendly panel that shows the status of your application. There you can see metrics about usage, throughput, and memory, as well as events that happen (like Heroku’s overall status and application-specific details, such as the dreaded memory-quota-exceeded problem). As I said, the sad part is that it’s only available starting with the Hobby plan, but for $7 a month, it’s worth it.

12. Autoconfiguration and “Convention Over Configuration” for Most Features

The focus here is to be as minimally invasive as possible. For instance, if you deploy a Spring Boot application, you should not need any configuration beyond what already exists in your project. If some specific configuration is needed (like a particular port or Java version), you can create a Procfile to cover your application’s needs. Likewise, most network configuration is already provided, and here again you can configure the details if needed.
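
As a minimal sketch for a Spring Boot app (the jar name below is simply whatever your build produces), the Procfile is a single line binding the app to the port Heroku assigns:

    web: java -Dserver.port=$PORT -jar target/my-app.jar

and, if you need to pin the Java version, a system.properties file in the project root does it:

    java.runtime.version=11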

13. Deploy from Different Sources

Heroku’s main deployment option is Git-based. You can link your app directly to GitHub and enable automatic deployment every time you push code to master (Heroku will be listening on this branch). It also has a built-in option (Heroku Git) that you can use as your repository as well. Alongside these options, you can deploy Docker images through Heroku’s Container Registry, which is great for projects that already have an image.
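
For the Docker route, the Container Registry flow is roughly the following (assuming a Dockerfile at the project root):

    heroku container:login          # authenticate Docker against Heroku's registry
    heroku container:push web       # build the Dockerfile and push the image as the web process
    heroku container:release web    # release the pushed image to the app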

These are some of the reasons why Heroku is my first option when considering deployment to the cloud. There are trade-offs (as with any tool) that can be discussed in the future or in the comments. Comments and suggestions are appreciated!

from DZone Cloud Zone

An Introduction to Edge Computing

The Internet of Things (IoT) is taking the world by storm; it has become one of the most influential buzzwords not only in the tech sector but also in many other businesses. From farms and factories to smart cities and homes, IoT technology is a continually expanding set of connected systems and devices. According to Statista, the installed base of IoT devices is forecast to grow to almost 31 billion worldwide. As a result, cloud computing will become an increasingly dominant trend as the enormous amount of data generated by billions of connected IoT devices needs to be stored for processing and retrieval. The two technologies, IoT and cloud computing, are interconnected, with each providing the other a platform for success.

In a traditional IoT architecture, data is collected from geographically dispersed sensors and transported to a central repository, where it is combined and processed collectively. By increasing efficiency, scalability, and performance in everyday tasks, the integration of cloud computing with the Internet of Things enables enterprises to make better business decisions faster and respond to changing market conditions in real time.

IoT connections are expected to boom in the coming years, with Cisco Systems projecting 13.7 billion connections by 2021, thereby increasing the need for data center and cloud resources. By streamlining the unprecedented flow of traffic from all the connected devices, aggregating data, and extracting actionable insights, IoT/cloud convergence proves to be the perfect combination for a data-driven world.

While cloud computing has made it possible to process massive amounts of data, it is not an ideal choice for all applications and use cases. The huge amount of data being sent back and forth between the sensors on the front lines and the servers clogs network bandwidth, slowing down response times. The answer to these limitations of traditional cloud computing infrastructure is edge computing.

Unlike traditional cloud architecture, which follows a centralized process, edge computing decentralizes most of the processing by pushing it out to the edge devices, closer to the end user. Since storage capacity and processing power are decentralized, it provides more precise results for IoT deployments, making IoT devices easier to operate and manage. Edge computing ensures low-latency access, reduced bandwidth consumption, offline availability, and local machine learning (ML) inference.

The low latency and faster real-time analysis of edge computing have a number of applications across sectors such as automotive, consumer electronics, energy, health care, and more. Autonomous vehicles are a strong case in point: data needs to be collected from the surrounding environment and the cloud to make decisions quickly and safely. Patterns in sensor data must be detected, stored, and transferred quickly to aid real-time decisions at local nodes. The decentralized architecture of edge computing removes the latency in communicating critical data, thereby helping ensure safety.

According to the CB Insights Market Sizing tool, the global edge computing market is estimated to reach $6.72 billion by 2022. As more and more connected devices come online, tech giants are investing heavily in sophisticated edge computing strategies. Amazon, Microsoft, and Google have all forayed into edge computing. Amazon got ahead in the emerging space by launching its edge platform, AWS Greengrass, in 2017, while Microsoft jumped on the bandwagon with its Azure IoT Edge solution last year. Google also joined the race by rolling out two new products, integrated software (Cloud IoT Edge) and a custom hardware stack (Edge TPU), in order to leverage data directly at the edge.

To summarize, edge computing is not here to replace cloud computing but to complement it. Since edge computing technology is still in its infancy, challenges are likely to arise. But with the increasing demand for edge devices and applications, there will be more opportunities for enterprises to test and deploy this technology across various verticals.

from DZone Cloud Zone

5 Lessons from the Google Cloud Outage

The Google Cloud outage was yet another reality check for enterprises and businesses, raising serious concerns over the reliability of the cloud and the vulnerabilities in the cloud’s architecture. The incident had a huge impact on performance and SLAs. It was a textbook example of what could go wrong if your application depends wholly on a single cloud provider.

Catchpoint detected increased page load and response times along with a dip in availability across some popular websites in the e-commerce, media, and gaming industries. Multiple Google services, including G Suite, Google Compute Engine, and Google Nest, as well as third-party services such as Snapchat, Discord, and Shopify, suffered during the outage. The issue occurred across different ISPs in different locations.

Read the complete breakdown of the incident in our blog post here or watch the outage analysis webinar here.

Building the Right Cloud Monitoring Strategy

The complexity of the internet makes it all the more unpredictable, so incidents such as the Google Cloud outage are inevitable. These incidents provide insight into what could be done better. They are an opportunity to re-examine the existing processes and strategies within your organization so you are well prepared in the face of another sudden outage. Here are some lessons to remember:

1. Do Not Trust Blindly

No matter how popular, resourceful, or process-driven your vendor is, expect failures and lapses. If your digital services need to be 100% reachable and reliable, your architecture and strategy must support that goal. Once you have architected and built for such a level of reliability, the health of your application relies on how well you track performance and the processes you have in place to manage major incidents.

2. Avoid Putting All Eggs in The Same Basket

If you deploy all your services, support, and collaboration/monitoring tools on a single cloud provider or if your connectivity is through a single ISP, then it’s a recipe for disaster. For example, if the application is hosted on a specific cloud service and your monitoring tools are also running on the same cloud service, you will not be able to receive alerts or troubleshoot any issues.

After a few failures, your teams might start suffering from “Fear of Missing Outages” (FOMO). Build application resiliency by ensuring that critical services, monitoring tools, and communication tools are on different platforms and share no single points of failure.

3. Invest in Monitoring Tools

If you are relying on the vendor’s status page or Twitter to detect outages or bad user experience, then you are gambling with end-user experience and your brand image. You need to actively monitor from outside the cloud infrastructure. Without dedicated synthetic or black-box monitoring, you cannot baseline performance, nor can you prepare for sudden outages.

The ideal monitoring strategy gives you end-to-end visibility; you can track the health of your IT infrastructure and network. It should allow you to triangulate issues across the various providers your digital services rely on, such as DNS providers, CDNs, partner/vendor APIs, cloud providers, etc. The performance and reachability data can provide useful insights that will help you optimize the application, evaluate the vendors you rely on, and reduce the risk of any negative impact on end-user experience.

4. Test and Test Again

The single biggest cause of outages is configuration changes gone haywire. More than 90% of the time, outages can be traced to code or configuration changes that were not well tested or were implemented incorrectly. So it is important to follow a stringent process when deploying code or configuration changes, one that includes steps to validate whether the changes have the desired effect and what to do in case they go bad.

Conduct robust testing of any configuration change in both QA environments and limited production environments to find errors or performance issues. It is recommended to implement such changes during the weekend or late at night, whenever your service has the least business-impacting traffic, so that you minimize the effect on end-user experience.

5. Monitor SLAs and Hold Vendors Accountable

When a vendor scrambles to fix an outage, they are under pressure trying to find a resolution before SLAs are breached. You do not want to stress out vendors in the middle of an outage; they know what it means. However, ongoing communication with the vendor and alignment over what happened and how to recover is key to ensuring trust on both sides.

The service provider is expected to compensate for SLA breaches. However, it is your responsibility to bring up SLA breaches with a vendor. But if all the monitoring data comes from the same vantage point, then you have no way to validate its veracity and determine the exact compensation needed. Monitoring from multiple vantage points (ISPs, network types, etc.) gives you unbiased performance data. You will then be able to hold your vendor accountable for every second that impacted end-user experience.

Run Your Own Show

Digital transformation has forced us to forego the traditional application architecture where we had control and visibility over critical components in the delivery chain. The current scenario outsources most of these critical components to cloud providers. This shift to the cloud results in limited control and reduced visibility, which puts end-user experience at a greater risk.

So, in a cloud environment, it is the cloud provider that runs the show. You can only sit back and hope the provider upholds the SLA. But is this the best way to manage your application performance? Certainly not. The risk involved in this approach cannot be ignored — it impacts everything from your revenue, efficiency, and productivity to the brand’s reputation itself.

On average, IT teams spend 46 hours per month handling outages and performance issues. This is due to ineffective and siloed monitoring strategies that result in higher Mean Time to Detect/Innocence (MTTD/I), which in turn delays the Mean Time to Repair (MTTR). During a crisis, the different teams within the organization, like the SRE, Ops and IT teams, end up resorting to finger-pointing, and are unable to maintain an acceptable MTTR.

The right service architecture, coupled with the proper monitoring strategy, allows you to run your own show, to take back control, and regain visibility. Implementing and maintaining the right monitoring strategy will insulate your application from performance degradation. This gives the IT teams the reins and boosts the confidence of those tasked with handling any performance crises, whenever they strike.

To learn more about how Catchpoint can help you detect outages in seconds, sign up for a free trial.

from DZone Cloud Zone