Blog – Named Google Cloud Migration Partner of the Year (2018)

Rackspace is excited to have been named the Google Cloud Migration Partner of the Year 2018 at the Google Next ’19 Conference this week – read on to learn how our team can help customers on their GCP cloud journey.

Blog – Rackspace offers Managed Azure Kubernetes Service

As a Microsoft® Azure® Expert Managed Service Provider, Rackspace focuses on delivering high-value services both for customers actively running cloud-native applications and for those interested in modernizing existing applications to realize the performance, availability, and cost benefits of a container-based microservices architecture.

Introduction

Rackspace’s Fanatical Support for Azure offering continues to evolve, and we are excited to announce our most recent enhancement: a set of management capabilities for the Azure Kubernetes Service (AKS). These capabilities remove the difficulty of architecting, deploying, and operating Kubernetes® at scale, help customers choose the right solution for delivering container services in Azure, and allow them to accelerate their Kubernetes journey while aligning their digital transformation and container strategies.

Adopting newer technologies into production environments can present significant challenges for businesses of all sizes. Rackspace works with our customers across the entire lifecycle of a Kubernetes service, from application inception through production deployment. Additionally, the effort to make AKS production-ready in a specific application environment requires deep Kubernetes skill sets, broad Azure experience, and a 24x7x365 Cloud Operations team that many customers prefer not to develop on their own.

Our customers need assistance not only in setting up their AKS environment with the right performance, availability, and cost design in mind, but also in meeting their specific security requirements. Our Fanatical Support for Azure offering simplifies setting network policies and provides the Azure role-based access and security management capabilities needed to meet compliance and regulatory controls within AKS environments.

Rackspace’s deep partnership with Microsoft enables us to consider the features and capabilities customers are looking for as they adopt containers within Rackspace managed environments and start moving more production workloads to Azure. Rackspace consistently delivers new sets of capabilities and provides customers with the management services for the solutions that they require.

Rackspace’s AKS offering

Some highlights of the Rackspace AKS offering capabilities include the following services and solutions:

  • Architectural design and infrastructure deployment of Azure and AKS services.
    • Design solutions to meet your developer and application requirements with a team of dedicated Azure and Kubernetes experts, ensuring optimal workload productivity and placement, sizing for nodes and node pools, network infrastructure, and security for Azure and AKS resources.
  • High availability and auto-repair recovery solutions.
    • Maximize uptime and availability for your workloads, including use of region pairs, multiple clusters with Azure Traffic Manager, and geo-replication of container images.
    • Proactively support self-healing clusters and the auto-repair of nodes and node pools as part of the recovery solution.
  • Multi-cluster management.
    • Include multi-tenancy core components and physical and logical isolation with namespaces for development workloads and production workloads, as well as a multi-team approach to microservices architecture.
  • Monitoring, alerting, and operations management.
    • Design solutions to monitor the performance of container workloads deployed to AKS, providing performance visibility by collecting memory and processor metrics from controllers, nodes, and containers.
    • Understand the behavior of the cluster under average and heaviest loads.
    • Provide knowledge to identify capacity needs and determine the maximum load that the cluster can sustain.
  • Zero downtime cluster upgrades.
    • Include orchestration for both the Kubernetes master and agent components to safely cordon and drain each AKS node and perform the upgrade without interruption to existing services.
  • Role-based access control and single sign-on support with Azure Active Directory (AD).
    • Include integration with AD by using role-based access controls (RBAC) and pod identities.
  • Application Modernization for containers.
    • Offer professional services capabilities to help transform your legacy applications into a microservices architecture to take full advantage of the AKS platform.
  • Application Lifecycle Management.
    • Provide professional services capabilities to help manage the application lifecycle of third-party applications, such as Grafana®, Prometheus®, and Istio®.
  • Comprehensive security solutions for container images and infrastructure.
    • Secure the image and runtimes by using trusted registries and automated builds on base image updates.
    • Secure access to resources, limiting credential exposure and using pod identities and digital key vaults.
  • Rackspace Fanatical Experience™ with 24x7x365 support services to always be there when you need us.
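The zero-downtime upgrade capability above follows a well-known pattern: cordon a node, drain its pods onto the remaining nodes, upgrade it, and repeat. The following is an illustrative toy simulation in Python, not Rackspace’s actual tooling or the real AKS upgrade API; the node names, pod names, and version strings are invented:

```python
# Toy model of a rolling node upgrade: each node is a set of pod names.
# For every node we cordon it (stop scheduling there), drain its pods onto
# the least-loaded remaining node, then upgrade the now-empty node, so that
# every pod keeps running somewhere throughout the process.

def rolling_upgrade(cluster, versions, target_version):
    """Cordon, drain, and upgrade each node in turn (assumes 2+ nodes)."""
    for node in list(cluster):
        # Cordon: in a real cluster the node is marked unschedulable here.
        others = [n for n in cluster if n != node]
        # Drain: evict each pod onto the least-loaded schedulable node.
        for pod in sorted(cluster[node]):
            dest = min(others, key=lambda n: len(cluster[n]))
            cluster[dest].add(pod)
        cluster[node].clear()
        # Upgrade the empty node; uncordoning happens implicitly as we move on.
        versions[node] = target_version
    return cluster, versions
```

Running this over a two-node cluster shows every pod surviving the upgrade while both nodes reach the target version, which is the property the real cordon-and-drain orchestration preserves.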

Use the Feedback tab to make any comments or ask questions.

Optimize your environment with expert administration, management, and configuration

Rackspace Application Services (RAS) experts provide professional and managed services across a broad portfolio of applications.

We deliver:

  • Unbiased expertise: We simplify and guide your modernization journey, focusing on the capabilities that deliver immediate value.
  • Fanatical Experience™: We combine a process-first, technology-second approach with dedicated technical support to provide comprehensive solutions.
  • Unrivaled portfolio: We apply extensive cloud experience to help you choose and deploy the right technology on the right cloud.
  • Agile delivery: We meet you where you are in your journey and align our success with yours.

Chat now to get started.

Webinar – Microsoft Partner Thought Leadership Series – What to do after your cloud journey?

Thought Leadership Series: What to do after your Cloud Journey?

Upcoming Episode:

Session 4: What to do after your Cloud Journey? Management and optimization in a cloud environment. (April 23, 2019, 9:00–9:30 a.m. PST)

Getting to the cloud isn’t the end of your Cloud Journey.  Being in the cloud changes the operating paradigm.  We’ll look at the differences and focus on best practices of cloud management and optimization.

This was the final episode of the Thought Leadership Series with RiverMeadow. If you missed this call, check out the video recording below!

Michael is a 25-year technology veteran with expertise in virtualization, cloud, and SaaS. He is responsible for designing RiverMeadow’s technology roadmap and works closely with customers and partners. Prior to joining RiverMeadow, Michael served as Chief Architect for CenterBeam, an MSP/ASP, where he was responsible for the entire SaaS product line. This included driving CenterBeam’s most advanced and innovative offerings and developing strategies to bridge the gap between traditional on-premises migration services and cloud migration.

David Lucky – Managed Services, Rackspace

David Lucky is a Product Marketing leader at Rackspace for the Managed Public Cloud services group, a global business unit focused on delivering end-to-end digital transformation services. David came to Rackspace from Datapipe where as Director of Product Management for six years he led product development in building services to help enterprise clients leverage managed IT services to solve complex business challenges. David has unique insight into the latest product developments for private, public and hybrid cloud platforms and a keen understanding of industry trends and their impact on business development. He holds an engineering degree from Lehigh University and is based out of Jersey City, NJ.

How to Participate:

To attend the event, please download the calendar invite below. The invite contains a Microsoft Teams link that will be used to join the event.

Please start asking questions in the thread below, or be ready to ask them during the event after the short presentation.

‘See’ you there!

-MPC Team

AWS Partner Series – Unlocking Hybrid Architecture’s Potential with DevOps

I originally wrote this post for the AWS Partner Blog Series; it can be found here:

https://aws.amazon.com/blogs/apn/unlocking-hybrid-architectures-potential-with-devops/

Last week in our MSP Partner Spotlight series, we heard from Jeff Aden at 2nd Watch and learned about the value that next gen MSPs can bring to their customers through well managed migrations and through 2nd Watch’s Four Key Steps of Optimizing Your IT Resources. Another area of new value that AWS MSPs can bring to their customers is management of their hybrid IT architecture, allowing customers at any stage of the cloud adoption journey to best leverage the AWS Cloud. This week we hear from Datapipe (APN Premier Partner, MSP Partner and holder of several AWS Competencies and AWS Service Delivery designations) as they discuss their approach and considerations in supporting their customers’ hybrid architectures.

Unlocking Hybrid Architecture’s Potential with DevOps

By David Lucky, Director of Product Management at Datapipe

Hybrid IT architecture, or what many customers call hybrid cloud, is increasingly prevalent in today’s fast-paced technology industry. Over the past few years, Datapipe has seen an initial reluctance towards cloud adoption transform into excitement, and hybrid architecture is emerging as a go-to solution for enterprise organizations looking for a way to manage their complex operations and run AWS as a seamless extension of their on-premises infrastructure.

Hybrid architecture gives organizations Application Programming Interface (API) accessibility, providing developers with programmatic access to control their environments through well-defined methods. APIs, commonly defined as “code that allows two software programs to communicate with each other,” are increasing in popularity in part due to the rise of cloud computing, and have steadily improved software quality over the last decade. Now, instead of having to custom-develop software for a specific purpose, software is often written against APIs with widely useful features, which reduces development time and cost and lowers the risk of error.

With API accessibility, developers can easily repurpose proven APIs to build new applications instead of building and maintaining that functionality manually. This gives them more room to experiment and innovate, and creates a culture of curiosity. In this way, the API accessibility of hybrid architecture leads to a necessary rebalancing of development and operations teams looking to solve problems earlier and more automatically than was previously possible with purely on-premises solutions.

To maintain the culture of curiosity that’s enabled by API accessibility through hybrid environments, we recommend organizations remove the silos that traditionally separate development and operations teams, and encourage open communication and collaboration – better known as DevOps. Implementing a DevOps culture helps organizations take advantage of a hybrid infrastructure to increase efficiencies along the entire software development lifecycle (SDLC). At Datapipe, we understand how critical the adoption of DevOps methodologies and agile workflows are for IT organizations to remain competitive and respond to the constantly evolving technology landscape. It’s the reason we expanded our professional services to include DevOps, and why we help organizations make the cultural switch to DevOps the right way, starting with people.

Individuals Over Tools

While many people conflate DevOps with an increase in automation tools, an organization can’t fully realize a DevOps culture without starting with its people. A DevOps culture fosters open communication and constant collaboration between team members. It dissolves barriers between operations and development departments, giving everyone ownership over the SDLC as a whole, beyond their traditional, individual responsibilities. Being able to see the big picture allows team members to transition from being reactive to being proactive. That, in turn, means shifting away from addressing problems as they arise and toward determining the root cause of a problem and finding a solution as part of a continuous-improvement mindset. Organizations that fully embrace this full-stack DevOps approach can provision a server in minutes instead of weeks, a vast improvement on the traditional SDLC model.

This mindset also means moving from a reactionary approach and solving problems through “closing tickets” to a proactive approach that involves consistently searching for inefficiencies and addressing them in real-time, so an organization’s software is continually improving at the most fundamental levels. Of course, addressing inefficiencies in the software also means addressing inefficiencies in workflows, which leads to the use of DevOps tools such as automation and writing reusable utilities.

However, productivity tools won’t increase efficiency on their own. An effective DevOps culture starts with open collaboration between team members and is then reinforced by tools. At Datapipe, we see incorporating a DevOps culture through the lens of the “Agile Manifesto,” which promotes “individuals and interactions over processes and tools.” When you combine agile working practices with DevOps, you can manage change in a feature-focused manner, providing faster iteration and response. Managing change in this way means that organizations achieve their goals through a strong DevOps culture that automates the majority of the overall development and delivery process, enabling teams to focus on areas that create a differentiated experience. This takes time – and collaboration among team members – to set up. The real-time collaboration that marks a full-stack DevOps approach reduces the number of handoffs in an SDLC, accelerating the entire process and decreasing an application’s time to market.

Looking Ahead

Hybrid architecture growth is expected to continue. Industry analyst firm IDC predicts that 80 percent of all enterprise IT organizations will commit to hybrid architecture by the end of this year. This prediction is in line with what we’re seeing from our customers. As a next-gen MSP, we’ve seen an increase in enterprise companies looking for guidance on incorporating a DevOps culture to complement their digital transformations.

Take our work with British Medical Journal (BMJ), for example. BMJ started out over 170 years ago as a medical journal. Now, as a global online brand, BMJ has expanded to encompass 60 specialist medical and allied sciences journals with millions of readers. As a result of their dramatic growth, their old infrastructure could no longer support their application release process. In addition, as an increasingly global organization, BMJ’s capacity for allowing downtime – scheduled or otherwise – was diminishing. To solve this problem, BMJ needed to move to a sustainable development cycle of continuous integration and automation, which is only possible through a shift to a DevOps type culture. We helped BMJ implement this culture while assisting with changes to their infrastructure. The switch to a more open, collaborative culture not only allowed BMJ to implement a sustainable development cycle, complete with continuous integration and automation, but it also made them feel better prepared to take their next planned step of moving workloads to the AWS Cloud and embracing a hybrid environment. (More about how we helped BMJ move to a DevOps-oriented culture can be found here).

If you’re interested in leveraging DevOps to get the most out of your hybrid environment, we recommend starting with the following considerations:

  • Leverage object-oriented programming principles such as abstraction and encapsulation to build reusable, parameterized components that can be assembled like building blocks. This can be done in configuration management with Chef Recipes, Puppet Modules, and Ansible Roles, or through infrastructure building blocks like Terraform Modules and AWS CloudFormation templates.
  • When automating infrastructure management, test destruction as deeply as the creation process. This will give you the ability to iterate and test cleanly.
  • Balance the effort you put into upfront engineering against ongoing operational management. More upfront engineering unlocks powerful features such as Auto Scaling on AWS; for steadier-state applications, however, the effort of setting up and configuring resources directly can sometimes be much less than the effort of building full automation. Either way, it is worthwhile to look for open-source modules to help with your infrastructure and configuration management workflows.
  • For Auto Scaling groups within AWS, consider, as you engineer your process, how much time your workload can tolerate between AWS detecting the need for a new instance and that instance becoming fully operational. Fully baked Amazon Machine Images tend to reach an operational state fastest, but they require building an image for every version of your application; Packer is a great tool for this purpose. The more user data or configuration management processing you embed, the longer your instance will take to become operational. Finally, keep in mind that processes like domain joins and instance renames require reboots and add time to the launch process, so use them as sparingly as possible.
  • For a low-latency link between your resources in and out of the cloud, consider taking advantage of higher-level services like AWS Direct Connect, which provides a virtual interface directly to public AWS services and allows you to bypass Internet service providers in your network path. Datapipe client ScreenScape used Direct Connect to link their on-premises environment to Amazon CloudFront for a cloud environment that’s highly available, fully managed, and able to scale over time with proven capability. (Learn more here.)

Hybrid architecture offers organizations the power of both on-premises and cloud environments like AWS, giving them the tools to grow and innovate at a lower cost. For companies to fully capitalize on the benefits of these mixed environments, a culture change is necessary. By shifting to a DevOps culture and enabling teams to work together from a full-stack perspective, organizations can not only increase efficiency in their SDLCs, but also open up opportunities for immense engagement and creativity – qualities necessary for innovation. A next-generation MSP, with DevOps and Software-as-a-Service (SaaS) capabilities, can be a valuable guide for IT teams on their hybrid cloud journey. At Datapipe, we pride ourselves on being a next-generation MSP, and our proficiency with DevOps was a key differentiator that led to our position as a leader in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide. By partnering with a next-gen MSP, like those included in the AWS Managed Service Partner program, organizations don’t have to make the shift to DevOps on their own.

To get started or for assistance on your cloud journey, contact us at www.datapipe.com

David Lucky

Director of Product Management

www.Datapipe.com

Are you in the know? A recap of AWS re:Invent 2015

Another AWS re:Invent has come and gone, and this year’s event was packed with exciting announcements and presentations. Below are a few of our favorite moments and highlights you should be aware of.

News Announcements

  • Amazon announced Amazon Inspector, an automated security assessment service that analyzes the behavior of the applications you run in AWS. The service checks common security standards and vulnerabilities to detect and remediate security issues early and often. In addition, Amazon Inspector provides detailed reports and full audit trails. Security remains the number one concern for enterprises moving to the cloud, making services like Amazon Inspector and Datapipe’s 24/7/365, “always on” customer support crucial.
  • Amazon QuickSight provides easy-to-use business intelligence for big data needs at 1/10th the price of traditional on-premises solutions. Amazon QuickSight will be able to handle several types of data-intensive workloads, like ad targeting, marketing & sales analytics, and IoT device stream management. Having business intelligence at your fingertips brings plentiful benefits: organizations can expand their analytical capabilities by taking advantage of big data and real-time reporting, and being cloud-first helps business leaders extract information from huge stores of both structured and unstructured data. An MSP like Datapipe will help you make sense of all that data.
  • Last year, Amazon announced enterprises could track AWS resource configurations with AWS Config. They’ve added to the feature, extending Config with a powerful new rule system. Businesses can use existing rules from AWS and from partners, or they can also define their own rules. These rules can be targeted at certain resources, types of resources, or at resources tagged in a specific way. Rules are run when those resources are created or changed, and can be evaluated hourly, daily, or on another periodic basis. This will also help businesses report compliance, or noncompliance, depending on what makes most sense for them. This is possible for any resource type supported by Config.
  • Amazon also unveiled Snowball at the event, essentially a smart box for shipping up to 50TB of data to AWS. While the reaction has been mostly positive, a few folks are wondering how secure this new product really is. But not to worry; just like Datapipe puts its customers’ security fears at ease, so should Snowball. It’s durable, has everything you need for power and networking, and with automatic encryption, it can safely hold data, and lots of it.
  • Amazon announced AWS IoT, a platform for building, managing and analyzing the Internet of Things. Amazon’s CTO Werner Vogels stressed that simply being connected doesn’t necessarily mean a platform is useful, so AWS IoT is aiming to change that with this release. AWS IoT allows billions of things to keep responsive connections to the cloud, and lets cloud applications interact with things. It receives messages from things and filters, then records, transforms, augments or routes them to other services within AWS or to an enterprise’s own code.
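The Config rule system described above boils down to evaluating each resource’s recorded configuration against a policy and reporting compliance whenever a resource is created or changed, or on a schedule. The following is a minimal sketch of that evaluation logic; the resource shape and tag names are illustrative and do not reflect the actual AWS Config event format:

```python
# Toy custom rule in the spirit of AWS Config Rules: a resource is compliant
# only if it carries every tag the policy requires.

def evaluate_required_tags(resource, required_tags):
    """Return 'COMPLIANT' if the resource has all required tags, else 'NON_COMPLIANT'."""
    tags = resource.get("tags", {})
    return "COMPLIANT" if all(t in tags for t in required_tags) else "NON_COMPLIANT"

def run_rule(resources, required_tags):
    """Evaluate the rule across a set of resources, producing a per-resource verdict."""
    return {r["id"]: evaluate_required_tags(r, required_tags) for r in resources}
```

In the real service, this kind of function would run in response to configuration-change events for targeted resource types, and the verdicts would feed compliance reporting.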

AWS Updates

Based on feedback from Amazon ECS and Docker users, Amazon announced some new features. Top of the list? The EC2 Container Registry. When launching a container, an image is referenced. That image is pulled from a Docker registry, which is a critical part of the deployment process. Customers need a registry that is highly available, exceptionally scalable, and globally accessible. Amazon’s EC2 Container Registry (Amazon ECR) addresses these needs, making it easy for enterprises to store, manage, distribute, and collaborate around Docker container images. Amazon ECR is integrated with ECS, so it’ll be easy to fit into the existing product workflow. Images are stored durably in S3 and encrypted both at rest and in transit. Users pay only for the data stored and transferred to the Internet.

AWS Lambda was released at last year’s re:Invent conference, and this year, Amazon announced a plethora of additions to the event-driven computing service. Chief among them: support for the Python programming language, and support for AWS’ Virtual Private Cloud service. Lambda also has increased function duration – up to five minutes, Vogels said – as well as function versioning and scheduled functions. Similar to AWS, Datapipe is constantly innovating and working to improve our services. These announcements are very exciting, and we can’t wait to get our hands on the new and updated solutions.

Observations from the Show Floor 

Security Still Top of Mind: AWS re:Invent keynote speaker Andy Jassy says people are moving to the cloud because they want to control their destiny. We couldn’t agree more, and Datapipe’s Access Control Model for AWS (DACMA) allows us to effectively manage an enterprise’s system while allowing them to stay in control of their virtual infrastructure and data. There was a strong emphasis on security at this year’s show. And why shouldn’t there be? It’s still top of mind for enterprises. It’s also top of mind for us, so it’s great to see some new products focused on security.

Automation is Big: Automation helps businesses be more efficient, as less time is spent on repetitive tasks. A key theme at this year’s conference was encouraging enterprises to automate systems whenever possible. Stephen Schmidt, CISO of AWS, gets an email report every day with a listing of all privileged EC2 commands executed by his staff. The goal is for that report to be zero, thanks to automation. Datapipe announced automation services a few months back, and we’ve helped companies see the benefits. We continue to strive to make things easier for our customers. Automation is certainly a way to do that.

So, what exactly are customers asking for? It’s a complicated answer, but some of the areas they’re focusing on include IoT, mobile devices, enterprise apps, big data, marketing & commerce, life sciences, healthcare, and digital media. With more focus on those areas, there will be continued interest in DevOps, automation, and cloud adoption frameworks, including security and governance, backup and disaster recovery, managed services, networking, and migration.

Let’s Play Ball: AWS was deployed at all 30 Major League Baseball parks this year. The streaming service allowed MLB Advanced Media to capture 700,000 pitches and 130,000 balls in play, and has helped bring the nuances of the game closer to fans, broadcasters and players alike. With the MLB playoffs in full swing, there will be even more opportunities to see every bit of the action. We’re no strangers to sports either. Our disaster recovery system ensures TaylorMade Golf is covered in any situation. With so many people interested in each and every sport, it’s comforting for an enterprise to have a dependable solution they can deliver.

Fun side note: AWS had a greenhouse at re:Invent. Plenty of people stopped in, but few probably knew the work that went into it. AWS shared a timelapse of the building’s construction, and the result is pretty cool! You can check it out here.

Datapipe In Action

The Datapipe booth was bustling throughout the event. Our in-booth Cloud Enablement Theater hosted Lightning Talks from Datapipe partners AlertLogic, Equinix, FortyCloud, and TaylorMade.

“Expanding your Cloud Business with AWS using AWS Marketplace, a Global Customer Channel” was a breakout session focusing on AWS Partner Success stories. This session helped educate customers, technology and consulting partners across the Amazon ecosystem on how to leverage the AWS Marketplace. The Chef CEO opened the conversation by discussing how his company leverages AWS Marketplace to sell and deploy software for customers on AWS. Then we detailed a recent customer deployment with global sporting goods company, TaylorMade, and provided examples on how to utilize AWS in an effort to drive innovation and increase reach to customers. The Datapipe managed solution provided TaylorMade a disaster recovery system on AWS, leveraging Oracle and security services to provide an agile platform that saved TaylorMade 30 percent over the traditional IT platform. Barry Russell, Head of Business, AWS Marketplace, led the breakout session, which also included Datapipe’s Director of Product Management David Lucky and TaylorMade’s Manager of Infrastructure, Chris Smith.

Datapipe’s Client Success Manager, Patrick Ohler, also interviewed McDonald’s VP & CTO, Frank Ellermeyer on the chain’s move to AWS. On the AWS stage, Datapipe Senior Director of Cloud Products, Jason Woodlee, presented on Building and Deploying a Modern Big Data Architecture on AWS. Keep an eye on the Datapipe YouTube channel for videos of Datapipe’s AWS activities onsite, and visit our landing page for all things pertaining to our relationship with AWS.

A Sit Down With David Lucky: Recent AWS Success and Next Generation Trends (May 2016)

Jeff Bezos at Amazon Web Services recently stated that more than one million people from organizations of every size use AWS across nearly every industry. They have seen so much success to date – just last month announcing that AWS will be a $10 billion business by the end of 2016. Given this recent achievement, I wanted to take this time to sit down with David Lucky, who manages our close relationship with AWS, to get his take on Amazon’s recent success, as well as his predictions for what the future of the industry has in store.

As a long-time partner of AWS, what do you think their biggest successes have been?

David: Our partnership with AWS spans more than five years, so needless to say their success in that time has been incredible. Much of that success, I believe, is due to a relentless focus on innovation and customer obsession. A good example of this customer obsession is how they listen to what customers need and deliver on it. Take their fairly new database engine, Aurora: customers had been frustrated with the licensing terms and high costs of traditional database providers but still wanted the performance of an enterprise-class database engine. Aurora addressed these seemingly incompatible needs, and today the service has been adopted by customers in large numbers and is AWS’ fastest-growing service. As we know, cloud adoption is a journey, and to achieve success, having partners that are focused on both innovation and customers is a winning combination. These two guiding principles, among others of course, have led to a scale of operations I don’t think anyone could have predicted five years ago. The amount of innovation, all geared around customers, has been key to their success.

Looking forward, where do you think AWS is headed – what are you most excited about?

David: There are truly many things to be excited about. When I think of the Amazon Echo, a consumer device for home automation and voice recognition, coupled with the AWS cloud infrastructure behind it, I can’t help but be excited about the possibilities. Developers everywhere are using the AWS cloud to find new ways to use this service, along with the Amazon Dash buttons and the AWS IoT service – areas where AWS cloud platforms and consumer-oriented tech have a close relationship. This gives developers a platform to create unique applications and obtain instant feedback. Creating new voice commands for the Echo using AWS Lambda, or new home automation with Dash and AWS IoT, are innovations opening the door to new realms of possibility. I can’t wait to see what else my Echo will eventually be used for.

Any trends you think will be major focus areas at the upcoming AWS Summits and re:Invent later in the year?

David: In the most recent events we saw a lot of providers being recognized for exceptional customer service and dedication to innovation. These announcements bring to light the fact that AWS’ partner network is stronger than ever and in 2016, it sounds like the emphasis the company is placing on its partners will continue to grow. We saw two themes emerge around the new services announced at re:Invent that partners can take advantage of – reducing the friction with onboarding data to the AWS platform with offerings like Snowball, the RDS Database Migration Service and most recently the launch of the AWS Application Discovery Service. We are also seeing a security and governance theme with the AWS Config Rules, WAF, and Inspector – which last month went GA. I would expect continued innovation in the security space this year. All of these also address adoption of the cloud and reducing that potential friction.

I anticipate we’ll see more trends around tools and processes for automating everything – this will continue to be a driver as we look to later this year and into next. I would imagine AWS will continue to innovate from their base of automation tools, including CodeCommit, CodePipeline, and CodeDeploy. In the past, we have also seen AWS announce a few products that compete with other marketplace vendors, and it will be interesting to see how existing vendors innovate to differentiate themselves. Certainly, specific new products and features are something we expect at every AWS re:Invent and AWS Summit. I don’t want to give away any announcements though – the anticipation is half the fun.

The topic of next generation architectures is gaining popularity – what do you think the industry needs to do in order to get there? Is it a reality today?

David: There are aspects of next-generation architectures, particularly the microservices method of developing software through a more modular approach, that require companies to set business goals and establish processes around them. The goals and processes are necessary in order to implement and take advantage of these techniques and systems. The AWS platform – including EC2, the API Gateway service, containers, and Lambda – provides the required toolsets to help enable this transformation for companies. However, there are key fundamental steps, including clear goals and policies, that companies need to take in order to get there, and it will take some time for more companies to fully adopt these principles.