December 16, 2017

Database: 2017 Surprises and 2018 Predictions

By Tom Smith via www.dzone.com


Given how fast technology is changing, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2017 and their predictions for 2018.

Here's what they told us about databases: 

Lucas Vogel, Founder, Endpoint Systems

2017: The big surprise here was the Oracle Autonomous Database Cloud. A close second would be the release of Google Cloud Spanner, Google’s globally available distributed relational database platform. A distant third would be Microsoft Azure Cosmos DB, Microsoft’s globally distributed JSON database platform.

2018: The future for databases is going to be smaller. As microservices and containers evolve and stabilize, developers are going to realize that they can get far more performance from running embedded databases in their containerized microservices than they would by spinning up a containerized database server to run alongside them. The Oracle Berkeley DB family of products offers some great embedded and self-replicating database solutions that make a compelling case for cloud and even IoT solution architectures. I also think relational database server pricing is only going to continue to get worse, as a large number of applications and platforms remain locked into one or two database vendors. Hopefully, we’ll see vendors take advantage of the cheaper database offerings available to them in the cloud.
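The embedded-database pattern described above can be sketched briefly. Berkeley DB is the product named in the text; this example uses Python's built-in sqlite3 module as a stand-in, since both run in-process with the service rather than as a separate database server container.

```python
# Sketch of an embedded database owned by a single microservice.
# sqlite3 stands in for Berkeley DB here: both run inside the service
# process, so there is no separate database container to provision,
# network to cross, or server to keep in sync with the service.
import sqlite3

def create_service_store(path=":memory:"):
    """Open an in-process store private to this microservice instance."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
    )
    return conn

conn = create_service_store()
conn.execute("INSERT INTO events (payload) VALUES (?)", ("order-created",))
conn.commit()
rows = conn.execute("SELECT payload FROM events").fetchall()
print(rows)  # [('order-created',)]
```

Because the store lives inside the container image, each service instance carries its own data engine, which is the trade-off the author is pointing at: lower latency and simpler deployment in exchange for per-instance state.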

Mike Kail, CTO, CYBRIC

2017: There continues to be a slow migration away from monolithic databases and a shift toward scale-out, database-as-a-service (DBaaS) solutions that can deliver additional features, such as running big data stacks like Hadoop in a Docker container environment instead of requiring bare-metal hardware to guarantee performance.

2018: DBaaS vendors will continue to deliver more bespoke solutions to customers, including support for deep learning and GPU acceleration. Similar to other architectural trends, monolithic solutions will transform into microservices patterns.

Robert Reeves, Co-founder and CTO, Datical

RDBMS will continue to see growth, and NoSQL will not replace it. Currently, the top relational database management system (RDBMS) vendors are Oracle, Microsoft, IBM, SAP, and Amazon. However, Amazon’s growth rate in 2016 was 107.9 percent, according to Gartner. No other RDBMS vendor matched that growth rate. Second in growth rate was Alibaba at 99 percent, which is easy to achieve when you go from $4.2 million to $8.4 million. This is simply a result of RDBMS working in the cloud.

In prior years, we heard NoSQL would replace RDBMS because it is better suited for the cloud. That is simply not happening, nor will it. Relational database management systems solve a real business problem, and the promises of NoSQL will not be delivered outside of niche applications. Simply put, SQL is the number one language used by programmers for a reason. It works and will continue to do so for transactions, which make up the vast majority of database needs.

Kannan Muthukkarupan, CEO and Co-founder, YugaByte

2017: The phenomenal adoption of Kubernetes as a means to make enterprise applications agile and portable across on-premises, hybrid, and public clouds exploded beyond all imagination in 2017. It was the tremendous growth itself that was surprising, not the fact that it was Kubernetes experiencing that growth. In the rapidly expanding cloud environment, any system that could automate deployment, scaling, and management of containerized applications was always going places.

2018: Notwithstanding the many remarkable cloud-based advances made in 2017, the database tier remains a challenge, and one that is driving demand for the next big thing in this space: a data tier that is intent-based, portable across clouds, and re-configurable with zero downtime. On November 2, 2017, the industry got its first glimpse of YugaByte, the open-source cloud-native database for mission-critical applications that will satisfy this demand. Indeed, in 2018, the data tier will be the space to watch.

Philip Rathle, VP of Products, Neo4j

2017: Enterprises have surged in their adoption of graph database technology, even beyond analyst expectations. According to a recent Forrester Research report on graphs, 51% of global data and analytics technology decision-makers have implemented, are implementing, are upgrading, or are expanding graph databases in their organizations. In addition, we’re beginning to see a spike in conferences, events, and meetups focused on graph databases. For example, Neo4j’s GraphConnect New York City in October had over 1,000 attendees who represented a huge variety of industries. Additionally, we’ve seen that Cypher is now established as the go-to query language for graph databases, much as SQL is for relational ones. Finally, it’s been exciting to see an influx of other graph vendor activity, as this shows that the space is continuing to mature as we head into the next year.

2018: The most exciting new use case of graph technology is the pairing of knowledge graphs with machine learning and AI. Machine learning is going to help drive the next wave of competitive advantage for companies. But it will come down to execution, as well as which companies can use graphs, machine learning, and AI successfully. Whether it’s to connect with customers, reduce the risk of fraud, increase employee productivity, or make better investment decisions, the possibilities for harnessing graph database technology are endless. That combination is what makes a company a connected enterprise, and it will ultimately drive the next wave of competitive advantage through machine learning.

Ken Tsai, Global VP, Head of Cloud Platform and Data Management, Product Marketing, SAP

2017: In late July, Gartner released its Hype Cycle for Data Management 2017, in which Hadoop distributions were marked as “obsolete before plateau.” Gartner was calling out the challenge of large, full-stack Hadoop distributions, and the new capabilities associated with newer, easier SQL-based data platform technologies and managed cloud services that can cost-effectively address big data opportunities.

2018: 2018 will see a rise in the role of data platform technologies addressing the ever-growing globally distributed workforce. In order to comply with the new compliance measures going into effect next year, companies will need to go beyond data masking and implement innovative data anonymization strategies to protect privacy. We can also anticipate expanded usage and growth of next-gen HTAP or translytical processing (beyond OLAP on transactions) to run all kinds of real-time analytics workloads, such as machine learning, spatial, time series, and graph, without sacrificing transactional integrity, performance, or scale, and without requiring a separate SQL framework for each analytical engine.

Database-as-a-service (DBaaS) will continue to expand beyond multi-cloud support and into on-premises private clouds to enable new types of value-added data processing scenarios that previously weren’t available. Data integration and transformation tools will also get a facelift, as enterprises are looking for solutions beyond ETL and data wrangling to create logically centralized data governance and data pipeline management capabilities across diverse data system landscapes. As a result, I see the rise of enterprise data operations, or DataOps, becoming a more critical discipline for database analysts, data engineers, data analysts, and data scientists to understand.

Additionally, blockchain continues to be a hot topic that doesn’t have a unique use case (besides cryptocurrency). I anticipate that in 2018, we will start seeing use cases outside of cryptocurrency in addition to much more integrated blockchain and DBMS platform technologies.

I anticipate that we will start to view data privacy as a global issue rather than a local one, and one that can’t be adequately addressed through data security alone (i.e., eliminating data use or data access). Database platforms of the future need to incorporate new technologies and algorithms that protect data privacy while still enabling data sharing without violating privacy compliance.
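One of the anonymization strategies the author alludes to, pseudonymization, can be illustrated with a minimal sketch. This is an assumption-laden example, not a production scheme: real deployments need proper key management, and a keyed hash alone does not satisfy every regulatory definition of anonymization.

```python
# Minimal pseudonymization sketch: replace an identifier with a keyed
# hash so records can still be joined for analytics without exposing
# the raw value. SECRET_KEY is a hypothetical placeholder; in practice
# it must be rotated and stored in a secrets manager.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-securely"  # hypothetical key

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for an identifier."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

token_a = pseudonymize("alice@example.com")
token_b = pseudonymize("alice@example.com")
assert token_a == token_b      # deterministic: analytics joins still work
assert "alice" not in token_a  # raw identifier is not exposed
```

The design point matches the text: the data remains usable for sharing and analysis, while the privacy-sensitive identifier never leaves the platform in the clear.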

Ben Bromhead, CTO, Instaclustr

2017: I'm looking forward to what comes out of AWS re:Invent this year – Amazon always unveils some pretty cool stuff. On the trends side, I think the industry is moving rather quickly toward a database-as-a-service world, and cloud vendors (as well as new startups like FaunaDB and Instaclustr) are proving that model. As for the biggest event this past year? I think Kafka turning 1.0 and introducing KSQL on top of Kafka streams is pretty awesome. It's going to be one of those incremental changes that we look back on and think, "How did we live without that?!"

2018: Look for better dynamic scalability as databases integrate better with the cloud. Organizations will really begin to expect the same level of instant scaling from their data layer. On top of that, I think we'll see NoSQL databases continue to mature (and even consolidate), plus greater adoption of stream processing as part of the database world. I also know we’ll see a range of new features with Apache Cassandra as the database project matures and takes the next step. There is a range of changes to the underlying storage mechanism that should provide significant performance improvements.

Yu Xu, CEO and Founder, TigerGraph

2017: The MongoDB IPO in October was a big event for databases. It’s further validation that one size does not fit all in data management. Today's data is orders of magnitude more complex and is growing at incredible speed, meaning that enterprises need to look beyond traditional relational databases to manage it.

2018: We are quickly seeing real-time graph analytics develop as the next phase of new-generation database movement. Technologies that leverage graph databases are ideal for powering enterprise AI, machine learning, cybersecurity, and IoT applications and will continue to see widespread adoption.

Robert Anderson, VP of Product Management, IDERA

2017: Database sprawl has been commonplace for years, with database administrators (DBAs) spinning up databases as needed. Recently, we’ve seen organizations planning for the long-term and committing to increased volumes of database performance tool licenses all at once to support their database needs over the next two to three years.

2018: Database environments will become even more diverse, with DBAs increasingly acting as data professionals, enhancing their skills in data development and data science. Data governance will rise to be a top-five initiative due to the arrival of the GDPR, and online storage will continue to grow, causing more and more data security breaches. Docker container support will unlock Microsoft SQL Server support for many more application teams without requiring a full DBA, and SQL Server market share will increase among Linux users as MySQL use decreases. Finally, artificial intelligence (AI), machine learning, and even deep learning will gain traction in database management.

David Flower, President and CEO, VoltDB

During 2017, we saw the continued evolution of the database market. While some of the shifts, such as those toward more speed and scale, have been occurring for years, there are some specific market drivers that we see taking the market forward in 2018.

The death of Hadoop. Okay, that’s a pretty big exaggeration! Hadoop is finding its place in the enterprise, mostly for storing static data, but the hype that once surrounded the technology is certainly waning. The Strata Data Conference is now more focused on data science and AI, and the Hadoop Summit has evolved into the DataWorks Summit. In addition, Hadoop pioneers Cloudera, Hortonworks, and MapR have really scaled back the Hadoop-centric messaging.

The true cost surrounding open source. While still the most popular choice for testing, development, and pre-production environments, organizations are now looking more closely at the “true cost of ownership” for open-source technology. After all, MongoDB went public in 2017, citing nearly $100M in revenue, and that revenue has to come from somewhere.

The (real) time machine has arrived. Time is the asset. Real-time has been poorly defined; batch and near-real-time won’t be acceptable for many applications. And we are seeing analytics moving from the back-end (post-event) to the front-end (in-event or in-process), with 5G, ML, and AI forcing this even more. But the value lies in acting within that window of opportunity: not tomorrow, not in one hour, not in one minute, but now! Real-time decisioning will be a key differentiator in FY18!

Kim Palko, Principal Product Manager, Red Hat JBoss Middleware

2018: There will be a renewed focus on security for data, particularly in the public cloud, spurred on by the European Union's General Data Protection Regulation (GDPR). As the volume of data continues to increase, encouraged by the data generated by the Internet of Things (IoT), companies will continue to move more data into the cloud to reap the benefits of scalability, disaster recovery, flexibility, and so on, and this move will require tighter security assurances.

Paul Kopacki, CMO, Realm

2018: After years of heavy focus on centralized repositories of big data, in 2018, the focus will shift in the other direction — toward the edges of the network and a new category of databases and data handling technology for mobile devices and IoT. Every device and every person is capturing, processing, and synchronizing increasingly large amounts of data, and older data technologies are not up to the challenge.

Peter Smails, Vice President, Marketing and Business Development, Datos IO

The biggest event for 2017 was the MongoDB IPO. The cloud has flipped the traditional database market on its head. A new breed of modern databases (including MongoDB, Apache Cassandra, Redis, and DynamoDB) is rapidly becoming the standard platform for cloud-native application deployment. MongoDB’s valuation and IPO are strong validation that these new databases are proliferating throughout enterprise IT.

Ravi Mayuram, SVP of Engineering and CTO, Couchbase

2018 predictions:

Digital transformations will accelerate, led by a fundamental rethink of data infrastructure. Businesses have begun to understand the link between customer engagement and digital transformation. And in turn, they’ve realized that using old infrastructure won't help them achieve this transformation. Therefore, more and more businesses will evolve their business models by fundamentally rethinking their data — how it is managed, how it is moved, and how it is presented to the customer. This fundamental rethink begins at the data infrastructure level, enabling the agility that will ultimately lead businesses to reach their digital transformation goals. The re-platforming of businesses’ database infrastructure to modern data platforms — platforms that allow fluidity of data movement and secure management from edge to cloud — will accelerate at an unprecedented pace.

Containing the database sprawl will be a mandate. One-trick technology solutions that solve singular customer problems will begin to peel away. To maintain a lasting business strategy, companies need to become a true partner for continual innovation rather than offer point solutions that address niche issues. The cost of integrating numerous solutions into a platform will not be worth the complexity and headache, and the businesses that provide one platform that fills multiple customer needs will thrive. Organizations need to adapt to customer expectations, and having an agile approach to technology will be the key differentiator.