The CyberSecure Monitoring solution by SolarEdge is among the world’s first integrated hardware-based cloud monitoring solutions for critical power infrastructure.
Cloud operators, including enterprises, cloud service providers, and communication service providers, now have a solution that enables them to evolve smoothly to a new cloud networking architecture. They can realize enhanced security, dramatically reduced complexity, lower total cost, and the agility to safely accelerate their businesses at cloud speed.
Many organizations learn the hard way that, for all of the cloud's benefits, unoptimized usage typically leads to runaway costs for the unprepared.
Companies looking to maintain their competitive advantage and improve the quality of manufacturing activities should migrate their operations to the cloud and reap the benefits: better scalability, quicker turnaround times, increased revenues, and cost savings, among others.
Is it possible the public cloud isn't the be-all and end-all solution it claims to be?
The acceleration of digital transformation caused by the pandemic will continue for at least the next couple of years. Rapid and often continuous data center retooling is expected as businesses rely on and respond to these trends to meet their evolving customer, employee, and partner needs.
The humble thermostat marked our society’s foray into environmental control and management. And my, what a difference 90 years makes.
Posted September 4, 2019
By Sanjay Mirchandani
As a former CIO turned CEO, I can’t help but be excited about the rapidly shifting technology landscape. For years, IT leaders have tried everything to compress and minimize the cost, footprint and complexity of managing the ever-increasing deluge of data. I know, I was one of them.
These same leaders are now looking for innovative new ways to use this data to create forward-leaning, potentially disruptive opportunities for companies. To maximize the value of data, they’re moving to more flexible, scalable multi-cloud environments; embracing DevOps for more agile development; and building cloud-native, containerized applications to meet their company’s dynamic needs.
With this comes increasing complexity and data fragmentation across disparate, complex infrastructure silos, storage environments and applications, which makes it harder for companies to protect, manage, govern and use this data with the resources and skills they have today.
Posted September 4, 2019
By Christophe Bertrand
It is significant in many ways. It addresses fundamental infrastructure and operational issues, both current and future, for its customer base and the market in general. It also demonstrates Commvault’s commitment to evolving its platform stack beyond traditional backup and recovery, tackling the technical and operational complexities introduced by hybrid and multi-cloud environments.
There is a significant strain placed on enterprise IT to store, manage and protect all the data organizations need to deal with. Let’s face it: enterprise IT is only getting more complex and costly to manage as organizations become more and more data-centric. Remember, there is no business without data, and no lasting business without solid data protection. Our research is very clear about this.
Zooming in on the management of data protection workloads, the complex interactions between the compute and storage layers – combined with the multiplicity of “destinations” (on-premises, in the cloud, in multiple clouds) – make it extremely difficult to deliver data protection/management and operations efficiently at scale. This complexity potentially negates the benefits of an elastic and flexible cloud infrastructure – which is what organizations were trying to achieve in the first place. You can’t control or optimize what you can’t manage.
Posted September 3, 2019 (updated September 4, 2019)
By Doug Morris and Thorston Thorpe
Introduction
There has been a significant increase in the number of ransomware attacks and payments by organizations in 2019. Businesses of all types have been adversely affected by ransomware, which has crippled productivity and consumer confidence. Technology leaders have now begun to plan for ransomware attacks, understanding that a paradigm shift from “If we get hit by ransomware” to “When we get hit by ransomware” is required. The time to prepare for a ransomware attack is before it happens. In addition to the traditional security measures that must be adhered to, housing critical business data in multiple secure locations is also of paramount importance.
Ways Commvault can help
Posted August 28, 2019 (updated September 4, 2019)
By Chris Powell
If happiness is a warm puppy, move over Disney World: the happiest place on Earth this week is Commvault’s Data Therapy Dog Park at VMworld in San Francisco!
Anyone who has attended a trade show or conference knows it can be overwhelming and stressful (just like dealing with data issues). So, we thought, “What’s a new environment that provides data insight AND a way to decompress?” Our answer: Puppies.
Our Data Therapy Dog Park signals Commvault’s new approach to engaging with our customers in a more personal, approachable way. The response to the park has been, well, just adorable. From customers and prospects to San Francisco police officers (and even a few competitors who are working the show), it’s pretty obvious that it’s hard to resist a puppy snuggle. Check out some too-cute videos and photos on Twitter @Commvault.
Posted August 26, 2019 (updated September 4, 2019)
By Randy De Meno
Our collaboration extends confidence in Commvault data management, protection and recovery to low-latency, high-IOPS Azure Ultra Disk
As Microsoft brings its new Azure Ultra Disk offering to general availability, Commvault is once again proud to show off our engineering collaboration with Microsoft. The new high-performing disk is focused on heavy, I/O-intensive workloads such as SQL, SAP, SAP HANA and Oracle.
20 years of collaborative engineering and development continues
By Jeffrey Leeds
As the Commvault relationship owner at NetApp, I was delighted to be asked to write the second guest blog in this series, which Nigel Tozer kicked off with his post about NetApp’s “across every cloud” message.
With the vast majority of enterprise organizations either deploying or planning a hybrid multi-cloud strategy, I’m going to make that my focus here.
Hybrid cloud – a strategy for today and tomorrow
The benefits of hybrid multi-cloud look obvious: the agility, availability and elasticity that cloud brings, plus the performance and economic advantages of running the right workload in the right cloud. So if you’re thinking of running or extending this model, you’re not alone. Dave Bartoletti of Forrester says 74 percent of enterprises already describe their IT strategy this way.1
Posted July 30, 2019 (updated August 19, 2019)
By Sandy Hamilton
Based on a quick search on Urban Dictionary, we defied an extremely popular expression in the first quarter by having our proverbial cake and eating it too (because, clearly, you can only have it so good and can’t have it all, as the definition goes on to explain).
That’s because, in addition to new customer deals in the first quarter, existing customers and partners came out in droves to share their Commvault data transformation stories – setting the stage for our annual customer event, Commvault GO 2019, in just a few months.
We couldn’t be happier to hear from customers and partners because, really, they deserve the spotlight. Customers put their trust in us and our partners to help them modernize their infrastructure, deliver data readiness and leverage cloud to transform their business and solve hard problems. Hearing their success stories is so rewarding because our top focus is helping customers.
Posted July 29, 2019 (updated August 16, 2019)
By Chris Powell
At Commvault, we believe there are no greater champions than our customers. They look after the data others use and are a proud group defending it. In that spirit, Commvault launched our Customer Champions Program to showcase, celebrate and powerfully connect our customers.
The Commvault Customer Champions Program provides the benefit of camaraderie among like-minded technology leaders. It also offers a wide range of unique opportunities for customers to showcase themselves as thought leaders, boost their personal brand and grow their professional and social media connections.
It’s easy to join the Commvault Customer Champions Program. Customers can choose from a number of engagement levels that best suit their needs. We offer our Champions opportunities to participate in case studies, video testimonials, webinars, press releases, media interviews and speaking engagements.
Posted July 25, 2019 (updated August 16, 2019)
By Nigel Tozer
In my 12-year tenure at Commvault, NetApp has been an ever-present technology partner, with integration deepening as both companies have expanded our respective value propositions around data. An alliance with NetApp makes perfect sense for both parties: we have complementary portfolios, a shared channel and pretty much the same list of major technology vendors that we count as key alliances.
Last autumn, NetApp and Commvault further strengthened our relationship with a new global reseller agreement, allowing NetApp to sell Commvault Complete™ Backup and Recovery and Commvault Orchestrate™. For this reason, I thought it would be worthwhile examining the joint value we deliver to our customers and align it to the three key pillars that underpin NetApp’s “Data-Driven” message. A swift visit to NetApp’s website will reveal that the first of these is “Across every cloud,” which is where I’ll begin in this first blog in a series of three.
Data driven in the cloud
Posted July 23, 2019 (updated August 16, 2019)
By Keith Townsend
With challenges such as integrating Kubernetes, serverless architectures and continuous integration/continuous delivery (CI/CD) into current operations, why is backup a CTO-level consideration?
If your existing data protection company focuses only on backup and recovery, you are doing your organization a disservice. Today’s data protection landscape goes far beyond backup and recovery. Enterprise customers are leveraging data protection solutions to power CI/CD, make data available to Kubernetes and serverless landscapes, and to leverage public cloud for analytics/machine learning (ML). With all that said, the most immediate benefit of modern data protection remains modern disaster recovery.
Defining disaster recovery
Choosing an enterprise cloud platform is a lot like choosing between living in an apartment building or a single-family house. Apartment living can offer conveniences and cost-savings on a month-by-month basis. Your rent pays the landlord to handle all ongoing maintenance and renovation projects — everything from fixing a leaky faucet to installing a new central A/C system. But there are restrictions that prevent you from making customizations. And a fire that breaks out in a single apartment may threaten the safety of the entire building. You have more control and autonomy with a house. You have very similar choices to consider when evaluating cloud computing services.
The first public cloud computing services that went live in the late 1990s were built on a legacy construct called a multi-tenant architecture. Their database systems were originally designed for making airline reservations, tracking customer service requests, and running financial systems. These database systems featured centralized compute, storage, and networking that served all customers. As the number of users grew, the multi-tenant architecture made it easy for these services to accommodate that rapid growth.
All customers are forced to share the same software and infrastructure. That presents three major drawbacks:
Data co-mingling: Your data sits in the same database as everyone else’s, so you rely on software alone for separation and isolation. This has major implications for government, healthcare, and financial regulations. Further, a security breach at the cloud provider could expose your data along with that of everyone else co-mingled in the same multi-tenant environment.

Excessive maintenance leads to excessive downtime: Multi-tenant architectures rely on large, complex databases that require regular hardware and software maintenance, resulting in availability issues for customers. Departmental applications used by a single group, such as the sales or marketing teams, can tolerate weekly downtime after normal business hours or on the weekend. But that is becoming unacceptable for users who need enterprise applications to be operational as close to 24/7/365 as possible.

One customer’s issue is everyone’s issue: Any action that affects the multi-tenant database affects every customer sharing it. When software or hardware issues are found on a multi-tenant database, they can cause an outage for all customers, and an upgrade of the multi-tenant database upgrades all customers. Your availability and upgrades are tied to every other customer in your multi-tenancy. Organizations do not want to tolerate this shared approach for applications that are critical to their success. They need software and hardware issues isolated and resolved quickly, and upgrades that fit their own schedules.

With its inherent data co-mingling and availability issues, multi-tenancy is a legacy cloud computing architecture that cannot stand the test of time.
The multi-instance cloud architecture is not built on large, centralized database software and infrastructure. Instead, it allocates a unique database to each customer. This prevents data co-mingling, simplifies maintenance, and makes delivering upgrades and resolving issues much easier because it can all be done on a customer-by-customer basis. It also provides safeguards against hardware failures and other unexpected outages that a multi-tenant system cannot offer.
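To make the contrast concrete, here is a minimal sketch in Python, assuming SQLite as a stand-in for any relational database engine and a pre-existing records table; the function and file names are hypothetical illustrations, not any vendor's actual implementation. In the multi-tenant case every customer shares one database and isolation depends entirely on a tenant_id filter applied in software; in the multi-instance case each customer is routed to its own database.

```python
import sqlite3                   # SQLite stands in for any relational database engine
from contextlib import closing

SHARED_DB = "shared_tenants.db"  # multi-tenant: one database holds every customer's rows

def multi_tenant_fetch(customer_id: str):
    """Multi-tenant: all customers share one database; isolation is only a
    tenant_id filter enforced in application software (assumes a 'records' table)."""
    with closing(sqlite3.connect(SHARED_DB)) as conn:
        return conn.execute(
            "SELECT data FROM records WHERE tenant_id = ?", (customer_id,)
        ).fetchall()

def multi_instance_fetch(customer_id: str):
    """Multi-instance: each customer has a dedicated database, so maintenance,
    upgrades, or failures of one instance never touch another (assumes the same
    'records' table exists in each per-customer database)."""
    with closing(sqlite3.connect(f"instance_{customer_id}.db")) as conn:
        return conn.execute("SELECT data FROM records").fetchall()
```

In the multi-instance sketch, taking instance_acme.db offline for an upgrade or a restore affects only that one customer; in the shared database, the same maintenance window lands on every tenant at once.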
I think it’s fair to say that the role of the CIO has to be one of the toughest jobs in the world, but also one of the most rewarding.
For starters, CIOs are responsible for ensuring that each member of an organization has the resources and tools to be productive. To do so, CIOs must also provide adequate infrastructure so their organizations can extract relevant information and analysis from computer networks in real time.
As if all of this isn’t hard enough, CIOs must constantly stay updated on the latest emerging technologies that evolve at a dizzying pace. Otherwise, they may get blindsided by the next big IT trends such as virtualization, containers, or the emergence of DevOps as a practice. Finally, CIOs must pull off this technology magic under tight constraints due to budgets and limited human resources. It’s easy to see how these CIO challenges often make or break careers, and yet they keep drawing people back to this profession.
The Shifting IT Landscape Over Time
As we all know, mobility and cloud computing have made the biggest impact on traditional IT since mainframes gave way to client-server architectures in the 1980s. However, many CIOs and datacenter managers still struggle with how to navigate the blurry edges between the enterprise and public clouds.
In this fast-changing landscape, it’s worth recalling the pioneer days of the world wide web back in the early 1990s. I remember installing and launching this strange thing called a browser from Netscape, whose legacy lives on today in Mozilla. It was a kind of revelation — you could type in online addresses, and content would suddenly appear from far-off places.
The commoditization of infrastructure is one of the most significant developments over the last couple of decades. The growth of web-scale companies like Google, Facebook, and Twitter (which collect, analyze, and extract information from a large volume of data) has influenced this commoditization of the infrastructure.
Looking back at the last couple of decades, enterprises have realized that the problems faced and solved by web-scale companies become enterprise problems after a short gestation period. Enterprises start seeing similar issues in scaling, managing, and analyzing their infrastructure, processes, and data, and at multiple layers of the application stack they adopt web-scale strategies to solve those problems (Figure 1).
At the infrastructure layer, web-scale companies have focused on scale-out systems where compute, storage, and networking components blend into units of infrastructure that can be quickly replicated and grown without worrying about the complex organization of each of the components. Hyperconverged systems are an example of this trend.
At the data management layer, concepts like BigTable and MapReduce morphed from internal tools of web-scale companies into published concepts, and then into open source software and ecosystems. Nearly every enterprise now has a big data project using ideas and tools similar to the big data solutions practiced in web-scale companies.
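For readers who have not worked with these tools, here is a minimal, self-contained sketch of the MapReduce idea using the canonical word-count example, written in plain Python rather than any particular big data framework; the function names are illustrative only.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """Map: emit a (word, 1) pair for every word in one document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by their key (the word)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: collapse each key's list of values into a single count."""
    return {word: sum(counts) for word, counts in grouped.items()}

documents = ["the cloud scales out", "the enterprise adopts the cloud"]
pairs = chain.from_iterable(map_phase(doc) for doc in documents)
print(reduce_phase(shuffle(pairs)))  # {'the': 3, 'cloud': 2, 'scales': 1, ...}
```

Frameworks such as Hadoop and Spark apply the same map, shuffle, and reduce steps, but distribute them across many machines and handle failures along the way.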
However, at the infrastructure management layer, the adoption of web-scale tools and processes has been the slowest. Interestingly, even among the web-scale companies themselves, the infrastructure management tools have been the last ones to be released and discussed. For example, MapReduce, BigTable, Spanner, Cassandra, DynamoDB, and the like were all discussed publicly by Google, Facebook, and Amazon first. Google and Facebook also discussed how they build servers out of commodity hardware and promoted open compute. Infrastructure management tools such as Omega, Borg, Andromeda, and Tupperware were usually the last ones talked about publicly.