With roughly 2.5 quintillion bytes of data produced around the world every single day and the amount of stored data doubling every two years, it's easy to see why data centers are growing in importance. It's also clear why increasing efficiency (in terms of both systems and staffing) and safeguarding uptime (the proportion of time that systems are running smoothly and available) are two of their highest priorities.
Data centers have traditionally been "hardware-centric" - focused on, and reliant on, physical equipment. That hardware has very often been made by vendors who charge heavily for the initial purchase (especially for custom-built equipment), for maintenance, and for upgrades. This is not only financially expensive but also comes at the cost of flexibility and agility in a rapidly changing business landscape.
Thankfully, all major services in a data center can be virtualized. Pioneered by VMware, the Software-Defined Data Center (SDDC) extends virtualization beyond compute (i.e., servers) to the network and storage as well. All data center resources and services become software-defined.
Expensive vendor-specific hardware is replaced with affordable, off-the-shelf, industry-standard hardware. Because virtualized systems can be copied and saved, they can easily be reproduced in the event of a system failure. And with automation, this reproduction can be almost immediate, meaning less downtime.
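To make the recovery idea concrete, here is a minimal Python sketch of automated reproduction from a saved image. The VirtualMachine class, the template name, and the recovery logic are illustrative assumptions, not any real hypervisor's API.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    template: str        # saved golden image the VM can be rebuilt from
    healthy: bool = True

def recover(vm: VirtualMachine) -> VirtualMachine:
    """Rebuild a failed VM from its saved template."""
    print(f"{vm.name} failed; redeploying from template '{vm.template}'")
    # In a real SDDC the management layer would clone the template and
    # reattach storage and network; here we simply return a fresh copy.
    return VirtualMachine(name=vm.name, template=vm.template, healthy=True)

def monitor(vms: list) -> None:
    """Replace any unhealthy VM automatically - no manual rebuild."""
    for i, vm in enumerate(vms):
        if not vm.healthy:
            vms[i] = recover(vm)

web = VirtualMachine("web-01", template="web-golden-image")
web.healthy = False                # simulate a failure
monitor([web])
```

Because the "system" is just software plus a saved image, the rebuild step can be triggered the moment a failure is detected, which is where the reduction in downtime comes from.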
In the software-defined data center, the hypervisor is the controller. It pools together hardware resources which can be allocated precisely when and where they're most needed.
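The sketch below illustrates that pooling-and-allocation idea in Python. The ResourcePool class, its methods, and the capacity figures are invented for illustration; they are not any vendor's API.

```python
class ResourcePool:
    """Aggregate the capacity of many physical hosts into one pool."""

    def __init__(self):
        self.cpu_ghz = 0.0
        self.mem_gb = 0.0

    def add_host(self, cpu_ghz: float, mem_gb: float) -> None:
        """Contribute a physical host's capacity to the pool."""
        self.cpu_ghz += cpu_ghz
        self.mem_gb += mem_gb

    def allocate(self, name: str, cpu_ghz: float, mem_gb: float) -> bool:
        """Carve out capacity for a workload if the pool can satisfy it."""
        if cpu_ghz <= self.cpu_ghz and mem_gb <= self.mem_gb:
            self.cpu_ghz -= cpu_ghz
            self.mem_gb -= mem_gb
            print(f"Placed {name}: {cpu_ghz} GHz, {mem_gb} GB")
            return True
        print(f"Not enough capacity for {name}")
        return False

pool = ResourcePool()
pool.add_host(cpu_ghz=32.0, mem_gb=256.0)   # two commodity servers
pool.add_host(cpu_ghz=32.0, mem_gb=256.0)
pool.allocate("analytics-vm", cpu_ghz=16.0, mem_gb=128.0)
pool.allocate("web-vm", cpu_ghz=8.0, mem_gb=32.0)
```

The point is that workloads are placed against the pool's total capacity, not tied to any single box, so spare capacity anywhere in the data center can be used wherever demand appears.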
Management software that uses pre-defined policies vastly simplifies SDDC operations. All applications - wherever they're located - can be centrally monitored and managed. Different kinds of workloads (VMs or containers, for example) can be set up, run, and managed in different kinds of environments - physical, virtual, or cloud - using the same management software. And with automation, far fewer people are needed to do this.
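As a rough illustration of policy-driven provisioning, the following Python sketch applies one pre-defined policy to any workload, wherever it is going to run. The policy fields, workload kinds, and environment names are assumptions made for the example, not a real management product's schema.

```python
# One policy file drives every workload of a given kind, whether it ends up
# as a VM or a container, on-premises or in the cloud.
POLICY = {
    "web": {"cpu_ghz": 4, "mem_gb": 8, "environment": "cloud"},
    "db":  {"cpu_ghz": 16, "mem_gb": 64, "environment": "on-prem"},
}

def provision(workload: str, kind: str) -> dict:
    """Build a workload description from its pre-defined policy."""
    spec = dict(POLICY[kind])
    spec["name"] = workload
    return spec

print(provision("shop-frontend", "web"))
print(provision("orders-db", "db"))
```

Because the policy is defined once and applied automatically, operators manage the policy rather than each individual system, which is why far fewer people are needed.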
Physical Data Centers
Every bit of information that you access on a device is the result of a transfer of data between where the information is processed and stored, and the device to which the data is sent. Whether you're searching for a good restaurant or the best price on a textbook, the information you find comes from a website that hosts its content in a data center. Today, most people rely on the internet to get the majority of their information about the world around them, and most companies rely on it to reach a good proportion of their customers. This is why efficiency and agility in data centers are so vital.
Data centers are often presumed to be large warehouse-sized structures owned by a large corporation or a government, but they can also be set up on-site by small businesses themselves. These data centers house computer systems, called servers, that share and process data for clients - such as a smartphone user or a business website - far faster than a regular computer could.
Data center infrastructure consists of three main components: compute systems (a server or host), storage devices, and networks. In a physical data center, this will all be hardware, and massive amounts of it are needed for all the data currently in circulation. It was estimated in 2016, for example, that Google had 2.5 million servers at the time.
Effective management means monitoring data availability, capacity, and performance, and providing robust data security (as well as power management, effective cooling, and physical security measures).
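A hedged sketch of the kind of availability, capacity, and performance check described above is shown below; the metric names and thresholds are made up for illustration.

```python
def check_health(metrics: dict) -> list:
    """Flag availability, capacity, and performance problems."""
    alerts = []
    if metrics["uptime_pct"] < 99.9:
        alerts.append("availability below target")
    if metrics["storage_used_pct"] > 85:
        alerts.append("storage capacity running low")
    if metrics["avg_latency_ms"] > 20:
        alerts.append("performance degraded")
    return alerts

# Example reading from a single monitoring interval.
print(check_health({"uptime_pct": 99.95, "storage_used_pct": 90, "avg_latency_ms": 12}))
```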
As mentioned in the last section, physical data centers are inflexible. They’re also costly to maintain with multiple manual processes and disconnected operations. Even with compute and storage virtualization, applications are still linked to the physical network infrastructure. Without network virtualization, data center operations will remain very manual, and therefore slow and expensive.