Exciting Times …

We live in interesting times, with technological innovation moving at a much faster pace than ever before. The drivers are the growing need to make things faster, the extreme amounts of data, simplicity, and cost!

In the past few years, many software-based solutions have been developed that leverage commodity hardware to deliver very cost-effective, easy-to-use infrastructure. While classic workloads (databases – OLTP, OLAP – Exchange, etc.) are still managed in the enterprise data center, new workloads (object stores, NoSQL, etc.) were quickly adopted. Newly developed analytics solutions provide deeper insight and better decisions from rapidly and massively growing data (Big Data). Hypervisors from different vendors have dramatically simplified the handling of different workloads and maximized hardware utilization. Public cloud providers (Amazon, Azure, etc.) and private/converged cloud providers (VCE, Nutanix, etc.) develop tightly integrated hypervisor and management software, and scale-out software distributions install and run on off-the-shelf hardware with a few clicks, greatly simplifying the work of data center administrators. The software-defined data center is no longer just a buzzword; it is happening now. Users build their own infrastructure – choosing server, switch, storage, and software – or turn to public or private clouds where the resources are already integrated and ready to use.

These changes make this an exciting time for everyone in the data center.

But new breakthrough innovations in hardware are on the way …

While the first wave of software-driven innovation built around "today's" commodity hardware continues and matures, a fundamental shift has begun in the underlying hardware technologies. These new hardware technologies are quite disruptive. As they move from mere ideas to real products, a new wave of software innovation is inevitable. This new hardware not only shows early signs of huge benefits for today's applications, but also opens up new uses. There is a great deal of excitement around persistent memory (3D XPoint, etc.), arriving low-latency interconnect products/solutions (RoCE, etc.), low-overhead container technology, and newly recognized roles for FPGAs/GPUs. All of these technologies serve the same purpose: accelerating workloads more cost-effectively.

So … what does this mean for software, and what solution opportunities does it create?

As most of these hardware components enter the ecosystem, they also expose gaps in the software stack. The software stack must adapt, and be able to combine these new components, to deliver dramatic improvements.

Let's look at an example with two of these hardware innovations and the shortcomings in today's software stack that prevent their full exploitation. The new "persistent memory" and "low-latency networking" technologies promise to make the following components available:

  • High-capacity persistent memory (for storage) with ~1 μsec latency
  • Network interconnect with ~1 μsec latency

This is orders of magnitude better than the combined latencies (100s of μsec) that exist today across rack-level components. So imagine the effect of extremely efficient access to persistent data both "within" and "across" compute nodes: a remote persistent access could complete in roughly 2 μsec (~1 μsec for the network plus ~1 μsec for the media) instead of the hundreds of microseconds it takes today. Very disruptive! These capabilities can accelerate most of today's workloads (5X/10X/20X acceleration?), whether single-threaded (queue depth 1) or multi-threaded (higher queue depths). This means a rack built with these capabilities can run far more workloads (and run them faster) than an equivalent footprint today, with significant impact on business, power, real estate, and so on. But that's not all. The new storage access models (persistent memory and low-latency networking) also promise to dramatically improve and simplify programming for a significant class of applications.

These innovations, when they arrive in the data center, will have a greater impact on workloads than all-flash arrays did!

However, today's software stack is not ready to actually take advantage of the upcoming disruptive hardware. The current system software stack (the IO path and the data path) masks the benefits offered by these technologies. A research publication from the Georgia Institute of Technology ("Systems and Applications for Persistent Memory") notes that traditional storage stacks assume storage lives in a separate address space, work through a block-device abstraction, and rely on intermediate layers such as page caching to stage data; when using PM (persistent memory), such a layered design results in unnecessary copying and translation in software, and eliminating these overheads – avoiding page caching and the block-layer abstraction – is what lets software exploit the full potential of PM.
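To make that concrete, here is a minimal sketch (my own illustration, not code from the paper) of the shortened path: a file on a DAX-capable persistent-memory filesystem is mapped into the address space and updated with ordinary CPU loads and stores, with no read()/write() calls, no page-cache copy, and no block I/O request on the hot path. The mount point /mnt/pmem0 and the region size are hypothetical.

```c
/*
 * Illustrative sketch: direct load/store access to a file on a
 * DAX-mounted persistent-memory filesystem, bypassing the page cache
 * and block layer for the data path. Path and size are made up.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

int main(void)
{
    /* Hypothetical file on a filesystem mounted with -o dax */
    int fd = open("/mnt/pmem0/example.dat", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the file; on a DAX mount this maps the media directly,
     * with no page-cache copy in between. */
    char *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Update data with ordinary CPU stores: no write() syscall and
     * no block I/O request on the hot path. */
    strcpy(buf, "hello, persistent memory");

    /* Make the update durable. msync() works everywhere; on newer
     * kernels a MAP_SYNC mapping allows user-space cache-flush
     * instructions instead. */
    if (msync(buf, REGION_SIZE, MS_SYNC) != 0) { perror("msync"); return 1; }

    printf("stored: %s\n", buf);
    munmap(buf, REGION_SIZE);
    close(fd);
    return 0;
}
```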

These hardware components will become "commodity" items at some point. But solving the software stack (especially the IO path and data path) remains a significant opportunity. As long as these components and the software are not available in a consumable, general-purpose product format, innovating to deliver these capabilities as an integrated product is a tremendous opportunity. Someone has to step back and find a way to turn these discrete but related innovations into a usable "finished product".

Therefore, there is a need for a user-consumable product – one that integrates these "new" components with the appropriate software-stack modifications – to deliver their benefits transparently to "existing" workloads and to enable the "newer" workloads.

Well … quite a bit of research and effort is already underway …

A number of open-source initiatives are in progress, and many companies are working together to create standard interfaces and demonstrate the benefits for different workloads. A few of them, along with the possible solutions and workload transitions they discuss, are listed below.

Persistent Memory Programming Model

  • http://pmem.io
  • "Computer applications have been organizing data between two levels for years: memory and storage, we believe that emerging persistent memory technologies will introduce the third level. ) such as volatile memory, processor load, and storage instructions, but the contents will retain power outages, such as storage. "

Georgia Institute of Technology

  • "Systems and Applications for Persistent Memory"
  • "Emerging non-volatile (or persistent) memories bridge the performance and capacity gap between memory and storage, thus introducing a new tier. To exploit the full potential of future hybrid memory systems, we need to develop new system and application mechanisms that allow optimal use of PM both as fast storage and as scalable, low-cost (but slower) memory."

SNIA

So … what products/solutions and markets are we talking about?

Momentum and recognition are building around these innovations, their possibilities, and the expectation that they will become commodities in the future. Data center ecosystems take time to adopt change unless the change improves existing architectures transparently and dramatically, and lends itself to easy experimentation.

Accordingly, to shake things up and move them along faster, a solution/product that combines the innovative components with the right software stack is a very attractive option – one where the solution:

  • Understands and accelerates existing workloads seamlessly and transparently
  • Facilitates experimentation and the introduction of new workloads by supporting the new open standards on the platform

I'm pretty sure there are many like-minded people already working on products that aim at similar goals. In my next article, I will cover some of the additional hardware innovations, their effects, why they should become part of the overall solution, and how they can affect today's converged solutions.

Source: Vikas Ratna
