Dell Precision at Computex – How Innovation Enables Business Evolution

By Charles King, Pund-IT, Inc.  June 12, 2019

A common view of evolutionary process imagines continual upward motion—graphics of “the ascent of man” depicting the development of early hominids into modern humans are a classic example. But in the IT industry, computational evolution, or at least the spark of evolution, can be said to move both upward and downward.

That’s due in large part to the ongoing efforts among PC, workstation and server vendors to improve their products with the latest, greatest CPUs, GPUs, memory, storage, networking and display technologies. Those enhancements result in high-end solutions designed for enterprises and other well-funded organizations. But at the same time, they can also be used to develop products suitable for individuals and companies with more modest budgets, enabling them to continue their own evolutionary journeys.

The mobile, desktop and rack-enabled Precision workstations recently introduced by Dell Technologies at Computex offer great examples of this process and its potential benefits. Let’s dig into the details.

How Dell Technologies OEM & IoT Division is Addressing Clients’ Current and Future Needs

By Charles King, Pund-IT, Inc.  June 5, 2019

Original equipment manufacturers (OEMs) have been so central to the tech industry for so long that it’s easy to think you know everything about them. Mainly, people associate OEM functions and processes with the hardware vendors that make products which run popular commercial operating systems, like Microsoft Windows and Windows Server, along with thousands of business and consumer applications, toolsets, apps and utilities.

That’s a fair if generalized view of the subject, but it ignores the services that many of those same vendors offer to numerous companies, from tiny start-ups to multi-national enterprises. In essence, OEM organizations provide the computing “brains” that power everything from set-top cable boxes to bank ATMs to smart TVs and other home appliances to manufacturing and automation solutions to medical testing and imaging equipment to telco switching systems.

How OEMs contribute to the development, design and manufacturing of these products varies widely. That said, the notable success and continuing growth of Dell Technologies’ OEM organization over the past decade makes it a subject worth examining.

Dell Technologies OEM and IoT in the beginning

Shortly after returning to the company’s CEO position in 2007, Michael Dell prioritized growing the company’s OEM business. Why so? In large part, to leverage and extend Dell’s considerable supply chain capabilities and assets. In addition, as the company invested further in research and development (R&D), Michael Dell recognized that the same new technologies that improved and differentiated the company’s commercial products could offer similar benefits to OEM customers.

The OEM business grew steadily over the next five years but stepped up significantly beginning in 2012, the year Dell appointed Joyce Mullen to take over managing the group. Mullen, previously Dell’s VP and GM of Global Alliances and Software, brought deep experience in global planning, operations, strategy and partner relationships to the table, all factors that helped substantially accelerate the OEM business’s growth.

Two years later, Mullen opened the first of what would eventually become three Dell Internet of Things (IoT) labs in Santa Clara, California. What does IoT have to do with OEM? During the course of growing its OEM engagements, the company discovered that many of its customers were well-positioned to leverage Dell’s edge-of-network solutions, such as the Edge Gateway 5000 Series.

In some cases, such as factory automation offerings and video security systems, customers were already collecting data that could be further analyzed with IoT solutions to enhance performance and service offerings. In addition, Dell recognized that supporting OEM customers’ unique intellectual property (IP) and vertical industry expertise for IoT use cases would open commercial opportunities that were too specialized for Tier-1 vendors to consider exploring.

That strategy further expanded the OEM and IoT organization’s engagements, and in November 2017, Mullen was named president of Dell’s Global Channel, OEM and IoT. The following March, the company named Bryan E. Jones, a longtime Dell and EMC marketing and sales executive, to become SVP and GM of the OEM and IoT Solutions division.

Dell Technologies OEM and IoT today

Recently, Jones hosted an industry analyst briefing to discuss the state of his organization. So how is the Dell Technologies OEM and IoT business doing? Overall, darned good. Since 1998, the division has completed over 8,000 product designs that resulted in products available in more than 70 countries. Jones also discussed how the OEM and IoT group is contributing to Dell’s overall business, noting that in FY 2019, the division delivered $5B in orders, or roughly 5% of Dell Technologies’ overall revenues.

That’s a substantial piece of business by any measure. Jones also discussed the group’s robust 36% YoY growth rate, which is running at a 23% premium to the industry average for OEM vendors. Given those points, it’s hardly a wonder that IDC and other research companies have noted Dell’s leadership in global OEM engagements. If the annual market for OEM/IoT grows to the $40B-$50B that the company’s internal research estimates, Dell is well positioned to continue prospering.

That’s especially true given how Dell Technologies OEM and IoT is transforming itself and its portfolio in order to better serve existing and prospective customers. For example, by leveraging Dell’s portfolio of Tier-1 solutions and its global manufacturing facilities, OEM customers can move quickly, embrace bold designs and scale production to meet growing demand and sales.

Jones also detailed how the company is helping customers adopt new technologies to improve their own products and services, including Dell hyperconverged platforms and software-defined solutions, VMware virtualization and hybrid cloud offerings, RSA security and Pivotal cloud-native development tools and applications. Working closely with Dell’s dedicated OEM engineering team enables customers to quickly overcome complexity problems and other impediments.

Those same team members can also customize underlying platforms as necessary and help accelerate product delivery. In addition, Dell’s supply chain and production capabilities enable customers to effectively reach new audiences that they may never have considered. At one level, that includes replicating and delivering solutions in global markets. But Dell also has the means to build pre-certified, trusted solutions in markets where local manufacturing is required or offers a competitive advantage.

Finally, the company’s OEM and IoT organization can provide a conduit that enables customers to leverage Dell to support and maintain their solutions in the field. That’s obviously invaluable for smaller companies without the wherewithal to offer such services, but it can vastly simplify life and business for larger companies, too.

Final analysis

In a separate discussion I had with Bryan Jones at Dell Technologies World 2019, he noted the larger value that his organization offers to customers. It isn’t simply a matter of executing successful manufacturing engagements. Instead, Dell Technologies OEM and IoT can help customers make the right decisions about the solutions they use today and open their eyes to the opportunities presented by next generation technologies and tools.

The key, Jones said, was to help customers understand the connections between their aspirational goals and commercial market opportunities. That may sound like feel-good hype, but consider the source: While IT vendors of every stripe love to highlight their entrepreneurial qualities, Dell Technologies has lived the dream – growing, literally, from a visionary kid building custom PCs in his dorm room to become a near-$100B company with 140,000+ employees in 35 years.

Obviously, not every customer will achieve that level of success. However, Dell Technologies’ OEM and IoT division can provide the solutions, assets, services and advice that customers need to achieve their aspirational goals and successfully bring their products to global markets.

© 2019 Pund-IT, Inc. All rights reserved.

HPE Buys Cray – Seeks to Return to HPC Relevance. Again.

By Charles King, Pund-IT, Inc.  May 29, 2019

The success of corporate acquisitions is never guaranteed. Too many things can go wrong. Deals look better on paper than they do in real life. Planned strategies falter. Hoped-for synergies are DOA. Executive power plays cause long-range damage. Key employees feel unloved and seek greener pastures.

Plus, there’s simple poor planning or execution. An acquiring company may believe that the object of its attentions offers assets, people and capital that can aid its own endeavors. But, after the deal is done, it never properly or fully does what’s necessary to gain the full benefits of the investment. The practical effect is the business equivalent of a sugar rush. A few months of “Whoa, mama! How fast can this sucker go?” ending with a Thelma and Louise-style flameout.

It happens to even highly successful companies, often time and again, with Hewlett Packard Enterprise (HPE) being a notable example. Though its recent purchase of legendary supercomputing leader Cray has been widely lauded, the history of HPE’s acquisitions suggests a more cautious approach is warranted, especially when it comes to supercomputing. Let’s consider that more closely.

Lenovo Accelerate 2019 – Transformation Comes from Within

By Charles King, Pund-IT, Inc.  May 21, 2019

“Transformation” is a commonplace concept in the tech industry, and for good reason. Since the very beginning of computing, servers, personal computers, storage and networking systems and other technology products and services have been employed to fundamentally alter the ways that people live and work. Transformational IT tools and solutions have helped organizations achieve goals and successes that would have been unthinkable a generation or two ago.

But while these business transformations are compelling and even inspiring, how exactly does the process work? That subject was in the spotlight at last week’s Lenovo Accelerate 2019 partner conference in Orlando, Florida, and central to the Transform 3.0 event and industry analyst council hosted by its Data Center Group (DCG). Let’s consider what transformation means to the company and its partners, and how it is helping Lenovo DCG shift the competitive balance in numerous markets.

IBM Reinvents the Z Mainframe, Again

By Charles King, Pund-IT, Inc.  May 14, 2019

Reports of the imminent demise of IBM’s Z mainframes, the company’s flagship enterprise system platform, have been floated – only to plummet ignominiously earthward – for over a quarter century or nearly half of the time the mainframe has been commercially available. Such rumors initially arose among IBM’s competitors in the early 1990s when the company was on the ropes, reeling like a punch-drunk boxer past his prime, until Lou Gerstner’s sober management got it back in fighting trim.

You can understand why some vendors would willingly spread garden variety fear, uncertainty and doubt (FUD), attempting to undermine faith in a platform they didn’t have a snowball’s chance in hell of besting. But how and why has IBM proved them, along with countless numbers of doubtful analysts, reporters and industry experts so wrong, so regularly for so long? The answer is fairly simple: Along with being the industry’s most stable, resilient and secure enterprise system, the IBM Z is also more flexible and adaptable than other platforms.

In essence, the reason that the mainframe has thrived for well over a half century is because IBM has reinvented it time and again to support the evolving needs and business requirements of its enterprise customers. That ability to evolve in order to support the evolution of others is clear in the Tailored Fit Pricing for IBM Z offerings that the company announced this week.

Continually evolving the mainframe

So how exactly has IBM altered the mainframe over the years? For the first three decades, the company’s path was fairly conventional. The mainframe, after all, began as a digital complement to the mechanical calculators and other transaction-focused business machines that were central to IBM’s success. Over time, new technologies, including increasingly powerful database and middleware offerings, were used to extend the mainframe’s ability to support and extend emerging business applications.

Then in the mid- to late-1990s, IBM began exploring uncharted territory with its decision to formally and financially support Linux and other open source technologies, beginning with its (then named) zSeries mainframes. The decision was not universally popular—in fact, some IBM board members believed Linux would destroy the mainframe’s value. History proved those naysayers to be as utterly wrong as they were shortsighted.

Other adaptations soon followed, including co-processors for offloading Java workloads (zAAP), database and integrated information workloads such as DB2 (zIIP), and cryptographic processing (the 4767 coprocessor) without measurably impacting the mainframe’s core capabilities. There were also new mainframe form factors, such as lower capacity (and lower priced) “midrange” mainframes (z13s and z14 ZR1), and the introduction of a Z mainframe solution sporting an Intel-based IBM BladeCenter extension (zBX) for enhancing and managing multi-platform environments.

Linux continued to be a major driver for mainframe customers, eventually becoming the operating environment of choice for over half of IBM’s Z workload sales. The company later introduced the LinuxONE mainframe, which is provisioned entirely with and for Linux-based workloads. LinuxONE, in combination with robust new secure service containers, is central to IBM’s blockchain solutions and IBM Cloud’s blockchain services.

The need for new mainframe pricing

These points aside, an area where IBM was somewhat less forward-thinking was how it charged enterprises for using mainframe software. In short, software license costs are based on how much processing a system performs in a given time period.

For years, license costs have been calculated according to a “rolling four-hour average” (R4HA) peak, which set the monthly bill for IBM’s clients. Last year, IBM also began offering flexible “container” pricing for less predictable mainframe use cases, like development/testing efforts and modern applications.
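
The R4HA mechanic described above can be sketched in a few lines. The usage figures here are hypothetical, purely for illustration; the actual calculation IBM performs is more involved, but the shape is the same: average consumption over each four-hour window, then take the month's highest such average.

```python
# Sketch (with made-up data) of a "rolling four-hour average" (R4HA) peak,
# the figure that historically set a monthly mainframe software bill.

def r4ha_peak(hourly_msus):
    """hourly_msus: MSU readings, one per hour, across the billing month."""
    window = 4
    if len(hourly_msus) < window:
        raise ValueError("need at least four hours of data")
    # Average every consecutive four-hour window, then take the maximum.
    averages = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(averages)

# A single one-hour spike lifts the month's peak for every window that
# contains it -- which is why IT staffs worked so hard to smooth usage.
steady = [100] * 41
spiky = [100] * 20 + [400] + [100] * 20
print(r4ha_peak(steady))  # 100.0
print(r4ha_peak(spiky))   # 175.0 -- one spike raises the whole bill
```

The second result illustrates the pain point: one busy hour can dominate the month's charges even if average utilization barely moves.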

Neither of these models was fully satisfactory for mainframe customers because they did not address existing production workloads, which remained R4HA-based. Not surprisingly, clients’ IT staff focused on rigorously managing mainframes to avoid peak R4HA consumption and charges, reducing their availability for critical work and projects. Plus, the monthly billing model made budgeting unpredictable, at best—a critical issue for IT departments under constant pressure to rein in or reduce costs.

Finally, they inhibited customers from fully leveraging their mainframes for modern applications and related commercial opportunities, like mobile payment processing.

Tailored Fit Pricing – Dressed to thrill Z customers

The last thing any vendor wants to do is to become an impediment to customers pursuing new business. So how is IBM addressing these issues? With Z mainframe software pricing that one company executive described as “Two sizes fit most.” They are:

  1. Enterprise Capacity Model – Designed for clients that require operational simplicity and complete cost predictability but who also expect substantial workload growth. Pricing is based on past usage and growth patterns and is priced at a consistent monthly rate. Essentially, this model is discounted full capacity.
  2. Enterprise Consumption Model – Supports highly flexible, container-based (cloud-like) consumption and billing. The client makes monthly license charge (MLC) and million service unit (MSU) baseline commitments, but built-in annual entitlements and reconciliation processes reduce or eliminate seasonal variability issues. The model also includes aggressive (~50% price/performance) pricing for MSUs as application usage grows.
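
To make the Consumption Model's structure concrete, here is a toy calculation. The baseline, rates and reconciliation logic below are assumptions for illustration only, not IBM's actual billing formula; the point is how a committed baseline, discounted growth pricing and annual reconciliation interact.

```python
# Toy consumption-style billing model (hypothetical numbers throughout).

BASELINE_MSUS = 1000          # assumed monthly MSU baseline commitment
BASE_RATE = 10.0              # assumed $ per MSU at baseline
GROWTH_RATE = BASE_RATE * 0.5 # ~50% price/performance on growth MSUs

def monthly_charge(used_msus):
    """Baseline billed at the full rate; growth above it at the discount."""
    growth = max(0, used_msus - BASELINE_MSUS)
    return BASELINE_MSUS * BASE_RATE + growth * GROWTH_RATE

def annual_reconciliation(monthly_usage):
    """Bill the year on total consumption rather than summing each month,
    so a seasonal spike in one month is offset by slack in another."""
    total = sum(monthly_usage)
    annual_baseline = BASELINE_MSUS * 12
    growth = max(0, total - annual_baseline)
    return annual_baseline * BASE_RATE + growth * GROWTH_RATE

print(monthly_charge(1000))                   # 10000.0 -- at baseline
print(monthly_charge(1200))                   # 11000.0 -- growth discounted
print(annual_reconciliation([1000] * 12))     # 120000.0 -- no growth charge
```

Under this sketch, a client that runs hot in December but light in July pays for its actual annual consumption, which is the seasonal-variability benefit the model is after.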

Moreover, IBM is offering considerable flexibility in how its new pricing models are designed and configured. For example, a Consumption Model deployment can support separate containers supporting specific use cases. So, a customer might utilize one container/pricing model for legacy applications leveraging IBM middleware solutions (DB2, IMS, WS MQ) and another for modernized workloads.

IBM noted that it has deployed Tailored Fit Pricing solutions during the past year for 25 Z mainframe customers (16 Consumption Models and 9 Capacity Models). Though customers chose the new pricing offerings for a variety of technical, budgeting, financial and market reasons, most saw benefits in simplified operations, greater flexibility in dealing with usage spikes and easier provisioning of resources for dev ops. Moreover, some reported notably improved peak performance and enhanced planning capabilities.

Final analysis

With its new Tailored Fit Pricing models, IBM is clearly onto something good. A notable point about both models is that discounted growth pricing is offered on all workloads – whether they be 40-year old Assembler programs or 4-day old JavaScript apps. This is in contrast to previous models which primarily rewarded only brand-new applications with growth pricing. By thinking outside the Big Iron box, the company has substantially eased the pain for its largest clients’ biggest mainframe-related headaches. That’s important from a customer satisfaction perspective but it also impacts the competitive and business value of IBM’s flagship enterprise computing platform.

By making access to Z mainframes more flexible and “cloud-like,” IBM is making it less likely that customers will consider shifting Z workloads to other systems and environments. As cloud providers become increasingly able to support mission critical applications, that’s a big deal. The new pricing models should also help companies derive greater value from their IT organizations. That’s great financially but it is also a critical point considering the shortages in and demand for workers with enterprise IT skills.

Overall, the Enterprise Capacity Model and Enterprise Consumption Model offer more proof of IBM’s ability and willingness to adapt the Z mainframe to support the changing requirements of its clients. Sometimes that results in substantial new and improved technologies, such as those we’ve seen time and again in emerging generations of IBM Z. Other times, like now, it simply involves IBM listening to its customers and then determining how the mainframe can be evolved to meet their needs.

© 2019 Pund-IT, Inc. All rights reserved.

Dell Technologies World 2019 – Putting Customers’ Workplaces #1

By Charles King, Pund-IT, Inc.  May 8, 2019

There was a time when most personal computers (PCs) for business had a specifically utilitarian look and feel: clunky, durable, built for the long haul—not for speed. It was more about practicality than a dedication to any specific design aesthetic. While consumers tended to replace their PCs every three to four years, it wasn’t unusual to see commercial organizations squeezing four to five or even six years out of workplace PCs.

Things began to change in the mid-2000s with the advent of Bring Your Own Device (BYOD) trends among younger workers who preferred highly mobile solutions to tethered office PCs and phones. Their employers and IT vendors followed close behind with generations of ever more powerful, sleek client devices, including notebooks, tablets and smart phones. But it would be a mistake to think that client devices alone define workplace computing. Equally or even more important are the related deployment and PC lifecycle management (PCLM) services vendors offer commercial customers.

Last week at Dell Technologies World 2019, the company showed off the 10th generation of its venerable Latitude mobile PCs for business. In addition, it introduced the new Dell Technologies Unified Workspace, a suite of services that it offers businesses for deploying, managing, maintaining and securing client devices of every kind. Let’s take a closer look at what Dell announced and what it means for the company’s tens of thousands of commercial customers.

Dell Latitude – a longitudinal view

With over two decades of successful mobile PCs under its belt, Dell has a deep understanding of what businesses need from commercial notebooks. This longitudinal viewpoint has allowed the company both to respond to and to anticipate customers’ requirements as it develops new Latitude notebooks, including those announced last week. So, what are customers looking for in mobile PCs?

  • Ever lighter notebooks that still deliver maximum performance in a range of form factors and price points
  • Longer battery life, faster charging times and wireless features that enable workers to stay fully productive wherever they happen to be
  • Integrated security features and services that keep notebooks and business data secure from increasing numbers and kinds of threats
  • Durable and functional, yet stylish designs

In essence, organizations want mobile PCs to deliver maximum business value, enabling employees to be fully productive. But they also want the latest, greatest features in terms of performance, security and good looks.

Dell’s response: Not a problem.

Dell’s 10th gen Latitude features and models

The new Dell Latitudes offer numerous new features and technology options. All support the latest 8th gen Intel Core processors, as well as optional Intel Core vPro chips and Intel Wi-Fi 6 (Gig+) solutions. The new systems also feature Dell’s ExpressCharge (providing up to 80% battery charge in one hour), ExpressCharge Boost (up to 35% charge in 20 minutes) and ExpressConnect (intelligently chooses and connects to the strongest available WiFi network) technologies.

Security features include optional (for some models) fingerprint readers built into the power buttons and Windows Hello-capable IR cameras for biometric authentication. Many systems can be equipped with Dell SafeScreen (which allows more privacy in public settings), new camera privacy shutters, FIPS 201 contacted Smart Card readers or contactless Smart Card readers with SafeID to protect user credentials. New Latitudes also support Dell’s new SafeBIOS utility which verifies systems’ firmware integrity via a cloud-based service.

Finally, the 10th gen Latitude portfolio is optimized for Dell’s new Unified Workspace service. More on that in a bit.

The new systems include:

  • Latitude 7000 series – These include the Latitude 7400 2-in-1 Dell announced at CES in January, new 13- and 14-inch Latitude notebooks and the Latitude 7200, a 12-inch detachable 2-in-1. All can be equipped with up to 32GB of memory. Select configurations deliver up to 20 hours of battery run time—up to 25% more than previous systems. The 7000 series also offers the industry’s first narrow border 4X4, CAT16 cellular antenna for gigabit LTE connectivity.
  • Latitude 5000 series – According to Dell, its new 5000 series offers the smallest mainstream business notebooks in its class. Systems are available in 13-, 14- and 15-inch configurations and offer up to 20 hours of battery run time. Available displays include narrow border HD, Full HD and touch screen configurations. Dell is also introducing the new Latitude 5300 2-in-1 which features a 360° hinge and a Corning Gorilla Glass touch screen with anti-glare coating. The 5300 can be configured with up to 32GB of memory and up to 1TB of storage.
  • The Latitude 3000 series – These are entry-level notebooks with enterprise capabilities. The 3000 series is available in updated 14- and 15-inch models, along with a new 13-inch solution that Dell calls “the world’s smallest and lightest essential business notebook.”
  • Three new commercial modular docking stations that offer upgradeable connectivity options, including Thunderbolt 3, dual USB-C or single USB-C. The new solutions support Dell’s new ExpressCharge and ExpressCharge Boost technologies. The upgradable power and connectivity options are designed to enable customers to adapt to and support the changing needs of their workforce for several generations of Latitude systems.

Dell Technologies Unified Workspace

Dell’s Unified Workspace offering integrates solutions from across Dell’s device and service portfolio, as well as offerings from VMware, Secureworks and CrowdStrike, to provide workers with highly personalized, secure endpoint devices and services while also simplifying device lifecycle processes. In other words, Dell’s new offering is designed to take traditional PCLM processes to an entirely new level.

Unified Workspace qualifies as a significant expansion and enhancement of the Provisioning for VMware Workspace ONE services that Dell announced last fall. That solution enabled customers to have Dell notebooks, desktop PCs and workstations preconfigured at the factory with specific applications and settings so that systems are ready to be put to work as soon as they are unboxed, with minimal effort required by a company’s IT staff.

How is Unified Workspace different? It starts in the planning stage with Dell analytics providing insights on how individual employees are using PCs to help customers choose the right systems and applications. After PCs are deployed, an array of new Dell solutions can be implemented to help secure them and the customers’ data resources.

These include Dell SafeBIOS, an off-host BIOS verification utility integrated with VMware Workspace ONE, Secureworks and CrowdStrike (and also available as a standalone download). The solution stores untampered BIOS information away from devices so that security operations teams can compare settings and quickly detect and defend against BIOS attacks.

Dell SafeBIOS also complements Dell SafeGuard and Response, a comprehensive threat management and response portfolio built on Secureworks’ threat analytics engine and integrated with CrowdStrike’s endpoint protection platform. In addition, customers can choose Dell’s ProSupport Plus with SupportAssist to quickly detect and resolve endpoint problems and component failures.

Finally, Dell Unified Workspace deployment, management, security and support solutions can be extended across and integrated with business environments regardless of the devices, operating systems and cloud providers that customers prefer. Just as importantly, customers can freely choose which Unified Workspace elements and services they prefer, as well as when and how to implement them.

Final analysis

So, what are we to make of all this? There are several points worth considering. First, Dell’s Latitude announcements demonstrate how fully its commercial client organization continues to develop and drive innovations that matter deeply to the company’s business customers. As workers and workplaces evolve, vendors need to provide PC offerings that help their commercial customers adapt to and profit from those changes.

Dell’s new solutions clearly fit into this mold with notebooks that are considerably more powerful and more power-efficient than the previous nine generations of Latitude systems. With three levels of offerings—the 7000, 5000 and 3000 series—the company has produced a unified portfolio of mobile PC endpoints and docking solutions that can address, support and fulfill virtually any business process or challenge.

An associated but little discussed issue is the degree to which Dell’s consumer PC division has become an engine of innovation that also drives the company’s commercial PCs. How so?

The aesthetic and materials innovations that have been central to the XPS line’s notable success have steadily found their way into Dell’s Latitude and Inspiron solutions, resulting in client portfolios that reflect broader trends in business and consumer PCs, and resonate with the people who use them at work and at home. I hope to write more on this topic at a future date.

Finally, Dell’s new Unified Workspace shows how the company is driving workplace innovations whose impact extends well beyond individual endpoints. By vastly simplifying PC lifecycle management, personalizing worker endpoints and ensuring that PCs and the data they contain are secured against external attack, the company is helping its business customers efficiently address and effectively manage their top-of-mind issues and concerns.

Moreover, the ability of Dell’s Unified Workspace to agnostically support heterogeneous devices and cloud platforms demonstrates the depth of Dell’s understanding of modern work environments and its dedication to putting its customers and their workplaces #1. That customer-focused approach is central to Dell’s new Latitude and Unified Workspace solutions and was a core message that reverberated throughout Dell Technologies World 2019.

© 2019 Pund-IT, Inc. All rights reserved.

Virtustream and the Value of Truly Trusted Enterprise-Class Cloud

By Charles King, Pund-IT, Inc.  May 8, 2019

At Dell Technologies World 2019 last week, Virtustream—Dell Technologies’ enterprise-class cloud business—announced a pair of new initiatives worthy of consideration. The first was an expanded collaboration between Virtustream and Equinix, a provider of private networking solutions for directly connecting enterprise customers with cloud computing platforms.

The second was a major update of the Virtustream Healthcare Cloud designed to greatly simplify the planning, deployment and migration of electronic healthcare records (EHR) systems hosted in the cloud. However, both announcements reflect more substantial issues: what constitutes “enterprise-class” cloud computing and how does it differ from commonplace cloud services? Moreover, why or when do organizations need these services?

Let’s consider how Virtustream’s announcements reflect on these larger points.

The case for enterprise-class cloud

So exactly what does “enterprise-class” mean in terms of cloud? In large part, it concerns the ability of a platform or vendor to support mission-critical workloads and information. That is, the applications and data without which large enterprises would be dead in the water.

Numerous internal issues can negatively impact mission-critical performance, ranging from simple operator errors or inexperience to system errors and network glitches to faulty or postponed patches. Then toss in external factors, including weather events, seasonal or unusual network traffic, natural and manmade disasters, criminal and governmental cyber-attacks and Murphy’s Law events that are impossible to anticipate.

Plus, let’s not forget that applications and data in highly regulated industries, like healthcare and finance, and in some global markets must follow compliance rules that load further complexities onto already-strained mission-critical processes.

Enterprise system vendors know how to build solutions with the reliability, availability and serviceability (RAS) and security features necessary to assure the five 9s (99.999%) or more of system availability that enterprises typically demand in quality of service (QoS) agreements. If they didn’t, they would have gone out of business long ago.
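
The five 9s figure translates into a surprisingly small annual downtime budget, which a quick back-of-the-envelope calculation makes concrete:

```python
# How much downtime per year a given availability target permits.

def max_downtime_minutes(availability, hours_per_year=365 * 24):
    """Return the annual downtime budget, in minutes, for an availability
    target expressed as a fraction (e.g. 0.99999 for five 9s)."""
    return hours_per_year * 60 * (1 - availability)

print(round(max_downtime_minutes(0.999), 1))    # three 9s: ~525.6 min/year
print(round(max_downtime_minutes(0.99999), 1))  # five 9s:  ~5.3 min/year
```

In other words, a five-9s commitment leaves barely five minutes a year for every outage, patch window and glitch combined, which is why these guarantees separate enterprise-class platforms from commonplace cloud services.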

Cloud platforms and providers are in a somewhat similar, somewhat different position. Though most work with specific work groups or organizations within enterprises and some claim to be able to support enterprise-class functionality, their customers aren’t exactly rushing to deploy mission-critical workloads on those public cloud platforms. In fact, it’s arguable that the evolution of hybrid- and multi-cloud is at least partly due to businesses’ distrust of public clouds and their preference for limiting the data and applications they employ cloud platforms to support.

That said, a handful of cloud vendors do develop and deliver enterprise-class, mission-critical services and solutions and have the history and experience to prove it. Virtustream is one of those select vendors and numerous enterprises have entrusted the company with their mission-critical applications.

Those include NIBCO, which wanted to modernize its IT infrastructure and redirect resources to projects that added value to the business. That was a challenging proposition given the time and resources allocated to data center operations and SAP support. After collaborating with SAP experts at Virtustream, NIBCO chose Virtustream Enterprise Cloud as its solution. By turning over responsibility for day-to-day SAP operations to Virtustream, NIBCO was able to achieve greater flexibility, agility and speed of execution.

Virtustream and Equinix

So, what’s new about Virtustream and Equinix? While the companies have worked together in the past, their new expanded collaboration enables Virtustream nodes in the U.S. and Europe to directly connect to the Equinix Cloud Exchange (ECX) Fabric, thus increasing functionality, automation, speed-to-deployment and other service options for Virtustream customers. The new enhancements cover all workloads, including mission-critical applications supporting sensitive data, such as financial details, and personal information, like healthcare records.

The expanded collaboration includes additional private connectivity options building on existing IPsec VPN, MPLS, and AT&T NetBond (in selected markets) solutions, reducing complexity, simplifying direct connectivity and supporting vendor management enhancements. The portfolio supports secure, scalable and reliable connections, allowing Virtustream and Equinix to offer 99.999% availability-based QoS controls and low latency. The companies also say most customers’ time-to-connect can be dramatically reduced. Finally, ECX Fabric provides private and secure connectivity to all major public cloud providers for customers with multi-cloud requirements.

Healthcare Cloud

Virtustream’s major update to its enterprise-class Healthcare Cloud features advanced architecture components that improve the platform’s flexibility and scale. Customers can employ improved automation tools to simplify the deployment and migration of EHR systems hosted in the Virtustream Healthcare Cloud. Virtustream also now supports the use of VMware Horizon for secure and flexible application access. Overall, the update should improve healthcare customers’ business agility and will also enable them to consider and adopt new solutions and services from Dell Technologies.

Final analysis

Like most technology products and processes, enterprise-class computing didn’t arrive in the market fully formed and functional. Instead, what we now consider to be enterprise-class or mission-critical computing evolved in stages as large-scale organizations adopted computational tools and adapted them to their discrete business requirements. As a result, enterprise-class isn’t so much a product or service as it is a statement of a vendor’s capabilities—assuring large customers that the vendor has their back and can support their needs, however sizable and complex, wherever they do business.

Similarly, since enterprises and their businesses are always evolving, there is little rest for the vendors that supply such services, including cloud platform providers. In order to thrive themselves, vendors need to ensure that their customers have access to the technologies, tools and services that best suit their requirements. That’s the central message behind Virtustream’s expanded collaboration with Equinix and the major update to its Healthcare Cloud.

Both will enable Virtustream’s cloud customers to achieve measurably better results than they did before and, in turn, to become measurably better businesses. Those are the central tenets and values of enterprise-class solutions, including those provided by Virtustream.

© 2019 Pund-IT, Inc. All rights reserved.

Dell Technologies World 2019 Highlights “Better Together V2”

By Charles King, Pund-IT, Inc.  May 1, 2019

Corporate acquisitions nearly always float on rising tides of optimism. The soon-to-be-merged businesses and their executives believe their lives, futures and potential will be brighter conjoined than apart, and they spend significant resources to that end. However, there is still considerable work ahead after a deal is done.

Products, divisions, leadership positions great and small all need to be considered, rejiggered and sometimes replaced. Old faces depart, new hires arrive. At the end of what is often a years-long process, the whole is, hopefully, greater than the sum of the separate parts.

But how often is that true? In the case of large-scale tech industry deals, middling results or outright failure is all too often the case. Just look at HP’s Compaq, EDS and Autonomy acquisitions. But there have been some notable successes, including Dell’s 2016 purchase of EMC for $67B, the largest such deal in tech industry history.

The fully mature results of Dell’s effort were on display in Las Vegas this week at Dell Technologies World 2019. Let’s consider what the company has accomplished, where it is going and what that means for its customers, partners and competitors. Continue reading

Enabling AI from the Edge to the Cloud: Gyrfalcon Technologies Offers New IP Licensing Model

By Charles King, Pund-IT, Inc.  April 24, 2019

Artificial Intelligence (AI) is a cause célèbre inside and outside the IT industry, inspiring often heated debate. However, a point that many—especially AI focused vendors—make is that cloud-based computing offers the best model for supporting AI frameworks, like Caffe, PyTorch and TensorFlow, and related machine learning processes.

But is that actually the case?

Gyrfalcon Technology (GTI) would argue that delivering robust AI at far edges of networks and in individual devices is both workable and desirable for many applications and workloads. In fact, the company offers a host of AI inference accelerator chips that can be used for those scenarios, as well as cloud-based server solutions for AI applications.

Now GTI is licensing its proprietary circuitry and intellectual property (IP) for use in System on Chip (SoC) designs. As a result, silicon vendors will be able to enhance and customize their own offerings with GTI’s innovations.

Let’s take a closer look at what Gyrfalcon Technologies is up to.

AI in the cloud

Why do most AI solutions focus on cloud-based approaches and architectures? You could call it an extreme case of “When all you have is a bulldozer, everything looks like a dirt pile” syndrome. The fact is that until fairly recently, the cost of AI far outweighed any practical benefits. That changed with new innovations, including cost-effective technologies like GPUs and FPGAs.

Some of the most intriguing and ambitious AI projects and commercial offerings, like human language processing, were undertaken by cloud vendors and infrastructure owners, including Amazon, Google and IBM, supported on the silicon side by NVIDIA, Intel and other chipmakers. They had the compute and brain power to take on large-scale efforts where data accumulated by edge devices, like smart phone conversations and commands, is relayed to cloud data centers.

There, the data is used for training and enabling AI-based services, such as language translation and transcription, and products like smart home speakers.

Are there any problems with this approach? Absolutely, with data privacy and security leading the charge. AI vendors uniformly claim that they are sensitive to their customers’ concerns about privacy and have tools and mechanisms in place to ensure that data is anonymized and safe. But Facebook, Google and others have been regularly dinged for mishandling or cavalierly maintaining customer data.

Cloud-based AI can also suffer latency issues, especially if network traffic is snarled. That might not be a big deal when you’re asking Alexa to recommend a good restaurant but it’s more problematic if it involves AI-enabled self-driving cars. There’s also the matter of using energy wisely. With the percentage of electricity consumed by data centers continuing to rise globally, building more IT facilities to support occasionally frivolous services seems like a literal waste.

AI at the edge

Gyrfalcon Technologies would argue that while cloud-based AI has an important role, it isn’t needed for every application or use case. Instead of a bulldozer, some jobs require a shovel or even a garden trowel. To that end, GTI offers a range of AI inference accelerator chips that support AI Processing in Memory (APiM) via ultra-small and energy efficient cores running GTI’s Matrix Processing Engine (MPE).

As a result, GTI’s solutions, like its Lightspeeur 2801 AI Accelerator, can deliver 2.8 TOPS while using only 300mW of power. That makes it a great choice for edge-of-network devices, including security cameras and home smart locks. After being set up, chip adaptive training functions allow devices to learn from their surroundings. For example, a smart lock might use arrival and departure patterns to identify the residents of a home.

Enabling AI at the edge means that devices will be able to perform many functions autonomously or, if cloud connectivity is required, will be capable of vastly reducing the amount of data that needs to be transmitted. That lowers the costs, complexity and network traffic of AI implementations.

For cloud-based applications, GTI offers the Lightspeeur 2803 AI Accelerator, which is used in concert with GTI’s GAINBOARD 2803 PCIe card. For example, a single GAINBOARD card delivers up to 270 TOPS while drawing 28 Watts, or 9.6 TOPS/Watt—roughly 3X the efficiency of competitors’ solutions.
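The efficiency figures cited above are simple power ratios, and it’s worth checking that the cited parts are in the same efficiency class. A quick sketch using only the numbers quoted in this article (the helper function is illustrative, not GTI tooling):

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Compute-per-power efficiency: trillions of operations per second per Watt."""
    return tops / watts

# Figures quoted above for GTI's parts.
lightspeeur_2801 = tops_per_watt(2.8, 0.3)  # 2.8 TOPS at 300 mW -> ~9.3 TOPS/W
gainboard_2803 = tops_per_watt(270, 28)     # 270 TOPS at 28 W   -> ~9.6 TOPS/W

print(f"Lightspeeur 2801: {lightspeeur_2801:.1f} TOPS/W")
print(f"GAINBOARD 2803:   {gainboard_2803:.1f} TOPS/W")
```

Notably, the tiny edge part and the PCIe server card land within a few percent of each other, which is consistent with GTI’s claim that the same core architecture scales from endpoint devices to the data center.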

Final analysis

The IT industry rightfully focuses on the value that innovative technologies and products provide to both consumers and businesses. Such solutions regularly come from massive Tier 1 vendors with decades of experience and billions of dollars in annual R&D funding. But oftentimes, innovative products and approaches are the brainchildren of smaller vendors like Gyrfalcon Technologies that are unawed by conventional wisdom.

With its AI Processing in Memory (APiM) and Matrix Processing Engine (MPE) technologies, GTI has enabled clients, including LG, Fujitsu and Samsung, to reimagine how artificial intelligence can be incorporated into new consumer and business offerings. By licensing its Lightspeeur 2801 and 2803 AI Accelerator circuitry and intellectual property (IP) for use in System on Chip (SoC) designs, GTI is offering existing and future clients remarkable autonomy in determining how AI can best serve their own organizations and their end customers.

© 2019 Pund-IT, Inc. All rights reserved.

IBM Storage Enhances Mid-Market and Channel Solutions

By Charles King, Pund-IT, Inc.  April 17, 2019

Continual product evolution is one of the tech industry’s best and longest-running selling points. It’s the foundational truth underlying technical chestnuts like Moore’s Law, and it provides the subtext for innumerable marketing and promotional campaigns. But an often unaddressed yet valuable point to consider is the top-down way in which this evolution usually proceeds.

Developing new products costs money – lots, in fact, when it comes to business solutions. So not surprisingly, new products are initially designed to address the needs of large enterprises and other organizations that can afford to foot the bill and are willing to pay a premium for the new features, capabilities and benefits those solutions provide.

But eventually – often, fairly quickly – what were once enterprise-specific technologies find their way into more affordable, yet still innovative products designed for smaller companies and the channel/business partners that serve them. These points are clear in the new and updated additions IBM recently made to its Storwize V5000 family of solutions. Continue reading