Artificial Intelligence (AI) is steadily gaining ground in many mainstream applications. In healthcare, AI increases the accuracy of patient diagnoses and speeds the development of new drugs. It enables the digitization and monitoring of supply chains, prevents downtime in manufacturing machinery, and spots defects for quality control faster than any human. AI improves security and the processing of long queues in airports, government services, and other public areas through facial recognition. In financial services AI is used to manage regulatory requirements and detect credit card fraud. AI helps researchers better understand global health issues, food shortages, and weather patterns, and is well on the way to helping us achieve fully autonomous driving. The list goes on; AI is redefining the way we live and do business with breakthrough innovations at breakneck speeds.
Although an effective machine learning model, high performance computing assets, and large data lakes are all important, the key to effective AI is the quality of the dataset and the ability to capture data from wherever it lives. Most organizations underestimate the complexity, expertise, and deep coordination their IT teams need to build a solution that does this. This is where HPE and IIS can help!
Share insights, not data, for improved AI outcomes
Presently, most AI model training occurs at a central location and relies on centralized merged datasets. This approach can be both inefficient and costly due to the need to move large volumes of data to the same source. Further, data privacy and regulatory or compliance requirements that limit data movement or sharing data externally (such as HIPAA in healthcare) can potentially lead to inaccurate and biased models as well as unnecessary duplication of research by multiple entities.
A decentralized solution is needed to train AI models and harness insights at the edge so businesses can make decisions faster, at the point of impact, and yield better experiences and outcomes. Additionally, by sharing only the learnings, not the data, multiple organizations working toward the same goal can collaborate across industries and use their collective intelligence to empower AI for the greater good of all humanity.
Introducing two new breakthrough solutions for scaling AI applications
HPE Swarm Learning is the industry’s first privacy-preserving, decentralized machine learning framework for the edge or distributed sites. It uniquely enables organizations to use distributed data at its source, increasing the dataset size for training. Data scientists can build machine learning models to learn in an equitable way, blending local and global insights while preserving data privacy and ownership.
To ensure that only learnings captured from the edge are shared, and not the data itself, HPE Swarm Learning uses blockchain technology to securely onboard members, dynamically elect a leader, and merge model parameters to provide resilience and security to the swarm network. The solution provides customers with containers that are easily integrated with AI models using the HPE swarm API.
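The merge step at the heart of this approach can be illustrated with a simple sketch. The code below shows a swarm-style parameter merge as a weighted average across peers, in the spirit of federated averaging; the function name, data layout, and weighting scheme are illustrative assumptions, not the HPE Swarm Learning API.

```python
# Illustrative sketch of a swarm-style parameter merge (weighted averaging
# across peers). This is NOT the HPE Swarm Learning API; names and the
# weighting scheme are assumptions for illustration only.

def merge_parameters(local_models, sample_counts):
    """Weighted-average each parameter across peers.

    local_models: list of dicts mapping parameter name -> list of floats
    sample_counts: number of training samples each peer used (the weights)
    """
    total = sum(sample_counts)
    merged = {}
    for name in local_models[0]:
        merged[name] = [0.0] * len(local_models[0][name])
        for model, count in zip(local_models, sample_counts):
            weight = count / total
            for i, value in enumerate(model[name]):
                merged[name][i] += weight * value
    return merged

# Two peers exchange only model parameters (the learnings), never raw data.
peer_a = {"w": [1.0, 2.0]}  # trained on 100 samples
peer_b = {"w": [3.0, 4.0]}  # trained on 300 samples
print(merge_parameters([peer_a, peer_b], [100, 300]))
# weights 0.25 and 0.75 -> {'w': [2.5, 3.5]}
```

The key property the sketch demonstrates is that the merged model reflects every peer's training, while each peer's underlying dataset stays at its source.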
With HPE Swarm Learning customers can freely share AI model insights across their organization and with external industry partners without exposing source data or compromising privacy, while helping remove biases to increase accuracy in their AI models.
HPE Machine Learning Development System is a complete, ready-to-use solution that lets enterprises easily build and train machine learning models at scale while bypassing much of the complexity of adopting AI infrastructure. Purpose-built for AI and supporting the HPE Swarm Learning framework, the new system is an end-to-end solution that integrates a machine learning software platform, compute, accelerators, and networking so enterprises can immediately begin building and training optimized machine learning models efficiently and at scale.
The system features an optimized AI infrastructure using the HPE Apollo 6500 Gen10 Plus system to improve model accuracy with state-of-the-art distributed training, automated hyperparameter optimization, and neural architecture search, techniques that are key to building accurate machine learning models. The HPE Machine Learning Development System can accelerate the typical time from proof of concept (POC) to production from weeks or months to just days.
The HPE Machine Learning Development System is offered as a small building block, with options to scale up. Basic configurations start at 32 NVIDIA GPUs, delivering approximately 90% scaling efficiency for workloads such as Natural Language Processing (NLP) and Computer Vision, and can grow to 256 NVIDIA GPUs for more complex workloads.
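For readers unfamiliar with the metric, "scaling efficiency" is conventionally the fraction of ideal linear speedup achieved as GPUs are added. The short sketch below shows the arithmetic; the throughput figures are made-up illustrative numbers, not HPE benchmark results.

```python
# Hedged sketch: the conventional definition of scaling efficiency.
# The throughput numbers below are invented for illustration; they are
# not HPE benchmark figures.

def scaling_efficiency(throughput_1gpu, throughput_ngpu, n_gpus):
    """Fraction of ideal linear speedup achieved at n_gpus."""
    speedup = throughput_ngpu / throughput_1gpu
    return speedup / n_gpus

# Example: 1 GPU processes 100 samples/s; 32 GPUs process 2,880 samples/s.
# Speedup is 28.8x against an ideal of 32x, i.e. 90% scaling efficiency.
print(scaling_efficiency(100.0, 2880.0, 32))
```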
“Swarmifying” data to empower AI for the greater good
HPE Swarm Learning implemented on the HPE Machine Learning Development System enables a wide range of organizations to access next-era AI and collaborate with other entities to improve insights across all kinds of applications, such as:
- Banking and financial services firms can band together to collectively fight credit card fraud by sharing fraud-related learnings with more than one financial institution at a time without sharing account numbers.
- Manufacturers can benefit from predictive maintenance, collecting data from sensors across multiple manufacturing sites to gain insight into equipment repair needs and remedy issues before machinery fails and causes unwanted downtime.
- Healthcare facilities can derive learnings from medical records, CT and MRI scans, and gene expression data to be shared among hospitals and specialists to identify cancerous cells, improve diagnostics of diseases, and create new therapies while protecting patient information.
HPE & IIS: Pushing the boundaries of modern AI
Swarm learning is a powerful new approach used to securely harness insights at the edge to enable the sharing of learnings to predict anomalies, trends, mutations, and simulations of all kinds without exposing source data. In fact, transformative use cases for AI are emerging every day. From bots that assist humans with more complex tasks to AI-inspired artwork and music, detecting drought and crop diseases in agriculture, and anticipating virus mutations to create preventative medicines, AI is creating new experiences.
The truth is that HPE did not invent AI. But HPE is inventing the tools that make AI better. As an HPE Global Partner of the Year, IIS has the technical expertise to implement solutions for a wide range of HPE products, including the new Swarm Learning solution to capture larger datasets collected from multiple distributed sources, and the powerful Machine Learning Development System to securely transform private data into shareable insights.
To learn more about how IIS can implement the industry’s first privacy-preserving, decentralized machine learning framework for the edge and bring next-era AI to your development efforts, visit us at iistech.com and email us at email@example.com.