We’re Building the Future of Data Infrastructure

Latest Marvell Blog Articles

  • August 27, 2024

    Prinics PicKit M1: Mobile Photo Printing Powered by the Marvell 88PA6120 SoC

    By Justin Sykes, Marketing Manager, Marvell

    Highest performance, lowest power, smallest footprint, most efficient solutions… 

    This is the constant mantra at Marvell when designing our state-of-the-art systems-on-chip (SoCs), regardless of the product category. In this case, that mantra has resulted in the Marvell 88PA6120, an ink, thermal and color laser multifunction printer SoC that Prinics has selected for its newest mobile thermal photo printer, dubbed the “best picture kit in my hand.”

    The new Prinics PicKit M1, a handheld, battery-powered photo printer, is an example of a brand-new fun device. The PicKit M1 is extremely easy to use thanks to the integration of NFC and Wi-Fi Direct. Touch any NFC-enabled phone to the printer and it will launch the intuitive PicKit app and establish a Wi-Fi Direct connection. No NFC? No problem! Simply launch the app manually. From the app you can select any of the pictures on the phone and print. A minute later you have a high-quality 2.1” x 3.4” photo to share with friends. There is nothing like the joy of holding a real photo in your hand!
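    For readers curious how that tap-to-pair flow hangs together, here is a minimal Python sketch of the idea: the NFC tap delivers a small payload that names the companion app and carries the printer’s Wi-Fi Direct credentials. The payload fields, app ID and handler below are illustrative assumptions, not the PicKit app’s actual protocol.

    ```python
    from dataclasses import dataclass

    @dataclass
    class HandoverPayload:
        """Illustrative stand-in for the record an NFC tap could deliver."""
        app_id: str      # tells the phone which companion app to launch
        ssid: str        # printer's Wi-Fi Direct group name
        passphrase: str  # credential for joining the direct connection

    def on_nfc_tap(payload: HandoverPayload) -> None:
        # Hypothetical handler: launch the app, then join the printer's
        # Wi-Fi Direct group using the credentials read from the tap.
        print(f"Launching {payload.app_id} ...")
        print(f"Joining Wi-Fi Direct group {payload.ssid!r}")
        # A real app would hand these credentials to the OS Wi-Fi stack
        # and then stream the selected photo to the printer over the link.

    on_nfc_tap(HandoverPayload("com.example.pickit", "DIRECT-PicKit-M1", "example-psk"))
    ```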

    Whether at the high end, middle or low end of a product class, Marvell’s family of printing SoCs seeks to drive down cost and improve performance for existing and new product categories. And not just at the SoC level. We don’t stop at a diverse product line and broad range of print solutions; we offer the most complete software-development kit (SDK) and solutions for traditional ink, laser, thermal and dye-sub printers, as well as for mobile and 3D printing. The Marvell SDK gives OEMs faster time-to-market, and reference designs let them capitalize on other Marvell high-performance, low-power product lines such as the Marvell Avastar™ 88W8782, a highly integrated wireless local area network (WLAN) SoC that gives consumers convenient wireless print options via Apple AirPrint, Google Cloud Print, and the Mopria™ Print Service for Android devices or Wi-Fi local printing. Plus, Marvell’s strong cryptography and tamper-protection schemes offer security and protect consumables from counterfeiting.
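    Consumable anti-counterfeiting of this kind is typically built on challenge-response authentication between the printer and a secure chip in the cartridge. Here is a minimal Python sketch of that general technique using an HMAC over a fresh random challenge; the key, flow and function names are illustrative assumptions, not Marvell’s actual scheme.

    ```python
    import hashlib
    import hmac
    import secrets

    # Hypothetical shared secret provisioned into a genuine cartridge's
    # secure element at manufacturing time (placeholder value).
    CARTRIDGE_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

    def cartridge_respond(challenge: bytes) -> bytes:
        """Cartridge side: prove key possession by MACing the challenge."""
        return hmac.new(CARTRIDGE_KEY, challenge, hashlib.sha256).digest()

    def printer_verify() -> bool:
        """Printer side: issue a fresh challenge and check the response."""
        challenge = secrets.token_bytes(16)       # fresh nonce defeats replay
        response = cartridge_respond(challenge)   # computed on the cartridge
        expected = hmac.new(CARTRIDGE_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

    print("genuine cartridge:", printer_verify())
    ```

    A counterfeit cartridge without the key cannot produce a valid response, and because each challenge is random, recorded responses cannot be replayed.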

    Here’s how Marvell’s latest high-end printing SoC delivers on best-in-class performance, footprint and cost. The Marvell 88PA6120 offers unmatched integration, I/O connectivity and performance for traditional and 3D printing solutions. It combines powerful processing with a host of I/O capability and dedicated imaging hardware acceleration to deliver high performance and excellent image quality. It integrates a powerful 533MHz ARM v7-compatible processor to handle all application processing requirements, and its highly integrated design lowers the solution part count to achieve lower cost. Marvell solutions are also highly configurable, enabling a single SoC to support a wide range of ink and laser printers, MFPs, and 3D printers.

    It’s no wonder Marvell is the world’s number one supplier of printer SoCs, and the Prinics PicKit M1 shows that there is still more innovation and growth to come in the printer market. At Marvell, we will keep to our mantra of developing higher-performance, lower-power, smaller-footprint, lower-cost SoCs.

  • July 11, 2024

    Bringing Payments to the Cloud with FIPS Certified LiquidSecurity®2 HSMs

    By Bill Hagerstrand, Security Solutions BU, Marvell

    Payment-specific Hardware Security Modules (HSMs)—dedicated server appliances that perform the security functions for credit card transactions and the like—have been around for decades, and not much has changed with regard to form factors, custom APIs, or “old-school” physical user interfaces via Key Loading Devices (KLDs) and smart cards. Payment-specific HSMs represent 40% of the overall HSM TAM (Total Available Market), according to ABI Research¹.

    The first HSM was built for the financial market back in the early 1970s. However, since then HSMs have become the de facto standard for more General-Purpose (GP) use cases like database encryption and PKI. This growth has made HSM usage for GP applications 60% of the overall HSM TAM. Unlike Payment HSMs, where most deployments are 1U server form factors, GP HSMs have migrated to 1U, PCIe card, USB, and now semiconductor chip form factors, to meet much broader use cases. 

    HSM vendors that offer both Payment and GP HSMs have typically opted to split their fleets: they deploy Payment-specific HSMs that are PCI PTS HSM certified for payments, and GP HSMs that are NIST FIPS 140-2/3 certified. If you are a financial institution mandated by regulators to deploy a fleet of Payment HSMs for processing payment transactions, but you also have a database of Personally Identifiable Information (PII) that must be encrypted to meet the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), you would need to deploy a separate fleet of GP HSMs as well. That means two sets of hardware, two software stacks, and two operational teams to manage them. Accordingly, the associated CapEx/OpEx spending is significant.
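    As a concrete illustration of the GP side of that split, here is a minimal sketch of encrypting a PII field through PKCS#11, the standard interface most GP HSMs expose. It uses the python-pkcs11 library with SoftHSM standing in for real hardware; the module path, token label and PIN are assumptions for the example.

    ```python
    import pkcs11

    # SoftHSM stands in for a real GP HSM here; a production deployment
    # would load the HSM vendor's PKCS#11 module instead.
    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
    token = lib.get_token(token_label="pii-demo")

    with token.open(rw=True, user_pin="1234") as session:
        # The AES key is generated inside the HSM and never leaves it
        # in the clear, which is the point of using an HSM at all.
        key = session.generate_key(pkcs11.KeyType.AES, 256, label="pii-db-key")

        iv = session.generate_random(128)  # 128-bit IV for AES-CBC
        ciphertext = key.encrypt(b"jane.doe@example.com", mechanism_param=iv)
        print(len(ciphertext), "bytes of ciphertext")
    ```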

    For Cloud Service Providers (CSPs), these hurdles proved prohibitive and forced many to deploy dedicated bare-metal 1U servers to offer payment services in the cloud. The same restrictions that had been imposed on financial institutions were now making their way to CSPs. This deployment model also runs contrary to how CSPs succeeded in the first place: offering competitively priced services, as needed, on shared resources.

  • June 18, 2024

    Custom Compute in the AI Era

    This article is the final installment in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.

    AI demands are pushing the limits of semiconductor technology, and hyperscale operators are at the forefront of adoption—they develop and deploy leading-edge technology that increases compute capacity. These large operators seek to optimize performance while simultaneously lowering total cost of ownership (TCO). With billions of dollars on the line, many have turned to custom silicon to meet their TCO and compute performance objectives.

    But building a custom compute solution is no small matter. Doing so requires a large IP portfolio, significant R&D scale and decades of experience to create the mix of ingredients that make up custom AI silicon. Today, Marvell is partnering with hyperscale operators to deliver custom compute silicon that’s enabling their AI growth trajectories.
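    To make the TCO argument above concrete, here is a back-of-the-envelope break-even sketch. Every figure below is an invented placeholder rather than Marvell or hyperscaler data; the point is only the shape of the trade-off between one-time custom-silicon development cost and recurring per-accelerator savings.

    ```python
    # Hypothetical inputs: all numbers are made-up placeholders.
    nre_cost = 300e6                 # one-time design/tape-out cost ($)
    merchant_tco_per_unit = 25_000   # lifetime TCO per merchant accelerator ($)
    custom_tco_per_unit = 18_000     # lifetime TCO per custom accelerator ($)

    savings_per_unit = merchant_tco_per_unit - custom_tco_per_unit
    break_even_units = nre_cost / savings_per_unit
    print(f"Break-even fleet size: {break_even_units:,.0f} accelerators")

    # At hyperscale fleet sizes, recurring savings dwarf the one-time cost.
    fleet = 500_000
    net_savings = fleet * savings_per_unit - nre_cost
    print(f"Net savings on a {fleet:,}-unit fleet: ${net_savings / 1e9:.1f}B")
    ```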

    Why are hyperscale operators turning to custom compute?

    Hyperscale operators have always been focused on maximizing both performance and efficiency, but new demands from AI applications have amplified the pressure. According to Raghib Hussain, president of products and technologies at Marvell, “Every hyperscaler is focused on optimizing every aspect of their platform because the order of magnitude of impact is much, much higher than before. They are not only achieving the highest performance, but also saving billions of dollars.”

    With multiple business models in the cloud, including internal apps, infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS)—the latter of which is the fastest-growing market thanks to generative AI—hyperscale operators are constantly seeking ways to improve their total cost of ownership. Custom compute allows them to do just that. Operators are first adopting custom compute platforms for their mass-scale internal applications, such as search and their own SaaS applications. Next up for greater custom adoption will be third-party SaaS and IaaS, where the operator offers their own custom compute as an alternative to merchant options.

    Progression of custom silicon adoption in hyperscale data centers.

  • June 12, 2024

    How AI Will Change the Building Blocks of Semis

    By Michael Kanellos, Head of Influencer Relations, Marvell

    Aaron Thean points to a slide featuring the downtown skylines of New York, Singapore and San Francisco along with a prototype of a 3D processor and asks, “Which one of these things is not like the other?”

    The answer? While most gravitate to the processor, San Francisco is a better answer. With a population well under 1 million, the city’s internal transportation and communications systems don’t come close to the level of complexity, performance and synchronization required by the other three.

    With future chips, “we’re talking about trillions of transistors on multiple substrates,” said Thean, deputy president of the National University of Singapore and director of SHINE, an initiative to expand Singapore’s role in the development of chiplets, during a one-day summit sponsored by Marvell and the university.

  • June 11, 2024

    How AI Will Drive Cloud Switch Innovation

    This article is part five in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024. 

    AI has fundamentally changed the network switching landscape. AI requirements are driving foundational shifts in the industry roadmap, expanding the use cases for cloud switching semiconductors and creating opportunities to redefine the terrain.

    Here’s how AI will drive cloud switching innovation.

    A changing network requires a change in scale

    In a modern cloud data center, the compute servers are connected to one another and to the internet through a network of high-bandwidth switches. The approach is like that of the internet itself, allowing operators to build a network of any size while mixing and matching products from various vendors to create a network architecture specific to their needs.

    Such a high-bandwidth switching network is critical for AI applications, and a higher-performing network can lead to a more profitable deployment.

    However, expanding and extending the general-purpose cloud network to AI isn’t quite as simple as adding more building blocks. In the world of general-purpose computing, one or more workloads can fit on a single server CPU. In contrast, AI’s large datasets don’t fit on a single processor, whether it’s a CPU, GPU or other accelerated compute device (XPU), making it necessary to distribute the workload across multiple processors. These accelerated processors must then function as a single computing element, as sketched in the example below.
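    The requirement that many XPUs behave as one computer is usually met with collective operations running over the switching fabric. As an illustration of the traffic pattern involved, here is a small Python simulation of a ring all-reduce, the classic collective used to sum gradients across accelerators; plain lists stand in for real network transfers, and the ring topology is an assumption for the example.

    ```python
    def ring_allreduce(data):
        """Simulate ring all-reduce over n nodes. data[i][j] is node i's
        value for chunk j (scalars for clarity). Afterwards, every node
        holds the elementwise sum across all nodes."""
        n = len(data)
        # Phase 1: reduce-scatter. After n-1 steps, node i holds the
        # fully reduced sum for chunk (i + 1) % n.
        for step in range(n - 1):
            for i in range(n):
                src = (i - 1) % n              # ring neighbor
                chunk = (i - step - 1) % n     # chunk received this step
                data[i][chunk] += data[src][chunk]
        # Phase 2: all-gather. Circulate the completed chunks so every
        # node ends up with every fully reduced chunk.
        for step in range(n - 1):
            for i in range(n):
                src = (i - 1) % n
                chunk = (i - step) % n
                data[i][chunk] = data[src][chunk]
        return data

    # Three "XPUs", each holding a three-chunk gradient shard.
    print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
    # Every node now holds [12, 15, 18].
    ```

    Each node transfers roughly twice its shard size per all-reduce regardless of cluster size, which is why high, uniform switch bandwidth matters so much for AI clusters.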

    AI calls for enhanced cloud switch architecture

    AI requires accelerated infrastructure to split workloads across many processors.

Archives