By Justin Sykes, Marketing Manager, Marvell
Highest performance, lowest power, smallest footprint, most efficient solutions…
This is the constant mantra at Marvell when designing our state-of-the-art systems-on-chip (SoCs), regardless of the product category. In this case, highest performance, low power, small footprint and efficiency have resulted in the Marvell 88PA6120, an ink, thermal and color laser multifunction printer SoC that Prinics has selected for its newest mobile thermal photo printer, dubbed the “best picture kit in my hand.”
The new Prinics PicKit M1, a handheld, battery-powered photo printer, is a prime example of a fun new device. The PicKit M1 is extremely easy to use thanks to its integrated NFC and Wi-Fi Direct. Touch any NFC-enabled phone to the printer and it launches the intuitive PicKit app and establishes a Wi-Fi Direct connection. No NFC? No problem! Simply launch the app manually. From the app, you can select any of the pictures on the phone and print. A minute later you have a high-quality 2.1” x 3.4” photo to share with friends. There is nothing like the joy of holding a real photo in your hand!
Whether at the high, middle or low end of a product class, Marvell’s family of printing SoCs seeks to drive down cost and improve performance for existing and new product categories. And not just at the SoC level. We don’t stop at a diverse product line and broad range of print solutions; we offer the most complete software development kit (SDK) and solutions for traditional ink/laser/thermal and dye-sub printers, as well as for mobile and 3D printing. For example, the Marvell SDK enables faster time-to-market for OEMs. Reference designs allow OEMs to capitalize on other Marvell high-performance, low-power product lines such as the Marvell Avastar™ 88W8782, a highly integrated wireless local area network (WLAN) SoC that gives consumers convenient wireless print options using Apple AirPrint, Google Cloud Print and Mopria™ Print Service for Android devices, or Wi-Fi local printing. Plus, Marvell’s strong cryptography and tamper-protection schemes offer security and protect consumables from counterfeiting.
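For context on what Wi-Fi local printing looks like from the client side: AirPrint- and Mopria-class printers typically announce themselves on the local network over mDNS/DNS-SD as IPP services. The snippet below is a minimal, generic discovery sketch using the third-party Python zeroconf package; it is not part of the Marvell SDK, and the service type and TXT keys shown are simply the common IPP/AirPrint conventions.

```python
# Minimal sketch: discover IPP/AirPrint-capable printers on the local network
# via mDNS/DNS-SD. Generic illustration only -- not part of the Marvell SDK.
# Requires the third-party "zeroconf" package (pip install zeroconf).
import time
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

IPP_SERVICE = "_ipp._tcp.local."  # service type commonly advertised by AirPrint/IPP printers


class PrinterListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info is None:
            return
        # TXT records usually carry the queue name ("rp") and supported formats ("pdl")
        props = {k.decode(): v.decode() for k, v in info.properties.items() if v}
        addresses = info.parsed_addresses()
        print(f"Found printer: {name}")
        print(f"  address : {addresses[0] if addresses else '?'}:{info.port}")
        print(f"  queue   : {props.get('rp', 'unknown')}")
        print(f"  formats : {props.get('pdl', 'unknown')}")

    def update_service(self, zc, type_, name) -> None:  # required by the listener interface
        pass

    def remove_service(self, zc, type_, name) -> None:
        print(f"Printer left the network: {name}")


if __name__ == "__main__":
    zc = Zeroconf()
    browser = ServiceBrowser(zc, IPP_SERVICE, PrinterListener())
    try:
        time.sleep(10)  # browse for a few seconds
    finally:
        zc.close()
```

A print job would then typically be submitted to the discovered address over IPP; the device-side rendering and wireless connectivity are where printer SoCs and WLAN chips such as the 88W8782 come into play.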
Here’s how Marvell’s latest high-end printing SoC delivers on best-in-class performance, footprint and cost. The Marvell 88PA6120 offers unmatched integration, I/O connectivity and performance for traditional and 3D printing solutions. It combines powerful processing with a host of I/O capabilities and dedicated imaging hardware acceleration to deliver high performance and excellent image quality. It integrates a powerful 533 MHz ARMv7-compatible processor to handle all application processing requirements, and its highly integrated design lowers the solution part count to reduce cost. Marvell solutions are also highly configurable, enabling a single SoC to support a wide range of ink and laser printers, MFPs and 3D printers.
It’s no wonder Marvell is the world’s number one supplier of printer SoCs, and the Prinics PicKit shows that there is still more innovation and growth to come in the printer market. At Marvell, we will stay true to our mantra and keep developing higher-performance, lower-power, smaller-footprint, lower-cost SoCs.
Celebrating our interns every day!
At Marvell, we celebrate every day the contributions of the more than 400 talented interns who have joined us this year! During National Intern Week last week, and every day, we recognize the meaningful impact they make as they learn, connect and contribute across 13 countries and more than 30 locations.
Career insights from Marvell leaders
Over the past several weeks, our interns have had the opportunity to engage directly with Marvell executives in sessions designed to share career experiences, offer advice, and answer questions. In a recent session, Marvell Chairman and CEO Matt Murphy shared insights from his career journey, highlighting the significant role internships played in shaping his professional path. He emphasized the value of Marvell’s internship program and the company’s commitment to nurturing talent by not only offering internships but also providing many interns with full-time roles upon graduation.
Payment-specific Hardware Security Modules (HSMs), dedicated server appliances that perform the security functions for credit card transactions and the like, have been around for decades, and little has changed with regard to their form factor, custom APIs, and “old-school” physical user interfaces based on Key Loading Devices (KLDs) and smart cards. Payment-specific HSMs represent 40% of the overall HSM TAM (Total Available Market), according to ABI Research¹.
The first HSM was built for the financial market back in the early 1970s. Since then, however, HSMs have also become the de facto standard for more general-purpose (GP) use cases like database encryption and PKI. This growth has made GP applications 60% of the overall HSM TAM. Unlike Payment HSMs, where most deployments are 1U server form factors, GP HSMs have migrated to 1U, PCIe card, USB, and now semiconductor chip form factors to meet much broader use cases.
HSM vendors that offer both Payment and GP HSMs have typically opted to split their fleets: they deploy payment-specific HSMs that are PCI PTS HSM-certified for payments, and GP HSMs that are NIST FIPS 140-2/3-certified. If you are a financial institution mandated by government regulation to deploy a fleet of Payment HSMs for processing payment transactions, but you also have a database with Personally Identifiable Information (PII) that needs to be encrypted to comply with the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), you also need to deploy a separate fleet of GP HSMs. That means two separate sets of hardware, two separate software stacks, and two operational teams to manage them. Accordingly, the associated CapEx/OpEx spending is significant.
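To make the GP side of that split concrete: applications usually drive a GP HSM through a standard interface such as PKCS#11, so PII fields can be encrypted with keys that never leave the module. The sketch below illustrates that pattern with the third-party python-pkcs11 package; the module path, token label, PIN and key label are hypothetical placeholders, and the example is not specific to any vendor’s HSM or to Marvell products.

```python
# Minimal sketch: encrypting a PII database field through a GP HSM via PKCS#11.
# Generic illustration -- the module path, token label, PIN and key label below
# are hypothetical placeholders, not any specific vendor's values.
# Requires the third-party "python-pkcs11" package (pip install python-pkcs11).
import pkcs11

LIB_PATH = "/usr/lib/pkcs11/vendor-hsm.so"   # hypothetical vendor PKCS#11 module
TOKEN_LABEL = "pii-encryption"               # hypothetical token label
KEY_LABEL = "customer-pii-aes-key"           # hypothetical key label

lib = pkcs11.lib(LIB_PATH)
token = lib.get_token(token_label=TOKEN_LABEL)

with token.open(user_pin="example-pin", rw=True) as session:
    # Look up an AES key that was generated on, and never leaves, the HSM.
    key = session.get_key(label=KEY_LABEL,
                          object_class=pkcs11.ObjectClass.SECRET_KEY)

    # Encrypt one PII field; the HSM performs the crypto, the application only
    # stores the IV and ciphertext alongside the database record.
    iv = session.generate_random(128)  # 128-bit IV (library default is AES-CBC with padding)
    ciphertext = key.encrypt(b"jane.doe@example.com", mechanism_param=iv)

    # Later, decrypt it for an authorized read.
    plaintext = key.decrypt(ciphertext, mechanism_param=iv)
    assert plaintext == b"jane.doe@example.com"
```

The payment side of the fleet, by contrast, is driven through payment-specific APIs and key-loading ceremonies, which is exactly the operational split described above.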
For Cloud Service Providers (CSPs), the hurdle was insurmountable and forced many to deploy dedicated bare-metal 1U servers to offer payment services in the cloud; the same restrictions imposed on financial institutions were now making their way to CSPs. This deployment model also runs contrary to the reason CSPs have succeeded in the past: offering competitively priced services, on demand, on shared resources.
This article is the final installment in a series covering talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.
AI demands are pushing the limits of semiconductor technology, and hyperscale operators are at the forefront of adoption—they develop and deploy leading-edge technology that increases compute capacity. These large operators seek to optimize performance while simultaneously lowering total cost of ownership (TCO). With billions of dollars on the line, many have turned to custom silicon to meet their TCO and compute performance objectives.
But building a custom compute solution is no small matter. Doing so requires a large IP portfolio, significant R&D scale and decades of experience to create the mix of ingredients that make up custom AI silicon. Today, Marvell is partnering with hyperscale operators to deliver custom compute silicon that’s enabling their AI growth trajectories.
Why are hyperscale operators turning to custom compute?
Hyperscale operators have always been focused on maximizing both performance and efficiency, but new demands from AI applications have amplified the pressure. According to Raghib Hussain, president of products and technologies at Marvell, “Every hyperscaler is focused on optimizing every aspect of their platform because the order of magnitude of impact is much, much higher than before. They are not only achieving the highest performance, but also saving billions of dollars.”
With multiple business models in the cloud, including internal apps, infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS)—the latter of which is the fastest-growing market thanks to generative AI—hyperscale operators are constantly seeking ways to improve their total cost of ownership. Custom compute allows them to do just that. Operators are first adopting custom compute platforms for their mass-scale internal applications, such as search and their own SaaS applications. Next up for greater custom adoption will be third-party SaaS and IaaS, where the operator offers their own custom compute as an alternative to merchant options.
Progression of custom silicon adoption in hyperscale data centers.
By Michael Kanellos, Head of Influencer Relations, Marvell
Aaron Thean points to a slide featuring the downtown skylines of New York, Singapore and San Francisco along with a prototype of a 3D processor and asks, “Which one of these things is not like the other?”
The answer? While most gravitate to the processor, San Francisco is a better answer. With a population well under 1 million, the city’s internal transportation and communications systems don’t come close to the level of complexity, performance and synchronization required by the other three.
With future chips, “we’re talking about trillions of transistors on multiple substrates,” said Thean, the deputy president of the National University of Singapore and the director of SHINE, an initiative to expand Singapore’s role in the development of chiplets, during a one-day summit sponsored by Marvell and the university.