Micropower switches extend batteries for IoT

Two low power Hall effect switches, the MLX92216 and MLX92217, consume just 1.0 microwatt and have narrow tolerances for a predictable power budget, helping to extend battery runtime, said Melexis.
These magnetic devices detect open or closed positions and can replace traditional reed switches in IoT, industrial and white goods applications.
The growing demand for smart devices that are battery-powered yet long lasting is a challenge for IoT; the need for low power components which are both reliable and accurate is paramount, said Melexis.
The MLX92216 and MLX92217 microwatt switches are three-wire monolithic magnetic switches which are claimed to deliver industry-leading accuracy, consistency and reliability throughout the lifetime of the application. Unipolar, omnipolar and latch variants are available, making the devices suitable for applications such as lid open/close detection, wearables, industrial machines, appliances, tablet cases, valves, and energy and flow meters, as well as serving as a replacement for traditional proximity and reed switches.
The switches have integrated logic for automatic sleep/awake sequencing, enabling 0.9 microA average current consumption with no action needed from the user (depending on the model variant). The MLX92217 adds an ‘enable’ pin that lets users activate or deactivate the automatic sleep logic, driving standby current consumption down to 200nA while remaining ready to be woken by the system. Combined with min/max tolerance variation of less than 50 per cent and peak current consumption below 1.5 microA, the devices provide a stable, predictable power budget over the application’s lifetime.
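To put those current figures in perspective, a rough runtime estimate can be sketched as below. The CR2032 coin cell and its nominal 225mAh capacity are illustrative assumptions, not from the announcement, and the calculation ignores self-discharge and any other loads, which dominate at currents this low.

```python
# Rough battery-runtime estimate for a micropower Hall switch.
# Assumption (not from the article): a CR2032 coin cell with a
# nominal 225 mAh capacity, no self-discharge, no other loads.

def runtime_years(capacity_mah: float, avg_current_ua: float) -> float:
    """Ideal runtime in years for a given average current draw."""
    hours = (capacity_mah * 1000.0) / avg_current_ua  # mAh -> uAh, / uA
    return hours / (24 * 365)

# Current figures quoted by Melexis for the MLX92216/MLX92217:
print(f"auto sleep/wake (0.9 uA): {runtime_years(225, 0.9):.1f} years")
print(f"standby via enable pin (200 nA): {runtime_years(225, 0.2):.1f} years")
```

Both results come out in the decades, which is why the cell's own self-discharge, rather than the switch, ends up setting the practical battery life.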

> Read More

Single mode LTE Cat 1bis IoT module is smallest yet, says u-blox

The u-blox LEXI-R10 is a compact module for applications requiring medium data rates. The small LTE Cat 1bis IoT module measures 16 x 16mm.

> Read More

Contactless connections simplify and streamline design, says Molex

The MX60 series of contactless connectivity solutions have been developed to ease device pairing, streamline design engineering and boost product reliability, said Molex.

> Read More

IP and SDK accelerate on-device and edge AI design, says Cadence

AI IP and software tools to address the escalating demand for on-device and edge AI processing have been unveiled by Cadence. The scalable Cadence Neo neural processing units (NPUs) deliver a range of AI performance in a low-energy footprint, said the company, and this is claimed to bring new levels of performance and efficiency to AI SoCs.
The Neo NPUs deliver up to 80TOPS performance in a single core to support both classic and new generative AI models. They can also offload AI/ML execution from any host processor, including application processors, general-purpose microcontrollers and DSPs, via a simple and scalable AMBA AXI interconnect.
Cadence has also introduced the NeuroWeave software development kit (SDK) which it said provides developers with a “one-tool” AI software solution across Cadence AI and Tensilica IP products for no-code AI development.
“While most of the recent attention on AI has been cloud-focused, there are an incredible range of new possibilities that both classic and generative AI can enable on the edge and within devices,” pointed out Bob O’Donnell, president and chief analyst at TECHnalysis Research. Realising these intuitive, intelligent devices will require a flexible, scalable combination of hardware and software solutions with a range of power requirements and compute performance, “all while leveraging familiar tools”, he believed. “New chip architectures that are optimised to accelerate ML models and software tools with seamless links to popular AI development frameworks are going to be incredibly important parts of this process,” he added.
The Neo NPUs are suitable for power-sensitive devices as well as high-performance systems with a configurable architecture. SoC architects will be able to integrate an optimal AI inferencing solution in a range of products, including intelligent sensors, IoT and mobile devices, cameras, hearables/wearables, PCs, AR/VR headsets and advanced driver-assistance systems (ADAS). New hardware and performance enhancements and key features/capabilities include:
The single core NPUs are scalable from 8GOPS to 80TOPS, with further extension to hundreds of TOPS with multi-core devices, said Cadence. They support 256 to 32K MACs per cycle, allowing SoC architects to optimise embedded AI to meet power, performance and area (PPA) tradeoffs.
Offloading of inferencing tasks from any host processor (e.g., DSPs, general-purpose microcontrollers or application processors) significantly improves system performance and power, said Cadence.
Support for Int4, Int8, Int16 and FP16 data types across a wide set of operations that form the basis of CNN, RNN and transformer-based networks allows flexibility in neural network performance and accuracy tradeoffs. The NPUs offer up to 20 times higher performance than the first-generation Cadence AI IP, with two to five times the inferences per second per area (IPS/mm2) and five to 10 times the inferences per second per watt (IPS/W).
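The quoted peak of 80TOPS in a single core is consistent with the largest 32K-MAC configuration, as a back-of-envelope check shows. Counting each MAC as two operations (a multiply and an accumulate) is the usual convention; the clock frequency below is an illustrative assumption, not a Cadence figure.

```python
# Back-of-envelope check of the Neo NPU peak throughput figure.
# Each MAC counts as 2 ops (multiply + accumulate); the clock
# frequency is an illustrative assumption, not a Cadence spec.

def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS (1 TOPS = 1e12 ops per second)."""
    return macs_per_cycle * 2 * clock_ghz * 1e9 / 1e12

# Largest quoted config: 32K MACs per cycle at an assumed ~1.22 GHz
print(f"{peak_tops(32 * 1024, 1.22):.1f} TOPS")  # ~80 TOPS
```

Smaller configurations scale the same arithmetic down linearly with MAC count and clock, which is how the single-core range spans from the GOPS regime up to 80TOPS.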

> Read More

About Weartech

This news story is brought to you by weartechdesign.com, the specialist site dedicated to delivering information about what’s new in the wearable electronics industry, with daily news updates, new products and industry news. To stay up to date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Register here: weartechdesign.com