Evolution of 10 Gbps Ethernet Next-Gen Embedded Design Solution


10 Gbps Ethernet (10GbE) has established itself as the standard way to connect server cards to the top-of-rack (ToR) switch in data-center racks. So what’s it doing in the architectural plans for next-generation embedded systems? It is a tale of two separate but connected worlds.

Inside the Data Center

If we can say that a technology has a homeland, then the home turf of 10GbE would be inside the cabinets that fill data centers. There, the standard has provided a bridge across a perplexing architectural gap.

Data centers live or die by multiprocessing: their ability to partition a huge task across hundreds, or thousands, of server cards and storage devices. And multiprocessing in turn succeeds or fails on communications—the ability to move data so effectively that the whole huge assembly of CPUs, DRAM arrays, solid-state drives (SSDs), and disks acts as if they were one giant shared-memory, many-core system.

Figure 1. Autonomous vehicles, for example, can generate a deluge of high-speed data.

This need puts special stress on the interconnect fabric. Obviously it must offer high bandwidth at the lowest possible latency. And since the interconnect will touch nearly every server and storage controller card in the data center, it must be inexpensive—implying commodity CMOS chips—compact, and power efficient.

And the interconnect must support a broad range of services. Blocks of data must shuffle to and from DRAM arrays, SSDs, and disks. Traffic must pass between servers and the Internet. Remote direct memory access (RDMA) must allow servers to treat each other’s memory as local. Some tasks may want to stream data through a hardware accelerator without using DRAM or cache on the server cards. As data centers take on network functions virtualization (NFV), applications may try to reproduce the data flows they enjoyed in hard-wired appliances.

Against these needs stand a range of practical constraints. Speed and short latency cost money. What is technically achievable on the lab bench may not be feasible for 50,000 server cards in a warehouse-sized data center. Speed and distance trade off—the rate you can obtain over five meters may be impossible across hundreds of meters. And on the whole, copper twisted pairs are cheaper than optical fibers. Finally, flexibility matters: no one wants to rip out and replace a data center network to accommodate a new application.

After blending these needs and constraints, data-center architects generally came to similar conclusions (Figure 2). They connected all the server, storage, and accelerator cards in a rack together through 10GbE over twisted pairs to a ToR switch. Then they connected all the ToR switches in the data center together through a hierarchy of longer-range optical Ethernet networks. The Ethernet protocol allowed use of commodity interface hardware and robust software stacks, while giving a solid foundation on which to overlay more specialized services like streaming and security.

Figure 2. A typical data center, before upgrade to faster networks, uses 10GbE for interconnect within a server rack.

Today, the in-rack links are evolving from 10 Gbps to 25 or 40 Gbps. But the 10GbE infrastructure has been deployed, cost-reduced, and field-proven, and is ready to seek new uses.

Embedded Evolution

As 10GbE was solidifying its role in server racks, an entirely different change vector was growing in the embedded world. Arguably, the change started in systems that were already dependent on video—ranging from broadcast production facilities to machine-vision systems. The driving force was the growing bit rate of the raw video signal coming off of the cameras.

Perhaps the first application domain to feel the pain was broadcast, where 1080p video demanded almost 3 Gbps. The industry responded with its own application-specific serial digital interface (SDI). But as production facilities and head-ends grew more and more to resemble data centers, the pressure to transport multiple video streams over standard network infrastructure grew. And 10GbE was a natural choice. The progression from 1080p to 4K HD only accelerated the move.
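The arithmetic behind that pressure is easy to reproduce. Here is a minimal sketch (the function name and sampling assumptions are ours, not from the article): uncompressed 1080p60 with 10-bit 4:2:2 sampling averages 20 bits per pixel, which puts the active-picture rate near the roughly 3 Gbps a 3G-SDI link carries once blanking and ancillary data are included.

```python
def raw_video_bitrate_gbps(width, height, fps, bits_per_pixel):
    """Active-picture bit rate of an uncompressed video stream, in Gbps."""
    return width * height * fps * bits_per_pixel / 1e9

# 1080p60, 10-bit 4:2:2 sampling averages 20 bits per pixel
print(raw_video_bitrate_gbps(1920, 1080, 60, 20))   # ~2.49 Gbps of active picture
# 4K (2160p) at the same frame rate and sampling quadruples the pixel count
print(raw_video_bitrate_gbps(3840, 2160, 60, 20))   # ~9.95 Gbps
```

The second figure shows why the move to 4K "only accelerated" the migration: a single raw 4K60 stream by itself nearly saturates a 10GbE link.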

But video cameras were used in machine vision as well. Some applications were fine with standard-definition monochrome video at low scan rates. But in many cases, the improved resolution, frame rate, dynamic range, and color depth of HD enabled significantly better performance for the overall system. How, then, to transport the video?

For systems only interested in edge extraction or simple object recognition, and for uses like surveillance, where the vast majority of data is discarded immediately, local vision processing at the camera is an obvious solution. With relatively simple hardware, such local processing can slash the required bandwidth between the camera and the rest of the system, bringing it within range of conventional embedded or industrial buses. And in many other cases local video compression at the camera can substantially reduce bandwidth requirements without harming the application.

Not every situation is so cooperative. Broadcast production studios are loath to throw away any bits—if they use compression at all, they want it to be lossless. Motion-control algorithms may need edge-location data at or even below pixel-level resolution, requiring uncompressed data. And convolutional neural networks, the current darlings of leading-edge design, may rely on pixel-level data in ways completely opaque to their designers. So you may have no choice but to transfer all of the camera data.

Even in situations where compression is possible, a module containing multiple imaging devices—say, several cameras and a lidar in an autonomous vehicle—can easily consume more than 1 Gbps just sending preprocessed image data.

And crossing that 1 Gbps boundary is a problem if you had planned to connect your high-bandwidth device into the system with Ethernet. Once you exceed the aggregate capacity of a 1 Gbps Ethernet link, the next step is not 2 Gbps, it is 10. Hence the growing importance of 10GbE. But even with its economies of scale and ability to use twisted-pair or backplane connections, the step up to 10GbE means more expensive silicon and controller boards. It’s not a trivial migration.

Get a Backbone

In many systems, 10GbE can handle not only the fastest I/O traffic in the design, but all the fast system I/O (Figure 3). The resulting gains in simplicity, reliability, cost, and weight can be a big enough advantage to justify the cost and power of the interfaces. For example, linking all the major modules in an autonomous vehicle—cameras, lidar, location, chassis/drivetrain, safety, communications, and electronic control unit—through a single 10GbE network could eliminate many meters and several kilograms of wiring. Compared to the growing tangle of dedicated high-speed connections today—often requiring hand-installation of wiring harnesses—the unified approach can be a big win.

Figure 3. In an embedded system, 10GbE can provide a single backbone interconnect for a variety of high-bandwidth peripherals.

But unifying system interconnect around a local Ethernet network also presents issues. One, ironically, is the very issue that motivated interest in 10GbE in the first place: bandwidth. A machine-vision algorithm consuming the raw output of two HD video cameras would already be using over half the available bandwidth of a 10GbE backbone. So in systems with multiple multi-Gbps data flows, there are some hard choices to make. You can employ multiple 10GbE connections as point-to-point links. Or, if the algorithms can tolerate the latency, you can use local compression or data analytics at the source to reduce bandwidth needs—partitioning vision processing between camera and control unit, for example.
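That budgeting exercise can be sketched directly. The stream names and per-stream figures below are hypothetical, chosen only to illustrate how quickly multi-Gbps flows eat into a shared 10GbE backbone:

```python
# Hypothetical per-stream bandwidth demands, in Gbps (illustrative only)
streams = {
    "camera_1 (raw 1080p60)": 2.5,
    "camera_2 (raw 1080p60)": 2.5,
    "lidar": 0.1,
    "control/telemetry": 0.05,
}

LINK_GBPS = 10.0
USABLE_FRACTION = 0.9   # leave headroom for framing overhead and bursts

total = sum(streams.values())
print(f"aggregate demand: {total:.2f} Gbps")
print(f"fits on one 10GbE backbone: {total <= LINK_GBPS * USABLE_FRACTION}")
```

Two raw HD cameras alone claim 5 Gbps of the budget here; adding a third multi-Gbps flow would force one of the hard choices above, such as a dedicated point-to-point link or compression at the source.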

Another issue is cost. A small, low-bandwidth sensor may not be a sensible candidate for a $250 10GbE interface, or even a $50 chip. You may want to consolidate a number of such devices on one concentrator, or simply provide a separate, low-bandwidth industrial bus for them.

Timing is Everything

In the abstract, we have offered a promising scenario. Data centers have built up a huge infrastructure of chips, media, and boards behind 10GbE. Now the giant computing facilities are moving on to 25 or 40GbE, and all that infrastructure will go looking for new markets. At the same time, data rates in some embedded systems have sped past the bounds of frequently used 1GbE links, hinting at just the sort of opportunity the 10GbE vendors are seeking.

But reality doesn’t dwell in abstracts. In particular, the real embedded world cares about latency and other quality-of-service parameters. If a data-center ToR switch frequently shows unexpected latencies, the worst result is likely to be slightly longer execution times—and hence higher costs—for workloads that were never time-critical. In the embedded world, if you miss a deadline you break something—usually something big and expensive.

This is a long-understood issue with networking technology in embedded systems. And it has an established solution: the cluster of IEEE 802.1 standards collectively known as time-sensitive networking (TSN). TSN is a set of additions and changes to the 802.1 standards at Layer 2 and above that allow, in effect, Ethernet to offer guaranteed levels of service in addition to its customary best-effort service.

So far, three elements of TSN have been published: 802.1Qbv Enhancements for Scheduled Traffic, 802.1Qbu Frame Preemption, and 802.1Qca Path Control and Reservation. Each of these defines a service critical to using Ethernet in a real-time system.

One service is the ability to pre-define a path through the network for a virtual connection, rather than entrusting each packet to best-effort forwarding at each hop. By itself this facility may not be that useful in embedded systems, where the entire network is often a single switch.

But other parts of this spec are more relevant: the ability to reserve bandwidth or stream treatment for a connection and to provision redundancy to ensure delivery. Another service is the ability to pre-schedule transmission of frames for a connection on a network also carrying prioritized and best-effort traffic. And yet another element defines a mechanism for preempting a frame in order to transmit a scheduled or higher-priority frame. Together these capabilities allow a TSN network to guarantee bandwidth to a virtual connection, to create a virtual streaming connection, or to guarantee maximum latency for frames over a virtual connection.
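A back-of-envelope calculation suggests why frame preemption matters at 10 Gbps. This sketch assumes standard Ethernet overhead (8-byte preamble plus 12-byte inter-frame gap) and approximates the smallest non-preemptable fragment as 64 bytes; the exact figure depends on the 802.3br fragmentation rules:

```python
def tx_time_us(frame_bytes, link_gbps=10.0, overhead_bytes=20):
    """Wire time of one frame in microseconds, including preamble (8 B)
    and inter-frame gap (12 B). At 10 Gbps, 1 bit takes 0.1 ns."""
    return (frame_bytes + overhead_bytes) * 8 / (link_gbps * 1e3)

# Without preemption, an express frame can wait for a full max-size frame
print(f"worst-case blocking, no preemption: {tx_time_us(1518):.4f} us")
# With frame preemption, it waits at most for a minimum-size fragment
# (assumed 64 bytes here for illustration)
print(f"worst-case blocking, preemption:    {tx_time_us(64):.4f} us")
```

Under these assumptions, a time-critical frame can be blocked for roughly 1.2 µs behind a maximum-size best-effort frame; preemption cuts the worst-case blocking by more than an order of magnitude, which is what makes tight latency guarantees on a shared backbone plausible.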

Since TSN is essentially an overlay on 802.1 networks, TSN over 10GbE is feasible. At least one vendor has already announced a partially TSN-capable 10GbE media access controller (MAC) intellectual property (IP) core that works with standard physical coding sublayer IP and transceivers. So it is possible to implement a 10GbE TSN backbone now with modest-priced FPGAs or an ASIC.

Using 10GbE for system interconnect in an embedded design is no panacea. And employing TSN extensions to meet real-time requirements may preclude using exactly the same Layer-2 solutions that data centers use. But for embedded designs such as autonomous vehicles or vision-based machine controllers that must support high internal data rates, 10GbE as point-to-point links or as backbone interconnect may be an important alternative.

 

By Altera Training

These African hospitals are adopting artificial intelligence in their healthcare operations


Sophia Genetics, global leader in Data-Driven Medicine, unveiled today, at the 2017 Annual Meeting of the American College of Medical Genetics and Genomics (ACMG) in Phoenix, the list of African hospitals that have started integrating SOPHiA, the company’s artificial intelligence, into their clinical workflows to advance patient care across the continent.

Medical institutions at the forefront of innovation already using SOPHiA in Africa include:

  • PharmaProcess in Casablanca, Morocco;
  • ImmCell in Rabat, Morocco;
  • The Al Azhar Oncology Center in Rabat, Morocco;
  • The Riad Biology Center in Rabat, Morocco;
  • The Oudayas Medical Analysis Laboratory, Morocco;
  • The Center for Proteomic & Genomic Research (CPGR) in Cape Town, South Africa;
  • The Bonassama District Hospital in Douala, Cameroon.

African hospitals are adopting SOPHiA so that – no matter their experience in genomic testing – they can get up to speed and analyze genomic data to identify disease-causing mutations in patients’ genomic profiles, and decide on the most effective care. As new users of SOPHiA, they become part of a larger network of 260 hospitals in 46 countries that share clinical insights across patient cases and patient populations, feeding a knowledge base of biomedical findings that accelerates diagnostics and care.

Among other diseases, SOPHiA will be a key partner for African hospitals in oncology. Breast cancer, for instance, has been described as a “serial killer” on the continent as lack of relevant diagnostics and personalized care means that 60% of women with breast cancer in Africa die versus 20% in the US and EU.

According to a 2012 global report from the International Prevention Research Institute, earlier diagnosis of breast cancer could increase life expectancy by 30%. On the continent, the number of new cases of cancer every year is expected to jump to 1.6 million by 2030. And because oncology expertise may be based in different places across the globe, SOPHiA ensures that the knowledge of a specialist in Paris will, for instance, be accessible to save patients in Nairobi.

Visa’s Everywhere Initiative is back for entrepreneurs to win $50,000


Participate for a chance to win up to $50,000 and the opportunity to work directly with Visa.

In its third annual Everywhere Initiative, Visa will once again present three real-life business challenges for startups to help solve. If you’re a startup, enter your submission for a chance to win up to $50,000 and the opportunity to work directly with Visa and collaborate on the innovative payment solutions of the future.

  • Challenge 1: How can your company use connected devices to facilitate simpler, more seamless and powerful commerce experiences for consumers?
  • Challenge 2: How can Visa APIs augment your company’s product or service offerings and ultimately drive more meaningful commerce or finance related experiences for customers?
  • Challenge 3: How can you harness Visa capabilities and other third party services to create transformative commerce experiences at sporting events, transportation hubs or other venues where people congregate?

Submit your proposal here by April 6. Submissions will be reviewed on a rolling basis, so don’t wait, says Visa.

Securing energy grid gets boost as Kaspersky unveils CyberSecurity for Energy


Malicious attacks on industrial systems – including industrial control systems (ICS) and supervisory control and data acquisition systems (SCADA) – have increased significantly in recent years. As the Stuxnet and BlackEnergy attacks have shown, one infected USB drive or single spear-phishing email is all it takes for attackers to bridge the air gap and penetrate an isolated network.

Traditional security is no longer enough to protect industrial environments from cyber threats. As threats targeting critical infrastructure increase, choosing the right advisor and technology partner to secure your systems has never been more important.

https://youtu.be/UKnrEoqiI50

Kaspersky Lab has announced the global availability of Kaspersky Industrial CyberSecurity for Energy, a vertical advanced package for energy enterprises, based on Kaspersky Lab’s suite for protection of industrial infrastructure.

Modern electrical power grids are complex networks, with integrated automation and control functions. However, because they communicate through open protocols, they do not have sufficient built-in cybersecurity functions to combat the increasingly sophisticated range of security threats they face.

The Report

Kaspersky Lab’s recent report on industrial cybersecurity found that 92% of externally available industrial control system (ICS) devices use open and insecure Internet connection protocols. Since 2010 the number of ICS-component vulnerabilities has also increased by a factor of 10, making these devices an easy and lucrative target for cybercriminals. The challenge for energy companies is clear, with Ernst & Young’s most recent Global Information Security Survey revealing that 42% of power and utilities companies say it’s unlikely they would be able to detect a sophisticated attack.

Kaspersky Industrial CyberSecurity (KICS) for Energy is dedicated to helping energy companies secure every layer of their industrial infrastructure, without impacting the operational continuity and consistency of technological processes. Kaspersky Lab’s solution protects SCADA-level control centers and substation automation systems at every level: the upper automation level (servers, HMI, gateways, engineering workstations); secondary automation equipment (protection relays, bay controllers, merging units, RTUs, and other substation-bus and process-bus IEDs); and the overall network infrastructure.

The Key Solution Benefit

The solution provides a variety of advanced technologies to protect industrial nodes (including servers, HMI, gateways, and engineering workstations) and network infrastructure. For the network infrastructure, it offers network monitoring and integrity checking with the capability of deep application-protocol inspection (including IEC 60870-5-104, IEC 61850, and other standards and protocols for electric power infrastructures).

“Electrical power equipment automation, control and protection are no longer handled by closed systems and, as things stand, detecting a potential threat is extremely difficult, both technically and organisationally,” said Andrey Suvorov, Head of Critical Infrastructure Protection, Kaspersky Lab. “That’s why energy enterprises need to bolster their defences to combat increasingly prevalent cyberattacks and avoid the nightmare scenario of complete loss of service and the impact that would have on citizens and society in general.”

Alexander Golubev, Chief IT Security Officer at Electrical Distribution Network Northwest Federal District, Rosseti, commented: “Being one of the major operators of electric grids in Russia, it is very important for our company to ensure uninterrupted operations, including those caused by cyberattacks on our IT infrastructure. A large number of our subsidiaries has been using Kaspersky Lab’s solutions for a long time, as they allow them to effectively detect and block all types of cybersecurity threats in a timely manner. As a result of this positive experience, we are evaluating the option to extend cooperation to the field of industrial security. The test deployment of Kaspersky Industrial CyberSecurity for Energy on one of our substations has become the first important step in this direction”.

Training

You can get solid, top-grade training in cybersecurity at First Atlantic Cybersecurity Institute.

V-Exchange sues Etisalat Nigeria for $5 million copyright infringement


It is not looking good for Etisalat these days.

According to The Guardian, mobile fintech firm, V-Exchange Limited, is suing Etisalat Nigeria for N2 billion for alleged copyright infringement. V-Exchange specialises in providing instant finance solutions to individuals and corporate entities, and the story is they offered to partner with Etisalat to launch an instant loan service. According to them, Etisalat one-upped them by launching a similar service – KwikCash – without permission or due credit. Hmm.

The V-Exchange co-founder said he was shocked when he heard that Etisalat had gone ahead to launch the instant loan service without his approval. Also speaking at the media briefing, the Chief Executive Officer of V-Exchange, Mrs Kemi Ayinde, noted that well-wishers had called to congratulate her on the successful launch of the product, not knowing that her firm was not involved in the launch.

Etisalat denies the allegations.