The Advent of Artificial Intelligence in Agriculture

A group of maize farmers stands huddled around an agronomist and his computer on the side of an irrigation pivot in central South Africa. The agronomist has just flown over the pivot with a hybrid UAV that takes off and lands using propellers yet achieves the range and speed needed to scan vast tracts of land using its fixed wings.

The UAV is fitted with a four-band spectral precision sensor that processes its data onboard immediately after the flight, allowing farmers and field staff to address any crop anomalies the sensor has recorded almost at once, making the data collection truly real-time.
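The article doesn’t specify which bands the sensor captures, but four-band agricultural sensors typically include red and near-infrared light, from which a vegetation index such as NDVI can be computed. A minimal sketch, assuming per-pixel reflectance arrays for the red and NIR bands:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in near-infrared and absorbs red,
    so values near +1 indicate vigorous growth; bare soil sits near 0.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero on dark pixels
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Toy 2x2 reflectance values: top row healthy canopy, bottom row bare soil
nir = np.array([[0.8, 0.8], [0.3, 0.3]])
red = np.array([[0.1, 0.1], [0.25, 0.25]])
print(ndvi(nir, red))
```

Real workflows add radiometric calibration and per-band alignment before computing any index.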

In this instance, the farmers and agronomist are looking to specialized software to give them an accurate plant population count. It’s been 10 days since the maize emerged and the farmer wants to determine if there are any parts of the field that require replanting due to a lack of emergence or wind damage, which can be severe in the early stages of the summer rainy season.


At this growth stage of the plant’s development, the farmer has another 10 days to conduct any replanting before the majority of his fertilizer and chemical applications need to occur. Once these have been applied, it becomes economically unviable to take corrective action, making any further collected data historical and useful only to inform future practices for the season to come.

The software completes its processing in under 15 minutes, producing a plant-population count map. It’s difficult to grasp just how impressive this is without understanding that just over a year ago the same data set would have taken three to five days to process, illustrating the advances made in precision agriculture and remote sensing in recent years. Because the software was developed in the United States on the same variety of crops in seemingly similar conditions, the agronomist feels confident it will produce a near-accurate result.
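The article doesn’t describe the counting algorithm itself. One common approach, sketched here as an assumption rather than a description of the actual software, is to threshold a vegetation index into a binary plant mask and count connected pixel regions:

```python
import numpy as np
from scipy import ndimage

def count_plants(veg_index: np.ndarray, threshold: float = 0.4) -> int:
    """Count distinct plants as connected regions of above-threshold pixels.

    veg_index: per-pixel vegetation index (e.g. NDVI) for one field section.
    The 0.4 threshold is an arbitrary illustrative value.
    """
    mask = veg_index > threshold
    _, num_regions = ndimage.label(mask)  # 4-connected blobs by default
    return num_regions

# Toy index map with three separated "plants"
field = np.zeros((6, 6))
field[0, 0] = 0.9      # plant 1
field[2, 3:5] = 0.8    # plant 2 (two adjacent pixels, one blob)
field[5, 5] = 0.7      # plant 3
print(count_plants(field))  # → 3
```

Production pipelines also filter regions by size to reject noise and weeds, and align counts to planted rows.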

As the map appears on the screen, the agronomist’s face begins to drop. Having walked through the planted rows before the flight to gain a physical understanding of the situation on the ground, he knows the instant he sees the data on his screen that the plant count is not correct, and so do the farmers, even with their limited understanding of how to read remote sensing maps.

The Potential for Artificial Intelligence in Agriculture

Hypothetically, it is possible for machines to learn to solve any problem on earth relating to the physical interaction of things within a defined, contained environment by using artificial intelligence and machine learning.

The principle of artificial intelligence is that a machine can perceive its environment and, through a certain capacity for flexible rationality, take action to address a specified goal related to that environment. Machine learning is when that same machine, following a specified set of protocols, improves its ability to address problems and goals related to the environment as the statistical volume of the data it receives increases. Put more plainly: as the system receives more sets of similar data that can be categorized under its protocols, its ability to rationalize improves, allowing it to better “predict” a range of outcomes.
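The claim that predictions improve as similar data accumulates can be illustrated with a toy experiment: recovering a known relationship from noisy observations gets more accurate as the sample grows. The relationship, noise level, and sample sizes below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def fit_and_score(n_samples: int) -> float:
    """Fit y = 2x + 1 from noisy observations and return the combined
    absolute error of the recovered slope and intercept."""
    x = rng.uniform(0, 10, n_samples)
    y = 2 * x + 1 + rng.normal(0, 2, n_samples)   # noisy "field" readings
    slope, intercept = np.polyfit(x, y, deg=1)
    return abs(slope - 2) + abs(intercept - 1)

def avg_error(n_samples: int, trials: int = 20) -> float:
    """Average the fitting error over several trials to smooth out luck."""
    return float(np.mean([fit_and_score(n_samples) for _ in range(trials)]))

print(f"avg error with 10 samples:     {avg_error(10):.3f}")
print(f"avg error with 10,000 samples: {avg_error(10_000):.3f}")
```

The second figure comes out far smaller: more data, better "prediction" of the underlying relationship.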

The rise of digital agriculture and its related technologies has opened a wealth of new data opportunities. Remote sensors, satellites, and UAVs can gather information 24 hours per day over an entire field. These can monitor plant health, soil condition, temperature, humidity, etc. The amount of data these sensors can generate is overwhelming, and the significance of the numbers is hidden in the avalanche of that data.

The idea is to allow farmers to gain a better understanding of the situation on the ground through advanced technology (such as remote sensing) that can tell them more about their situation than they can see with the naked eye, not just more accurately, but also more quickly than they could by walking or driving through the fields.

Remote sensors enable algorithms to interpret a field’s environment as statistical data that can be understood and useful to farmers for decision-making. Algorithms process the data, adapting and learning based on the data received. The more inputs and statistical information collected, the better the algorithm will be at predicting a range of outcomes. And the aim is that farmers can use this artificial intelligence to achieve their goal of a better harvest through making better decisions in the field.

In 2011, IBM, through its R&D Headquarters in Haifa, Israel, launched an agricultural cloud-computing project. The project, in collaboration with a number of specialized IT and agricultural partners, had one goal in mind – to take a variety of academic and physical data sources from an agricultural environment and turn these into automatic predictive solutions for farmers that would assist them in making real-time decisions in the field.

Interviews with some of the IBM project team members at the time revealed that the team believed it was entirely possible to “algorithm” agriculture, meaning algorithms could be written to solve its problems just as they could any other problem in the world. Earlier that year, IBM’s cognitive learning system, Watson, had competed on Jeopardy! against former winners Brad Rutter and Ken Jennings with astonishing results. Several years later, Watson went on to produce ground-breaking achievements in the field of medicine, and IBM’s agricultural projects were closed down or scaled back. Ultimately, IBM realized the task of producing cognitive machine-learning solutions for agriculture was much more difficult than even it could have thought.

So why did the project have such success in medicine but not agriculture?

What Makes Agriculture Different?

Agriculture is one of the most difficult fields to contain for the purpose of statistical quantification.

Even within a single field, conditions are always changing from one section to the next. There’s unpredictable weather, changes in soil quality, and the ever-present possibility that pests and disease may pay a visit. Growers may feel their prospects are good for an upcoming harvest, but until that day arrives, the outcome will always be uncertain.

By comparison, our bodies are a contained environment. Agriculture takes place in nature, among ecosystems of interacting organisms, and crop production happens within those ecosystems. But these ecosystems are not contained. They are subject to climatic occurrences such as weather systems, which affect whole hemispheres and vary from continent to continent. Understanding how to manage an agricultural environment therefore means taking literally hundreds, if not thousands, of factors into account.

What may occur with a given seed and fertilizer program in the United States’ Midwest is almost certainly unrelated to what may occur with the same seed and fertilizer program in Australia or South Africa. Factors driving that variance typically include rainfall per unit of crop planted, soil type, patterns of soil degradation, daylight hours, temperature, and so forth.

So the problem with deploying machine learning and artificial intelligence in agriculture is not that scientists lack the capacity to develop programs and protocols to begin to address the biggest of growers’ concerns; the problem is that in most cases, no two environments will be exactly alike, which makes the testing, validation and successful rollout of such technologies much more laborious than in most other industries.

Practically, to say that AI and Machine Learning can be developed to solve all problems related to our physical environment is basically to say that we have a complete understanding of every interaction of physical or material activity on the planet. After all, it is only through our understanding of ‘the nature of things’ that the protocols and processes guiding a cognitive system’s rational capabilities are designed. And although AI and Machine Learning are teaching us many things about how to understand our environment, we are still far from being able to predict critical outcomes in fields like agriculture purely through the cognitive ability of machines.

Conclusion

Backed by the venture capital community, which is now funneling billions of dollars into the sector, most agricultural technology startups today are pushed to complete development as quickly as possible and then encouraged to flood the market as quickly as possible with their products.

This usually results in product failure, which breeds skepticism in the market and delivers a blow to the integrity of Machine Learning technology. In most cases, the problem is not that the technology does not work; the problem is that the industry has not taken the time to respect that agriculture is one of the most uncontained environments to manage. For technology to truly make an impact in the field, more effort, skills, and funding are needed to test these technologies in farmers’ fields.

There is huge potential for artificial intelligence and machine learning to revolutionize agriculture by integrating these technologies into critical markets on a global scale. Only then can it make a difference to the grower, where it really counts.

by Joseph Byrum, a senior R&D and strategic marketing executive in Life Sciences – Global Product Development, Innovation, and Delivery at Syngenta.

First Atlantic Cybersecurity Institute (Facyber) Now Serving Francophone Africa

Through a strategic partnership with Cameroon-based K10 CASA Consulting, First Atlantic Cybersecurity Institute (Facyber) is now serving Francophone Africa. This partnership will help Facyber expand its cybersecurity learning solutions to new markets in Africa.

For us, this is a critical relationship as K10 CASA Consulting is local with deep presence in the Francophone Africa markets. K10 CASA will coordinate the enrollment of Learners in the local markets. And when necessary, it will help coordinate cybersecurity and digital forensics seminars/workshops in the markets.

 

First Atlantic Cybersecurity Institute (Facyber) is a cybersecurity training, consulting and research company specializing in all areas of cybersecurity including Cybersecurity Policy, Management, Technology, Intelligence and Digital Forensics.

The clientele base covers universities, polytechnics, colleges of education, governments, government labs and agencies, businesses, civil organizations, and individuals. Specifically, the online courses are designed for the needs of learners of any discipline or field (CS, Engineering, Law, Policy, Business, etc.), with the components covering policy, management, and technology. Please see complete Facyber curricula here.

The programs are structured thus:

  • Certificate Program (Online 12 weeks)
  • Diploma Program (Online 12 weeks)
  • Nanodegree Program (Live 1 week)

 

For Further Information please contact:

K10 CASA Consulting Africa

Ancien Immeuble Ringo, Nylon-Bastos

Yaoundé, Cameroon

Phone: +237 697 405 721

Press@k10casaconsulting.com

http://k10casaconsulting.com/

 

Facyber, USA

7429 Lighthouse PT,

Pittsburgh, PA 15221 USA

info@facyber.com

http://facyber.com/

Here is why Customer Capital Is better than Venture Capital

The decision on how to fund your early stage agritech startup has significant consequences for founders who are navigating the avalanche of information surrounding startup financing. 


About 18 months ago I was looking to raise Venture Capital (VC) Series A financing for AgDNA. During the process, I was discussing our progress with one of our existing private investors when he asked me “why are you prioritising venture capital over customer capital?”

I didn’t appreciate the impact of his question at the time. Heck, I’ve got an MBA. I’ve read all the startup success stories. Growing your business with VC is how it’s done. Or at least that’s what I thought at the time.

Raise cash. Grow business. Live happily ever after…

The VC Minefield

However, the reality of capital raising is much different. Once you go down the VC path, new dynamics come into play: company valuation, preference shares, pre-emptive rights, compounding interest, board seats, shareholder dilution, and the expectation of a 10X return for the VC fund.

Hmmm, did I miss this MBA class?

The Pitch

Nevertheless, we signed up with the team at AgFunder to get the word out about AgDNA and get in front of the VC agritech community. We launched our campaign on AgFunder and were immediately being introduced to genuine agritech VC firms. Perfect, term sheet here we come!

The pitch about our business started out well and became more and more succinct with every presentation. However, after about a dozen pitches, I could see a trend beginning to take shape. Essentially every VC put AgDNA in the agritech “software” bucket. This meant a lot of energy had to go into differentiating our value proposition from our competitors and educating investors on the merits of software in agriculture.

I soon learned VC doesn’t automatically translate into expertise about your sector. There are some very knowledgeable VC firms and some that are still trying to figure out how agriculture works and what agritech means for the customer.

Side note: if you’re an agritech startup looking to raise capital then contact AgFunder — great team, great contacts, great results.

The Term Sheet

After plenty of pitching, AgDNA received several term sheet offers. We accepted the one we thought was the best fit for our business. It was a corporate VC as lead investor with two other VC funds co-investing. They gave us a fair valuation, with standard VC terms.

Remember the plan. Raise cash. Grow business. Live happily ever after…

However, within a week of signing the term sheet, the lead corporate VC appointed a new CEO to their parent company and many projects within their organisation were put on hold, including new investments by their VC business unit.

Subsequent changes in management by the incoming CEO resulted in changes to our term sheet. Moving the goal posts this early on in the relationship felt like a sign of things to come. So we decided to reject the revised term sheet and go our own way.

Back to square one.

That Question

By this stage, almost 12 months had gone by, and I remembered once again the question of “venture capital versus customer capital.” The answer was now much clearer to me. Customer capital (aka sales revenue) was the most obvious and efficient source of cash to grow the business.

Customer capital doesn’t come with all the fine print of venture capital. It doesn’t need to be paid back, and it forces you to focus on the core of your business. But sales channels take time to develop, and the seasonality of agriculture means cashflow can be lumpy.

Realigning the business toward customer capital and organic growth would require laser focus.

The Runway

Every startup needs working capital to function, to grow the team, to build a great product and to delight its customers. So we raised additional seed capital from private investors within the ag sector. This allowed us to remain true to our core beliefs around agritech and it allowed all shareholders to remain on equal terms with only ordinary shares on issue.

Most importantly, it allowed the company to remain 100% focused on the customer.

Rocket Fuel

I am still a firm believer in the role of VC and the impact it can have on the growth trajectory of a startup. But it must be timed just right.

Venture Capital is like rocket fuel. Switch to thrusters, press ignition and hold on as the acceleration compresses you back in your seat. But be careful, accelerating your startup too early or in the wrong direction can be disastrous.

I have watched numerous agritech startups raise capital too early. Their product market fit was unclear and their revenue models questionable at best.

Build it. Nail it. Scale it.

This is the formula for many technology business success stories. Accelerating your agritech startup too early with VC finance could have long-term unintended consequences. At AgDNA we elected to make sure we were well into the “Nail It” phase before reconsidering VC backing.

The Outcome

Customer capital in the form of sales revenue is the cheapest type of financing for any business. The ability to grow revenue to the point where the company is breakeven and ultimately profitable can provide a lot of flexibility and freedom to operate going forward.

With customer capital you have options. You can continue to grow organically, or you can consider raising venture capital to accelerate growth and “scale it.” And with a healthy amount of customer capital, you can explore VC on your terms — because you want to — not because you have to.

Of course, the decision to take on VC finance and its timing is different for every startup and personal for every founder. But if you’re looking to raise capital for your startup, ask yourself “venture capital or customer capital”?

You might find that focusing your energy on the customer and building a profitable business is the right answer in the near term. It might not be as glamorous as VC, but long term it might be the best decision you ever made.

Paul Turner is CEO of AgDNA, the Australian precision ag and farm management software company. He originally published this as “Venture Capital Versus Customer Capital: What’s Right For Your Agritech Startup?”

How Internet of Things (IoT) will Change the Design of Embedded Systems

The Internet of Things (IoT) is about to change profoundly the design of embedded systems—but probably not in the way you are thinking. The change will begin not in silicon or in algorithms but in business models. Yet it will quickly permeate every aspect of embedded design.

Early warnings of the shift began several years ago, when IBM—the quintessential hardware company—began to divest its hardware operations to focus on services. Today, we see the CEO of Apple saying he is focused on doubling the company’s services business—including the App Store and Apple Music—from last year’s $25 billion, which already exceeded Mac sales.

But what do IBM or Apple corporate strategies have to do with embedded design? The answer is illustrated in a recent product announcement from a very different kind of company.

An All-Hearing Speaker

Consider the Amazon Echo. At first glance it is a rather expensive amplified, wireless loudspeaker—a really simple embedded system. But to stop there would be to completely misunderstand Amazon’s business model for the Echo. And without getting the business model, you would conclude the Echo is the most hopelessly over-designed audio appliance in history. But it is not. It is a harbinger.

Reading Amazon’s literature, one learns that the Echo has not only a Bluetooth port like any other wireless speaker, but also a WiFi connection that allows it to play audio files directly from the Amazon cloud. The user interface for this connection is Amazon’s Alexa voice-recognition personal assistant.

But Alexa can do much more than front-end Amazon Music. It can also access other commercial music services. And Web services like Yelp or Google Calendar. And it can answer questions with search-based responses. And it gives access to thousands of commercial apps, and to Amazon shopping. In short, Alexa makes this wireless speaker a portal for the retail Web.

But wait—there’s more. Echo can also front-end smart-home IoT networks, giving you voice control over everything from lights to door locks to your furnace.

The point Amazon doesn’t advertise is that in providing all this, the Echo becomes the collection point for a huge amount of personal data. It observes what you listen to, what questions you ask, what you buy, and how you interact with your home. It can infer, from its beam-forming microphone array and cloud-based postprocessing, the identities and locations of people in your home and what media they are playing audibly. There are already tales of an Echo attempting to execute commands given by an Alexa commercial playing on a TV in another room.

Amazon can use all of this data to refine its services to you—improving its response to questions, tweaking its music or audiobook offerings, proposing products and up- or cross-selling. But it can also, with your permission—you did read that fine print before you clicked, right?—collect data about you for use by third parties, including application developers, market researchers, or retailers. Did you really walk back to the TV when that hair-loss ad came on? Each of these uses of data is a potential revenue source for Amazon.

Accordingly, if you take an Echo apart, what you find is not just a Bluetooth chip and a power amplifier. There is the WiFi port. There is a substantial DSP chip, an array of seven microphones, 256 MB of DRAM, and 4 GB of NAND flash. All of this hardware contributes to the user interface, which reportedly can capture spoken voice from across the room with the music turned up loud. But the cost probably was justified at least partly by anticipation of these new revenue streams.

A Layered Model

“Fine,” I hear you say. “But I don’t work for Amazon.” Consider that the Echo, and Google Home, and doubtless similar devices on the way from other giant companies with major Web retail presence, are not just harbingers of future consumer audio products. They are pointing out a path for many kinds of embedded systems.

Let’s look, for example, at the slowly emerging industrial IoT. Traditionally, an industrial controller is a microcontroller unit (MCU), digital signal processor (DSP), or FPGA with a number of inputs from the device under control, each input measuring a necessary state variable, and a number of outputs to various actuators (Figure 1). With the need to include the acronym IoT in the sales literature, some such designs now include a network interface of some sort, with at least the implied ability to log data to and receive commands from this network port. This ability may simply rest on top of the controller’s legacy functionality. Or it may be used, latencies permitting, to move some of the control functions to an external hub or even into the cloud.
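The split described here, a local control loop plus a network port for logging state, can be sketched roughly as follows. `PivotController`, the proportional setpoint logic, and the queue standing in for the network link are all hypothetical illustrations, not any vendor’s design:

```python
import queue
import time

class PivotController:
    """Minimal sketch of an IoT-connected controller: the control decision
    stays local (low latency), while each state snapshot is pushed to an
    outbound queue that a network thread would drain to a hub or cloud."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.telemetry = queue.Queue()   # stands in for the network link

    def step(self, measured: float) -> float:
        # Local proportional control: act immediately, no network round trip
        gain = 0.5
        actuator_output = gain * (self.setpoint - measured)
        # Log the state for cloud-side analytics (non-blocking)
        self.telemetry.put({"t": time.time(), "measured": measured,
                            "output": actuator_output})
        return actuator_output

ctl = PivotController(setpoint=20.0)
print(ctl.step(18.0))         # → 1.0 (drive toward the setpoint)
print(ctl.telemetry.qsize())  # → 1 (one state snapshot queued)
```

The point of the layering is that the queue can back up or the link can drop without ever stalling the control decision itself.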

Figure 1. A first cut at an IoT-connected controller logs data and divides up the tasks.

Now with a sideways glance at the Echo, let’s go a little further. If the controller is logging system state to the cloud, someone up there is building a huge, and potentially valuable, data set. Big-data analysis might detect prospective optimizations of the control loop, or trends that could predict needs for maintenance. Or it might spot anomalies that could indicate trouble elsewhere in the system, or even operator malfeasance. All of this is technical data that can be used within the system.

The trend is for data that normally would have been locked into a local control loop to be collected and disseminated more widely within the control system, or even beyond. Just how deeply the need to give IoT access to control variables has penetrated into the hardware is illustrated by motor control subsystems and chips from German motion experts Trinamic. Company director of marketing Jonas Proeger observes, “That is a very typical use of our monitoring and sensorless control functionalities like stallGuard2, which is sensing the load on the motor during movement.”

In addition to locally controlling motor stalls, the motion controller can continuously report data to an external app. “You can monitor your system while it is in use. If there is a continuous increase in load over time you can expect that your mechanical system is wearing out. Some of our customers use this to automatically detect when it’s time to call the service technician.”
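Trinamic’s actual API isn’t shown in the article, but the wear-detection idea the quote describes can be sketched generically: fit a trend line to recent load readings and flag service when the load rises over time. The threshold and readings below are invented for illustration:

```python
import numpy as np

def needs_service(load_history: list[float], slope_threshold: float = 0.05) -> bool:
    """Flag maintenance when motor load shows a sustained upward trend.

    A rising load at constant duty often means the mechanics are wearing
    out (increased friction). We fit a line to the history and compare
    its slope to a per-machine threshold (0.05 is an assumed value).
    """
    t = np.arange(len(load_history))
    slope, _ = np.polyfit(t, load_history, deg=1)
    return bool(slope > slope_threshold)

steady = [1.0, 1.02, 0.99, 1.01, 1.0, 0.98]     # normal operation
wearing = [1.0, 1.1, 1.25, 1.3, 1.45, 1.5]      # friction creeping up
print(needs_service(steady))    # → False
print(needs_service(wearing))   # → True
```

A real system would smooth out duty-cycle variation first; the trend only means wear if the commanded load is roughly constant.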

In some cases, data collected at the point of control will be exported for use by other parts of the control network, or even by cloud apps, all in the name of improving system performance. But other uses of the data might have a very different client. State data analysis can give product planners and marketers insights into how the system is being used, and even where there are friction points between operators and equipment. Such data can lead to new features or services, and to further sales calls to enhance installed equipment.

Some of the data planners want, however, may lie beyond simple state information. It may require the controller to extend its sensors deeper into, or further beyond, the device under control. Perhaps information about the equipment’s surroundings is valuable, or data about the operator’s behavior. Think about that microphone array on the Echo, and consider that sound can carry a great deal of information about vibration, friction, shaft lashing, and other effects that might not be easy to extract from just winding currents on the motors.

Another interesting example of collateral information comes from a human-input technology developer called Quantum Interface. The company’s IP, instead of relying on mouse sliding and clicking, uses gesture input devices to capture, for example, the center of mass, velocity vector, and predicted trajectory of an operator’s hand in three-dimensional space.

Company founder and CTO Jonathan Josephson explains that such continuous, spatial motions can be more natural than pointing and clicking. They can create economies—for instance by allowing the system to observe an operator opening a valve rather than having to read a shaft encoder on the valve shaft. And maybe most important, such an interface can detect metadata—such as the operator’s feelings of urgency, uncertainty, or distraction—from the motion, even before a gesture is completed. Whether from whole-body motion, hand motion, or eye tracking, “the path tells us everything about you,” Josephson says.

The controller may even look beyond its operator for data. In many situations there is a threshold in how much state information you collect. Below the threshold it is more economical to measure individual state variables directly with analog sensors. Above the threshold, it makes more sense to capture many variables at once indirectly—for instance with video or audio—and then to use analytics to derive values for the individual variables.
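That threshold can be made concrete with a toy break-even calculation; every cost figure below is an invented assumption, not industry data:

```python
def cheaper_to_use_camera(n_variables: int,
                          cost_per_sensor: float = 40.0,
                          camera_cost: float = 300.0,
                          analytics_cost_per_variable: float = 5.0) -> bool:
    """Toy break-even test for direct sensing vs. camera + analytics.

    Direct sensing scales linearly with the number of state variables;
    a camera is a fixed cost plus a small per-variable analytics cost.
    All costs here are illustrative assumptions.
    """
    direct = n_variables * cost_per_sensor
    indirect = camera_cost + n_variables * analytics_cost_per_variable
    return indirect < direct

print(cheaper_to_use_camera(4))    # → False: few variables favour sensors
print(cheaper_to_use_camera(20))   # → True: many variables favour the camera
```

With these assumed numbers the crossover sits around nine variables; the real crossover depends on sensor installation costs and how reliable the analytics must be.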

The classic example comes from municipal services. It is far easier to derive parking-space occupancy from an overhead video camera than from sensors buried under each parking space. And once you have the video stream, you can further analyze it for traffic information, security information, and maybe even foot traffic and pedestrian behavior data for local merchants.

A New Line of Business

This last example illustrates an important point. Sometimes the data available to a controller have nothing to do with the controller’s original function—they are available because of the accident of the controller’s location, or of other data the controller must gather. If you use a video camera to monitor on-street parking, you are also going to collect a lot of data about sidewalk goings-on. Less obviously, perhaps, if you are using video to replace a number of shaft encoders and linear position sensors, using a wide-angle lens and aiming the camera up a bit could bring a windfall of data about operator behavior, other operations on the factory floor, or customer behavior in a retail setting.

But why? None of this data has anything to do with the controller’s function. Now we come to the importance of thinking about alternative business models. Data is money. Often this extraneous data can create great value when analyzed, properly sanitized, and marketed to third parties. Operator data can yield information for functional-safety compliance, worker training, or even workplace security. Shop-floor video could yield clues about workflow, process inputs, even incidentals like lighting, supervisor behavior, and weather. The wealth of information you can get by watching retail customers needs no elaboration.

The point is that not only is this data valuable, but you—the system vendor, service provider, or equipment owner—can sell it. This is a valuable potential revenue stream, sometimes eclipsing the value of the embedded system’s original function. And that changes everything.

A New Machine

The presence of the IoT brings a new dimension to a wide range of embedded systems, from industrial controllers to vehicular subsystems to drones and consumer appliances. Now designers must consider not only the embedded system’s primary function, but the potential value of the data the system might gather.

This value can come from several sources:

  • Data for improved control strategies, broader optimizations, or anticipatory maintenance
  • Vital inside information about users and use cases for product planning and marketing
  • Data that can be mined and sold to third parties

As the value and volume of collected information grow, these requirements begin to materially alter the system design (Figure 2). For example, the Amazon Echo, far from being a simple Bluetooth speaker, devotes significant cost to a microphone array, signal processing, WiFi, and Internet access. Much of this cost can be justified only by the other functions—retail transactions and Web traffic—the Echo generates.

Figure 2. An IoT-connected controller evolves toward being a remote data collector.

Moving deeper into the IoT-influenced age, we may begin thinking of embedded systems not as independent fixed-function devices, nor even as clusters of sensors and actuators for cloud-based algorithms. Rather, they may become collectors of dense, high-volume data, with hardware dedicated to transporting streaming data, to local analytics functions, and to vastly enhanced security. They become the ears, eyes, and perhaps snouts of increasingly voracious big-data analyses, seeking the ultimate profitability of omniscience.

By Ron Wilson, Altera Corp

Six technologies that will reshape Africa’s future – IoT, Big Data, Robotics, 3D Printing, AI, and Blockchain

Africa has registered impressive economic growth over the past decade and a half, displaying remarkable resilience amid volatility and turmoil in global markets. The time is now ripe for the continent to turn the page and embark on a journey toward a major economic transformation. For this, Africa needs a new economic growth model powered by the strength of the real economy, entrepreneurship, and innovation.

A new report from Intellecap explores the critical role emerging technologies can play in helping Africa address its age-old development challenges and achieve exponential growth over the next decade. The authors researched and interviewed a range of emerging-technology specialists from around the world and experts with deep experience in the social entrepreneurship and impact space in Africa. That work helped them develop a framework for analyzing the potential of emerging technologies to amplify impact creation in the African context.

The report highlights how emerging technologies can trigger a set of big shifts to help Africa leapfrog and combat its development challenges. The research indicates that although early evidence of these shifts is already visible, signaling the beginning of Africa’s innovation journey, significant whitespaces currently exist. The report identifies these key innovation whitespaces based on a scan of 100 technology use cases in Africa. It concludes by identifying a set of opportunities these whitespaces present for key stakeholders to help nurture a vibrant, high-impact technology innovation ecosystem and, in the process, become a part of Africa’s journey toward economic transformation.

While achieving food and nutrition security is widely recognized as arguably Africa’s greatest development challenge, water security and low-carbon energy security are further mega-challenges intertwined with the food security crisis. In addition, Africa needs to build holistic healthcare ecosystems, create a future-ready workforce, and financially include the majority of its population.

The following are the main technologies which will reshape Africa’s tomorrow:

  • Internet of Things: IoT uses an array of sensors to capture real-time data from a wide range of sources, including computing devices, mechanical and digital machines, objects, animals, and people.
  • Big Data: Big data refers to the use of advanced data analytics, including predictive analytics, to extract value from voluminous or complex data sets.
  • Artificial Intelligence: AI constitutes advanced algorithms applied to large data sets for observing patterns, gathering insights, problem solving, predicting, and real-time decision making.
  • Blockchain: A blockchain is a tamper-proof record of transactions distributed across all participants in a blockchain network. Via digital authentication and verification, the technology removes intermediaries and reduces transaction time and fraud.
  • Robotics: Robotics refers to the use of robots to automate and standardize the quality of work with minimal errors. It covers a large variety of robots, including drones.
  • 3D Printing: 3D printing, or additive manufacturing, is a process that creates a three-dimensional physical object from a digital design. It can manufacture highly customized parts that would otherwise be difficult to produce with traditional manufacturing.
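The tamper-proof property mentioned under Blockchain comes from chaining: each record embeds a hash of the previous one, so altering any past transaction invalidates every later link. A minimal, single-machine sketch (a real blockchain adds distribution and consensus on top of this):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    """Each new block records the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def chain_is_valid(chain: list) -> bool:
    """Verify every block still references its predecessor's true hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "Alice pays Bob 10")
append_block(chain, "Bob pays Carol 4")
print(chain_is_valid(chain))          # → True
chain[0]["tx"] = "Alice pays Bob 99"  # tamper with history
print(chain_is_valid(chain))          # → False
```

The tampered record still exists, but its new hash no longer matches what the next block recorded, which is what makes the ledger tamper-evident.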

Read the Executive Summary, courtesy of Intellecap here.