Nigeria 2011 S&T Budget Is Less Than Microsoft R&D Weekly Tab. A Senator Spends Monthly What Some Agencies Spend Annually


We will be analyzing the science and technology budget of the Federal Ministry of Science and Technology.  The budget is available here:

 

People, we are still far below. What Microsoft spends on R&D in a week covers what Nigeria budgets for the entire science and technology ministry. At around $9.5 billion a year, Microsoft puts more than N1 trillion into R&D. One percent of that is about N10 billion. Alternatively, look at it as weekly R&D money: $9.5 billion a year works out to roughly $183 million per week, which is over N27 billion. The total S&T budget of Nigeria is not up to that weekly figure.
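A quick back-of-envelope check of these figures (the ~N150 to the dollar exchange rate is our assumption, not a figure from the budget document):

```python
# Back-of-envelope comparison using the article's Microsoft R&D figure.
MSFT_RND_USD = 9.5e9      # annual R&D spend in USD (from the article)
NAIRA_PER_USD = 150       # approximate 2011 exchange rate (assumption)

annual_naira = MSFT_RND_USD * NAIRA_PER_USD   # ~N1.4 trillion per year
weekly_naira = annual_naira / 52              # ~N27 billion per week

print(f"Annual: N{annual_naira / 1e12:.2f} trillion")
print(f"Weekly: N{weekly_naira / 1e9:.1f} billion")
```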

 

It is just unfortunate. We will be looking at the numbers closely and telling you what we expect from the agencies. The money is small because salaries are eating up the figures. Tekedia is at work and we are looking at them.

 

Key immediate numbers are:

NATIONAL INFORMATION TECHNOLOGY DEVELOPMENT AGENCY (NITDA) gets only N6.5m in total capital allocation

 

The big guy is the NATIONAL SPACE RESEARCH AND DEVELOPMENT AGENCY – ABUJA, which gets N730,165,405 for capital costs

 

But you know what? One Senator spends, monthly, what some of these agencies spend in a year. Who is deceiving whom?

Power Dissipation and Interconnect Roadmap


Since the invention of the integrated circuit by Jack Kilby a few decades ago, the number of transistors in a die has continuously doubled every 24 months. This is the famous Moore’s law, which is still relevant today. Sustaining this trend has been fuelled by the ability of chip designers to cram more transistors together. That has made designs denser but has introduced problems like power dissipation and interconnect noise. From many indications, CMOS remains the most elegant technology for making chips owing to its low static power dissipation and ease of integration when compared with technologies like the bipolar junction transistor (BJT). This implies that it will be in use for the foreseeable future, and battling the associated problems presents a huge challenge to the stakeholders.
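The doubling trend can be sketched numerically; the Intel 4004 baseline of 2,300 transistors (1971) is our illustrative choice, not a figure from the text:

```python
# Moore's law sketch: transistor count doubling every 24 months.
def transistors(years_elapsed, start_count=2300, period_years=2):
    """Project transistor count given a two-year doubling period.
    2300 is the Intel 4004 (1971) count, used as an illustrative baseline."""
    return start_count * 2 ** (years_elapsed / period_years)

# 40 years of doubling: 2300 * 2**20, roughly 2.4 billion transistors
print(f"{transistors(40):.2e}")
```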

 

State-of-the-art CMOS technologies are well below 100-nanometer transistor feature size. Products based on the 65-nanometer process have already hit the market, and 45-nanometer and 32-nanometer CMOS processes are expected before 2008 and 2010 respectively. This ambitious strategy of transistor miniaturization translates to thinner interconnects as well as a scaled-down system supply voltage.

 

To make these systems appealing to customers, there are tight budgets on power consumption and other parameters. For instance, the ITRS (2005) forecasts an allowable maximum power for battery-operated (low-cost/handheld) systems of 2.8W in 2005, rising to only 3W by 2020, a very tight budget considering the expected advancements and complexity of these systems.

 

Within this period, the power supply for high-performance systems is expected to scale down by 36%, while the allowable maximum power for high-performance (heatsink-equipped) devices will increase by only 19%. The underlying consequence of this scaling would be more dominant short channel effects (SCE) and gate leakage current, due partly to shorter feature sizes and thinner gate oxides respectively. Besides, controlling the threshold voltage in the face of non-uniform doping as technology scales will be a major issue.

 

Similarly, feature size reduction affects interconnect performance. The decrease in interconnect width and thickness increases resistance, while smaller spacing progressively increases circuit capacitance. These increases in resistance and capacitance are not desirable in chip wires, and they have elevated the role of interconnect in integrated circuit design and development. Ignoring the fringing-field component of wire capacitance, which does not vary with feature size, scaling the three wire dimensions by the same factor leaves the interconnect delay unaffected. But in reality, the wire dimensions are not all scaled by the same factor.
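A minimal sketch of that scaling argument, deliberately ignoring the fringing component (all quantities are in illustrative units):

```python
# RC-delay scaling sketch: scale all three wire dimensions (and spacing)
# by the same factor and the parallel-plate RC product is unchanged.
# The fringing capacitance, which does not scale, is ignored here.
def wire_rc(length, width, thickness, spacing, rho=1.0, eps=1.0):
    r = rho * length / (width * thickness)   # wire resistance
    c = eps * width * length / spacing       # parallel-plate capacitance
    return r * c

base = wire_rc(100.0, 1.0, 1.0, 1.0)
scaled = wire_rc(100.0 / 2, 1.0 / 2, 1.0 / 2, 1.0 / 2)   # s = 2
print(base, scaled)   # equal: RC delay is scale-invariant in this model
```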

 

Furthermore, packing more circuits on a single die, made possible by the smaller transistors, significantly increases the number of long interconnections. The resulting effect is that interconnect delay grows larger, even exceeding the gate delay of the transistor. With continuous reduction in feature size, interconnect noise and delay will remain major issues. Aluminum, once the main interconnect material, has long been replaced by copper, which has a lower resistivity. But with time, the performance of copper/low-k interconnects will become inadequate to meet the speed and power dissipation goals of highly scaled ICs.

 

So what is the future? The ITRS proposes early availability of high-k gate dielectrics in order to meet the stringent gate leakage targets, especially in low-power devices. It also states that development of low-dielectric-constant (low-k) materials together with low-resistivity metal systems will become critical for reducing signal propagation delay. Nonetheless, it acknowledges that parts of the roadmap are a ‘red brick’, i.e. ‘no known solution’, at least within this decade. In many instances, nanotechnology has been projected to supersede CMOS technology unless these challenges are overcome.

 

Notwithstanding, the challenges of interconnect and power dissipation call for new system architectures, new materials and innovative optimization tools that can accurately model the complex relationships that exist in systems at the nanometer regime. Chip makers have vigorously adopted new materials to reduce interconnect capacitance (e.g., Intel has used a low-k carbon-doped oxide dielectric in its newer processes). Aggressive new approaches will certainly emerge in the future if the demise of Moore’s law is to be delayed.

Electronic Commerce in Nigeria is Making Progress


The landscape of ecommerce in Nigeria has long been cemented. Gone are the days when nothing could happen online. All that has changed, and the nation is making progress very quickly.

 

Tekedia notes that there are many Nigerian sites with full e-commerce capabilities these days, with payment methods denominated in Naira. In other words, they have built channels for those who make electronic payments in Naira to participate. This industry is growing and is very healthy indeed.

 

With more Nigerians online now, this business will surely improve. Most statistics, including those from Facebook and other top websites, show Nigerians are getting online, and that is good news for the growth of mobile and electronic payments. The more time they spend online, the more they will shop online.

 

With some standardization and unification in the payment system, there is now a measure of order. Interswitch is the behemoth, while eTranzact works the line. Nearly all the banks are Interswitch-ready, and having that consortium is already helping the ecosystem. The good news is that a card issued in Nigeria can be used as in other nations: at POS terminals, on the Internet and at ATMs. Of course, not many will use their cards online because of the illusion of risk.

 

On some big websites, we have noted the logos of international brands like Visa and MasterCard; the banks are issuing these cards in the country. Though PayPal is integrated into some sites, the banks behind it are not local, and that does not help much.

 

The big change will come with the penetration of mobile payment. As the players juggle their licenses, mobile and epayment in Nigeria will grow.

 

The major drawback to faster adoption of eCommerce remains the cost of integrating Interswitch. It is very expensive: more than N150,000 just to set it up per site. At that price, merchants will not be in a hurry to add it.

Power Dissipation and Noise Challenges in Ultra Deep Submicron CMOS Technology


The invention of the complementary metal oxide semiconductor (CMOS) integrated circuit is a major milestone in the history of modern industry and commerce. It has driven revolutionary changes in computing through its performance, cost and ease of integration. But as transistor sizes scale down into the nanometer regime, many challenges arise in the reliability and performance of these systems.

 

Signal integrity and power problems are noticeably among the major ones. In the past few decades, advances in chip performance have come through increased integration and complexity in the number of transistors on a die. Though supply and threshold voltages have been scaled with every CMOS generation, power dissipation and interconnect noise have continued to increase. This trend is costly in terms of shorter battery life, complex cooling and packaging methods, and degraded system performance.

 

Power dissipation in CMOS circuits comprises static and dynamic components. In submicron technologies, static power dissipation, caused by leakage and subthreshold currents, contributes a small percentage of the total power consumption, while dynamic power dissipation, resulting from the charging and discharging of the parasitic capacitive loads of interconnects and devices, dominates the overall power consumption.
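The dynamic component follows the standard first-order switching model, P = a·C·V²·f; the parameter values below are illustrative assumptions, not figures from any specific process:

```python
# Dynamic (switching) power sketch: P = a * C * V^2 * f, the standard
# first-order model for charging/discharging capacitive loads.
def dynamic_power(activity, cap_farads, vdd, freq_hz):
    return activity * cap_farads * vdd ** 2 * freq_hz

# Illustrative numbers: 10% activity, 1 nF total switched capacitance,
# 1.2 V supply, 1 GHz clock.
p = dynamic_power(activity=0.1, cap_farads=1e-9, vdd=1.2, freq_hz=1e9)
print(f"{p:.3f} W")   # prints "0.144 W"
```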

 

But as technologies scale down to the nanometer regime (ultra deep submicron, UDSM), static power dissipation becomes more dominant than dynamic power consumption. And despite the aggressive downscaling of device dimensions and reduction of supply voltages, which lower the power consumption of individual transistors, the exponential increase in operating frequencies results in a steady increase in total power consumption.

 

Interconnect noise and delay arise during the distribution of on-chip signals and clocks over local, intermediate and global wires. Introducing repeaters on interconnect paths mitigates delay at the expense of chip area and power consumption. With technology downscaling, interconnect resistance and capacitance increase the propagation delay: as the cross-section of a chip interconnect is reduced, the resistance per unit length increases, while closer routing increases the interconnect capacitance.
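The repeater trade-off can be sketched with a first-order distributed-RC model; all values are illustrative assumptions:

```python
# Repeater-insertion sketch: splitting a long wire into k buffered segments
# turns one large quadratic wire RC delay into k smaller ones, at the cost
# of adding k driver delays (the area/power penalty mentioned in the text).
def wire_delay(R, C, k, t_buf):
    # distributed RC delay of each segment ~ (R/k)(C/k)/2; k segments, k buffers
    return k * ((R / k) * (C / k) / 2 + t_buf)

R, C = 1000.0, 1e-12          # illustrative total wire resistance and capacitance
for k in (1, 2, 4, 8):
    print(k, wire_delay(R, C, k, t_buf=20e-12))
# Delay first falls as k grows, then rises once buffer delay dominates.
```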

 

The relationship between interconnect delay and technology shows that downscaling of feature size increases circuit propagation delay. It is evident that as technology scales, gate delay decreases while interconnect delay increases. At around the 0.12um technology node, interconnect delay became exceedingly dominant over gate delay. This increase worries chip designers in the quest for continuous circuit miniaturization and denser integration in CMOS technology.

 

The International Technology Roadmap for Semiconductors (ITRS) 2005 forecasts that continuous reduction in feature size will remain alive and well into the future. With this continuous scaling, if interconnect noise and power dissipation, especially static power dissipation, are not controlled and optimised, they promise to become major limiting factors for system integration and performance improvement.

Leakage Control Techniques in Ultra Deep Submicron CMOS Technology


Among the different leakage currents in nanometer CMOS, subthreshold and gate leakage are the most dominant. While the latter is mainly due to electron tunnelling from the gate to the substrate, the former is caused by many other factors. As a result, the leakage control techniques discussed here focus more on subthreshold currents. Over the years, many techniques have been developed to reduce subthreshold currents in both the active and standby modes in order to minimize the total power consumption of CMOS circuits.

 

While standby leakage currents are wasted when the circuit is idle and no computation takes place, active leakage currents are wasted while the circuit is in use. Generally, reducing leakage currents involves different device- and circuit-level techniques. At the device level, it involves controlling the doping profiles and physical dimensions of transistors; at the circuit level, it involves manipulating the threshold voltage (Vth) and the source biasing of the transistor.

 

A. Circuit Level Leakage Control Techniques

i) Multi Vth Techniques
This technique involves fabricating two types of transistors on a chip: high Vth and low Vth. The high Vth lowers the subthreshold leakage current, while the low Vth enhances performance through faster operation. The different transistor types are obtained through controlled channel doping, different oxide thicknesses, multiple channel lengths or multiple body biases. However, with technology scaling and the continuous decrease in supply voltage, implementing the high Vth transistor will become a major practical challenge.
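The leverage a higher Vth gives can be seen from the usual first-order subthreshold model, I_sub ∝ exp(−Vth/(n·vT)); the parameter values below are illustrative assumptions, not from any particular process:

```python
import math

# Subthreshold leakage sketch: I_sub ~ I0 * exp(-Vth / (n * vT)).
# I0, n, and the thermal voltage vT (~26 mV at room temperature) are
# illustrative first-order model parameters.
def subthreshold_leakage(vth, i0=1e-6, n=1.5, vt=0.026):
    return i0 * math.exp(-vth / (n * vt))

low, high = 0.25, 0.45   # example low/high threshold voltages (V)
ratio = subthreshold_leakage(low) / subthreshold_leakage(high)
print(f"low-Vth device leaks ~{ratio:.0f}x more than the high-Vth device")
```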

 

Dual threshold method
In logic circuits, leakage current can be reduced by assigning higher Vth to devices in non-critical paths, while maintaining performance with low Vth in the critical paths. This technique is applicable to both standby and active mode leakage power dissipation control. It ensures that the circuit operates at a high speed and reduced power dissipation.

 

Multi-Threshold Voltage Method
This method uses a high Vth device to gate the supply to a low Vth logic block, creating a virtual power rail instead of connecting the block directly to the main power rail. The high Vth switches disconnect the power supply during the standby state, resulting in very low leakage currents set by the high Vth of the series transistor. In active mode, the high Vth transistors are switched on and the logic block, designed with low Vth, operates at high speed.

This enables leakage current reduction via the high Vth and enhanced performance via the low Vth block. Alternatively, the scheme can be implemented with a high Vth NMOS transistor connected between GND and the low Vth block. The NMOS insertion is preferred to PMOS since it has a lower ON-resistance at the same width and can consequently be sized smaller. The use of these transistors increases circuit delay and area. Besides, an extra high Vth memory circuit is needed to retain data during standby mode.

 

Variable Vth Method
This method is mainly used to reduce standby leakage currents. It uses a triple well process in which the device Vth is dynamically adjusted by biasing the body terminal. By applying maximum reverse bias during standby mode, Vth is increased and the subthreshold leakage current minimized. The method can also be applied in active mode to optimize circuit performance by dynamically tuning Vth to workload requirements, allowing the circuit to operate at minimal active leakage power.

 

Dynamic Vth Method
This is a method used in active mode operation to control the leakage current in a circuit based on the desired frequency of operation. The frequency is dynamically adjusted through a back-gate bias in response to workload of a system. At low workload, increasing the Vth reduces the leakage power.

 

ii) Body Bias Control
Body biasing a transistor is an effective way of reducing both active and standby leakage through its effect of increasing the threshold voltage of MOS transistors. Applying a reverse body bias increases Vth and subsequently reduces the subthreshold leakage currents. During standby mode this can be done by applying a strong negative bias to the NMOS bulk and connecting the PMOS bulks to the VDD rail. Body biasing is also used to minimize the DIBL effect and the Vth roll-off associated with SCE. The Variable Vth method described above utilises body biasing to improve circuit performance. It is important to note that Vth is related to the square root of the bias voltage, implying that a significant voltage level is needed to raise Vth. This could be a potential challenge in UDSM, where the supply has been severely scaled down.
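The square-root relationship is the classic body-effect equation, Vth = Vth0 + γ(√(2φF + Vsb) − √(2φF)); the parameter values below are illustrative assumptions:

```python
import math

# Body-effect sketch: a sizeable reverse bias yields only a modest Vth
# increase because of the square-root dependence. Vth0 (zero-bias threshold),
# gamma (body-effect coefficient) and phi_f (Fermi potential) are
# illustrative parameter values.
def vth(vsb, vth0=0.3, gamma=0.4, phi_f=0.35):
    return vth0 + gamma * (math.sqrt(2 * phi_f + vsb) - math.sqrt(2 * phi_f))

print(vth(0.0))   # 0.3 V with no body bias
print(vth(1.0))   # a full 1 V reverse bias raises Vth by only ~0.19 V
```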

 

iii) Minimum Leakage Vector Method
The fundamental concept in this technique is to force the combinational logic of the circuit into a low-leakage state during standby periods. This state turns off the largest possible number of transistors, reducing leakage by exploiting multiple OFF transistors in stacks.

 

iv) Stack Effect-based Method
The “stacking effect” is the reduction in subthreshold current when multiple transistors connected in series (in a stack) are turned off. The transistor stacking increases the source bias of the upper transistors in the stack as well as lowers the gate-source voltage (Vgs) of these transistors. All these effects contribute to lower subthreshold leakage current in the circuit. Minimizing leakage through transistor stacking depends on the pattern of the input sequence during standby periods as it determines the number of OFF transistors in the stack.

 

Finding the low-leakage input vector involves either a complete enumeration of the primary inputs or a random search over them. While the former is used for small circuits, the latter is applied to more complex circuits. The idea is to use the input vectors to find the combination that results in the least leakage current. Once the vector is obtained, the circuit is evaluated and, if necessary, additional leakage control transistors are inserted in series in the non-critical paths to be turned OFF during standby mode.
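The search strategy can be sketched as follows; the `leakage()` estimator here is a deliberately toy stand-in for a real per-gate leakage model:

```python
import itertools
import random

# Minimum-leakage-vector sketch: enumerate (small circuits) or randomly
# sample (complex circuits) primary-input vectors, keeping the one with
# the lowest estimated leakage.
def leakage(vector):
    # Toy model: each '1' input turns on a hypothetical leaky path.
    return sum(vector)

def find_mlv(n_inputs, exhaustive=True, samples=100):
    if exhaustive:   # complete enumeration of the primary inputs
        candidates = itertools.product((0, 1), repeat=n_inputs)
    else:            # random search for larger circuits
        candidates = (tuple(random.randint(0, 1) for _ in range(n_inputs))
                      for _ in range(samples))
    return min(candidates, key=leakage)

print(find_mlv(4))   # -> (0, 0, 0, 0) under this toy model
```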

 

B. Device Level Leakage Control Techniques

Silicon-on-insulator (SOI): This is a non-bulk technology that builds transistors on top of an insulating layer instead of a semiconductor substrate. The insulating layer reduces parasitic capacitance, which results in higher operating speed and lower dynamic power dissipation in integrated circuits. Though early SOI used crystals like sapphire, emerging technologies favour the use of silicon wafers, making it economically attractive. The ITRS 2005 projects the use of ultra-thin body (UTB) SOI by 2008 to manage the increasing effects of leakage.

 

Double Gate MOSFET (DG-MOS): In traditional bulk and SOI devices, immunity from SCE such as Vth roll-off and DIBL requires increasing the channel doping to reduce the depletion depth in the substrate. The inherent drawbacks of this approach are increased substrate-bias sensitivity and degraded subthreshold swing. By replacing the substrate with another gate to form a double gate MOSFET, short channel immunity is achieved with an ideal subthreshold swing.

 

Separation by Implantation of Oxygen (SIMOX): This is a more modern and elegant technique for making the SOI structure by implanting heavy doses of oxygen directly into a silicon substrate. The wafer is then annealed at very high temperatures, which induces oxide growth below the wafer surface, leaving a thin layer of silicon on top. The resulting SOI consumes less power than bulk technologies. Other methods used in device-level control include retrograde doping and halo doping.

 

In addition to the two classes of techniques discussed above, system- and architectural-level techniques are also used for leakage reduction. These can involve designing the system architecture so that it operates at low voltage. The underlying strategy is that operating at low voltage reduces both static and dynamic power consumption and consequently minimizes leakage power. One way of doing this is to design the system with a pipelined architecture; with pipelining, it is possible to operate the system at lower voltage without performance degradation.
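The quadratic payoff of voltage scaling can be seen from the first-order dynamic power model; the voltages below are illustrative assumptions:

```python
# Voltage-scaling sketch: to first order, dynamic power scales as Vdd^2
# (at fixed switched capacitance and frequency), so running a pipelined
# design at a lower supply gives a quadratic power saving.
def dynamic_power_ratio(v_low, v_high):
    return (v_low / v_high) ** 2

# Dropping from 1.2 V to 0.9 V cuts dynamic power to 0.5625 of the
# original, i.e. a ~44% saving.
print(dynamic_power_ratio(0.9, 1.2))
```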

 

The penalty for this technique is the extra hardware required for pipelining. Another method is threshold-voltage hopping, which uses software to dynamically control the threshold voltage of transistors based on system workload. By adjusting the threshold voltage in this way, a high percentage of power savings can be realised. Furthermore, reducing the supply voltage is also a good way to cut leakage power: lowering the supply reduces the source-drain voltage, which minimizes DIBL, gate and subthreshold leakage currents.