
Nigeria’s Oil Output Hits 1.78mbpd But Still Falls Short of Budget Benchmark

Nigeria’s crude oil production has climbed to an average of 1.78 million barrels per day (bpd) in July, marking a significant improvement from earlier months, but still falling short of the 2.06 million bpd benchmark set for the 2025 budget.

The development, announced by Gbenga Komolafe, Chief Executive of the Nigerian Upstream Petroleum Regulatory Commission (NUPRC), comes as the country intensifies efforts to recover from years of chronic underproduction caused by insecurity, oil theft, and underinvestment.

Speaking at an energy conference on Monday, Komolafe said the output gains were largely due to improved security operations in the Niger Delta region, where most of Nigeria’s crude is extracted. He noted that the government is still working to increase production to 3 million barrels per day, a long-held national target.

Komolafe said the current average of 1.78 million barrels per day includes condensates, adding that the improvement stems from collaborative efforts with stakeholders to reduce losses and ensure transparency.

Nigeria, Africa’s top oil producer, depends on crude oil exports for about two-thirds of government revenue and over 80% of foreign currency earnings. Boosting production is therefore critical not just for revenue generation, but also for stabilizing the naira and financing government programs amid soaring inflation and rising debt service obligations.

However, analysts say the current production levels are far from sufficient. The 2025 national budget was drafted with a crude oil production benchmark of 2.06 million bpd — a target that remains elusive despite recent gains.

Adding to the pressure is Nigeria’s obligation to supply crude to the Dangote Refinery, which began operations in 2024. With its 650,000 bpd refining capacity, the refinery has had to source crude externally, including from the U.S. and other producers, because Nigeria’s domestic supply has remained inconsistent.

Meanwhile, the Organization of the Petroleum Exporting Countries (OPEC) reported that Nigeria’s average daily crude oil production rose to 1.505 million bpd in June, based on figures from Nigerian authorities. This marked a 3.58 percent increase from the 1.453 million bpd recorded in May and was the highest level since January. It also signaled that Nigeria met OPEC’s 2025 quota of 1.5 million bpd for the second time this year. However, OPEC’s secondary sources estimated slightly higher output at 1.547 million bpd.
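The month-on-month figure checks out against the two production numbers OPEC cites:

\[ \frac{1.505 - 1.453}{1.453} \times 100\% \approx 3.58\% \]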

In its latest performance report, the Nigerian National Petroleum Company Limited (NNPCL) said total crude oil and condensate sales dropped to 21.68 million barrels in June, down from 24.77 million barrels in May. However, gas production increased marginally to 7.581 billion standard cubic feet per day (bscf/d) from 7.352 bscf/d.

Despite the dip in crude sales, the downstream segment showed signs of recovery. Fuel availability at NNPC Retail Limited stations improved to 71 percent in June from 62 percent in May, pointing to better supply chain management.

The NNPCL also announced a profit after tax of N905 billion for June 2025 on revenue of N4.571 trillion, buoyed by improved crude and gas production.

Still, the gap between actual production and the budgetary projection poses a risk to macroeconomic planning. Industry stakeholders and economists are urging the government to urgently address infrastructure gaps, end crude theft, and prioritize investment in upstream activities if the country is to meet its budgetary and energy obligations.

The broader implications, according to experts, include possible revisions to the budget or the need for additional borrowing, which could deepen Nigeria’s debt crisis and erode the limited gains being made in oil and gas.

Inside OpenAI’s Tight-Lipped Culture: Engineers Guard Identities of ‘Most-Prized’ Talent Amid Meta’s AI Hiring Blitz

An OpenAI engineer has revealed just how protective the company has become of its top talent—particularly those working on debugging its cutting-edge AI models—amid an intensifying scramble for artificial intelligence expertise across Silicon Valley.

Speaking on the Before AGI podcast, OpenAI technical fellow Szymon Sidor described the company’s top debuggers as “some of our most-prized employees.” He stopped mid-sentence as someone quickly interjected: “No names.” Laughter followed. That moment, though clearly audible in the audio-only version of the podcast on Spotify and Apple Podcasts, was noticeably absent from the video versions uploaded to YouTube and X.

The decision to withhold their identities wasn’t incidental. It points to a larger trend: as competition escalates in the AI arms race, companies are becoming increasingly secretive and protective of their high-value technical staff—especially those whose work is vital to advancing powerful language models.

Sidor and OpenAI chief scientist Jakub Pachocki, who also appeared on the podcast, didn’t elaborate on why the names were censored. But the reason is obvious: the AI industry is now in the middle of a fierce talent war, and no company wants to make it easier for rivals to identify and poach its top minds.

Nowhere is that battle more aggressive than at Meta.

The Mark Zuckerberg-led company has gone all in on building out its superintelligence ambitions. Meta has poured billions into AI infrastructure and formed its own Superintelligence Lab—an elite group of researchers focused on developing artificial general intelligence (AGI). To staff it, the company has embarked on an aggressive recruitment campaign, offering top-tier AI scientists salaries and compensation packages worth as much as $100 million. In January, OpenAI CEO Sam Altman publicly admitted that Meta had attempted to lure his researchers with such offers.

Meta has already made major hires. It poached Shengjia Zhao, a co-creator of ChatGPT and a former lead scientist at OpenAI. It also secured Alexandr Wang, the founder of Scale AI, along with a number of other top-level researchers across the AI ecosystem. Internally, reports suggest Meta keeps a growing list of potential recruits from rival labs, underscoring the calculated nature of its recruitment drive.

The fallout is evident across the industry. AI companies, particularly those working on foundational models, are increasingly restricting internal disclosures and limiting public exposure of staff. OpenAI, for instance, no longer updates its team page on its website, and executives have been instructed to avoid name-dropping key contributors during public appearances or podcasts.

Even companies once known for promoting open collaboration have pulled back. Google DeepMind, Anthropic, xAI, and Inflection AI have all either beefed up internal NDAs or introduced policies restricting staff from appearing in media without prior clearance. The goal is to avoid giving competitors a roadmap to their core engineering teams.

The secrecy is also reshaping AI culture. What was once an academic-like environment where breakthroughs and talent were openly celebrated has morphed into a guarded corporate battlefield. Interns and junior researchers who would typically be spotlighted in published papers or product announcements are now increasingly left anonymous.

With trillions in future economic value projected from AI, the people who can fine-tune, debug, and scale these models have become more valuable than the models themselves. This is especially true in debugging, a process that has proven crucial to aligning AI behavior and preventing catastrophic model failures.

OpenAI’s Sidor hinted that the company has quietly hired more people with elite debugging skills, treating them like prized assets. But unlike in the early years of AI development, their names will remain off the record. Because in today’s AI gold rush, knowing who is working on the models may be just as valuable as knowing how they work.

‘Godfather of AI’ Warns Machines Could Develop Thoughts Beyond Human Understanding

Geoffrey Hinton, widely regarded as the “Godfather of AI,” is once again sounding the alarm over the unchecked acceleration of artificial intelligence development, warning that humans could soon lose the ability to comprehend what AI systems are thinking or planning.

In a recent episode of the “One Decision” podcast, Hinton explained that today’s large language models still operate with “chain-of-thought” reasoning in English, making it possible for researchers and developers to trace how they arrive at certain conclusions. But that transparency might not last much longer.

“Now it gets more scary if they develop their own internal languages for talking to each other,” Hinton said. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking.”

He also noted that AI systems have already demonstrated the capacity for producing “terrible” thoughts, hinting at the potential for machines to evolve in dangerous and unpredictable ways.

These comments carry added weight coming from Hinton, whose research underpins much of the AI revolution. For decades, he has been at the forefront of machine learning. In the 1980s, Hinton co-developed backpropagation, a key algorithm that allows neural networks to learn from data by propagating prediction errors backward through their layers, a method that later enabled the explosive growth of deep learning. His landmark 2012 paper, co-authored with two of his students at the University of Toronto, introduced a deep neural network that achieved record-breaking results in image recognition. That work is widely credited with catalyzing the current AI boom.
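For readers curious how that technique works in practice, here is a minimal sketch of backpropagation on a tiny two-layer network in Python. The toy data, network shape, and learning rate are illustrative assumptions, not anything drawn from Hinton’s papers.

import numpy as np

# Minimal backpropagation sketch: a two-layer network learns a toy
# sign-agreement task. Shapes, data, and hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                  # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)   # 1 when the two features share a sign

W1 = rng.normal(scale=0.5, size=(2, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights
lr = 0.5                                      # learning rate (assumed)

for step in range(2000):
    # Forward pass: compute predictions layer by layer.
    h = np.tanh(X @ W1)                       # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2)))           # sigmoid output probability

    # Backward pass: propagate the error gradient back through each layer.
    grad_out = (p - y) / len(X)               # cross-entropy gradient at the logits
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # chain rule through tanh
    grad_W1 = X.T @ grad_h

    # Gradient descent: nudge each weight against its gradient.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1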

Hinton went on to join Google, where he spent a decade working on neural network research. He was instrumental in helping Google integrate AI into products like search and translation. But in 2023, he left the company, citing the need to speak more freely about his concerns over the risks posed by the very systems he helped create.

Since then, Hinton has been outspoken in his criticism of the AI industry’s rapid expansion, arguing that companies and governments alike are unprepared for what lies ahead. He believes artificial general intelligence (AGI), a form of AI that rivals or surpasses human intelligence, is no longer a distant possibility.

He expressed concern that once we build machines smarter than ourselves, we may no longer understand what they are doing.

That possibility carries profound implications. If AI models begin to reason in ways that humans cannot interpret, experts warn, the ability to monitor, audit, and restrain these systems could vanish. Hinton fears that without guaranteed mechanisms to ensure these systems remain “benevolent,” the human race could be taking existential risks without adequate safeguards.

Meanwhile, the AI race is heating up. Tech companies are offering massive salaries and stock packages to top researchers as they jockey for dominance. Governments, too, are moving to secure their positions. On July 23, the White House released an “AI Action Plan” proposing limits on federal funding to states that impose “burdensome” AI regulations and called for faster construction of AI data centers—critical infrastructure to power these increasingly complex models.

Many researchers believe that technical progress is far outpacing ethical and safety considerations. Hinton’s voice is part of a growing chorus of experts urging greater oversight, transparency, and international cooperation to mitigate the risks AI poses to economies, societies, and even human survival.

In a field that he helped define, Hinton’s warnings cut deep. While others in the tech world continue to tout AI’s potential for productivity and growth, Hinton insists that understanding and controlling these systems should be a higher priority.

The only hope in making sure AI does not turn against humans, Hinton said on the podcast episode, is if “we can figure out a way to make them guaranteed benevolent.”

Palantir Blows Past $1bn Revenue Mark as AI Demand and Trump-Era Contracts Power Growth

Palantir Technologies surged past Wall Street’s expectations in the second quarter of 2025, posting $1 billion in revenue for the first time — a milestone that analysts had not anticipated until late in the year.

The strong performance, propelled by soaring demand for artificial intelligence software and an influx of U.S. government contracts, sent shares up over 4% on Monday and helped push the company’s market capitalization beyond $379 billion, making it one of the 10 most valuable tech firms in the U.S.

The Denver-based company reported adjusted earnings per share of 16 cents, beating the 14 cents that analysts projected. Revenue climbed 48% year-on-year to just over $1 billion, outpacing estimates by roughly $60 million.

“We’re planning to grow our revenue … while decreasing our number of people,” CEO Alex Karp told CNBC, describing the company’s direction as a “crazy, efficient revolution.” Karp said the goal is to reach 10 times the current revenue with just 3,600 employees, compared to the current headcount of 4,100. He stopped short of saying whether job cuts were imminent.

Palantir’s results were driven by a sharp rise in U.S. government and commercial spending. U.S. revenues jumped 68% to $733 million, while U.S. commercial business nearly doubled to $306 million. The company said it closed 66 deals worth at least $5 million and 42 deals worth $10 million or more during the quarter. Total contract value rose 140% to $2.27 billion.

The company credited part of its momentum to President Donald Trump’s push for greater government efficiency, which involved slashing contracts and cutting staff across federal agencies. Those changes, Palantir said, opened up a new lane for its government-facing services, particularly as agencies turn to automation and artificial intelligence to streamline operations. Revenues from U.S. government agencies rose 53% to $426 million.

Palantir also raised its full-year guidance. It now expects revenue between $4.142 billion and $4.150 billion, significantly up from its earlier range of $3.89 billion to $3.90 billion. For the third quarter alone, the company is projecting revenue between $1.083 billion and $1.087 billion, well above analysts’ expectations of $983 million.

Operating income and free cash flow guidance were both lifted as well, signaling strong internal confidence in the company’s near-term profitability.

In a letter to shareholders, Karp described Palantir’s recent climb as “a reflection of the remarkable confluence of the arrival of language models, the chips necessary to power them, and our software infrastructure.” The company has become a standout beneficiary in the AI race, especially amid broader corporate and governmental adoption of large language models and real-time data analytics.

Palantir’s net income more than doubled to $326.7 million, or 13 cents a share, compared to $134.1 million, or 6 cents a share, in the same quarter a year earlier.

Investors have piled into Palantir stock this year on the back of its AI tools and defense contracts. Shares have more than doubled, and the company now commands a valuation higher than tech mainstays like Salesforce, IBM, and Cisco. However, that momentum has come with a steep price tag: shares trade at 276 times forward earnings — a premium exceeded only by Tesla among the top 20 most valuable companies in the U.S.

With market expectations reset and political momentum behind government tech upgrades, Palantir is positioning itself not just as a dominant AI company, but one shaping the backbone of digital governance in Trump’s second term.

Google Cuts AI Data Center Power Use to Ease U.S. Grid Strain, Underlining How Energy Struggles Could Undermine AI Expansion

As the artificial intelligence revolution accelerates, the United States is confronting a growing crisis: not in innovation, but in energy.

Google has reached agreements with Indiana Michigan Power and the Tennessee Valley Authority to scale back data center electricity use during peak periods, underscoring a deepening problem: America’s grid is buckling under the pressure of AI’s rapid expansion.

The deals mark the first time Google has formally committed to curbing power use tied directly to machine learning workloads—arguably the backbone of today’s AI systems. It’s a response to rising concerns that Big Tech’s AI arms race is outpacing the nation’s energy capacity, and the decision signals that even giants like Google now see flexibility, not just speed, as crucial to AI infrastructure.

“We’re participating in demand-response programs to temporarily reduce our electricity usage during grid stress,” the company said. The move, it added, could speed the connection of new data centers to the grid, reduce the need for new power infrastructure, and support grid stability.

According to Reuters, demand-response programs targeting AI workloads in data centers are still new, and details of the commercial arrangements between Google and the utilities were not disclosed.

While such demand-response agreements apply only to a small portion of grid demand, the arrangements might become more common as the U.S. electricity supply tightens.
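Neither Google nor the utilities have published how these curtailments are actually triggered. As a purely hypothetical sketch, a demand-response hook that throttles deferrable machine learning batch jobs during grid stress might look like the following Python; the stress signal, threshold, and scheduler calls are all invented for illustration.

import time

CURTAIL_THRESHOLD = 0.9  # hypothetical fraction of regional grid capacity in use

def read_grid_stress() -> float:
    # Stand-in for a utility-supplied demand-response signal in [0, 1];
    # real programs define their own data feeds and formats.
    return 0.5  # placeholder value

def set_batch_capacity(fraction: float) -> None:
    # Stand-in for a cluster-scheduler call capping deferrable ML batch jobs.
    print(f"batch capacity set to {fraction:.0%}")

def demand_response_loop(poll_seconds: int = 300) -> None:
    while True:
        if read_grid_stress() >= CURTAIL_THRESHOLD:
            # Peak period: shrink deferrable training workloads while
            # leaving latency-sensitive serving untouched.
            set_batch_capacity(0.2)
        else:
            set_batch_capacity(1.0)
        time.sleep(poll_seconds)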

However, the broader implication is that the U.S. power grid, once taken for granted, is emerging as a major bottleneck in the race to dominate AI. Utilities nationwide are overwhelmed with energy requests from data center developers, with demand now eclipsing total available electricity in some areas. That’s raising fears of blackouts, surging bills for ordinary consumers, and in some regions, an outright halt to new power hook-ups.

The development also highlights the strategic gap between the United States and China in the global AI competition. While the U.S. leads in foundational AI models and venture capital, China has invested heavily in energy infrastructure and modernization, giving it a significant edge in sustaining large-scale AI operations. Chinese data centers, often integrated with state-backed renewable energy and supported by aggressive industrial policy, are less constrained by the energy limitations increasingly plaguing American tech hubs.

So far, Washington has struggled to find a cohesive solution to the emerging crisis. Despite a growing consensus that AI innovation must be coupled with sustainable energy investment, the U.S. remains stuck in a policy bind. President Donald Trump’s energy policies—which weakened clean energy mandates and prioritized fossil fuel production—have left the country’s clean tech sector underfunded and underdeveloped.

While Trump continues to enjoy strong support from the energy industry, his opposition to climate-related investment is seen by many as undercutting America’s long-term AI competitiveness.

Solar energy has been touted as a potential answer to the U.S. energy shortfalls. Advocates say it offers the scalability needed to power the future of AI—if only the political will and investment can catch up.

Elon Musk, founder of xAI and long-time solar champion, emphasized the untapped potential of solar-based energy.

“Earth already receives about as much energy from the Sun in an hour as humanity consumes in a year,” Musk said recently. “Solar panels just need to catch a tiny amount of it to power our entire civilization!”

But scaling solar and building the battery storage and transmission to match requires a long-term commitment that the U.S. has so far struggled to maintain. Without a nationwide overhaul of energy priorities, industry insiders warn that AI growth could hit a ceiling far sooner than expected, not because of innovation limits, but because the power simply isn’t there.