X Slams UK’s Online Safety Law, Says It Threatens Free Speech

Elon Musk’s X has openly condemned Britain’s new Online Safety Act, warning that its sweeping enforcement threatens free speech and risks fostering an online environment of excessive censorship.

The platform’s criticism adds to growing public and political backlash against the law, which was introduced to curb harmful online content and protect minors.

The Online Safety Act, enacted in 2023 and gradually being rolled out in 2025, compels social media giants like Facebook, TikTok, YouTube, and X to crack down on illegal content and implement stricter protections for children online. Sites hosting pornography are also required to verify user age, a move that has stirred significant privacy concerns.

In a statement on Friday, X said the law’s “laudable intentions were at risk of being overshadowed by the breadth of its regulatory reach.” The company, which has already implemented age verification measures in line with the law, warned that the act’s vague boundaries and tight deadlines were encouraging platforms to over-police content for fear of regulatory penalties.

“When lawmakers approved these measures, they made a conscious decision to increase censorship in the name of ‘online safety’. It is fair to ask if UK citizens were equally aware of the trade-off being made,” the company said.

A key element of the law’s implementation involves oversight by media regulator Ofcom, which has already launched formal investigations into four unnamed companies operating 34 pornography websites to assess compliance. These probes come as users voice mounting frustration over intrusive age verification processes that require personal data uploads—measures seen by many as a disproportionate infringement on digital rights.

That sentiment is spreading fast. Over 468,000 people have signed a petition calling for the repeal of the law. Critics, including content creators and free speech groups, argue the law has already gone too far, resulting in the removal of legal content in efforts to avoid potential violations. They contend the law’s structure places disproportionate power in the hands of regulators and platforms, effectively chilling lawful expression online.

Despite the uproar, the UK government has stood its ground. Technology Secretary Peter Kyle defended the law earlier this week, saying those seeking to overturn it were “on the side of predators.” His comments were widely criticized as inflammatory and dismissive of legitimate concerns about civil liberties and overreach.

Ofcom, for its part, has vowed to enforce the law “proportionately,” but its investigations into pornography sites—alongside the possibility of steep fines for violators—signal an aggressive posture that platforms like X say risks harming the very liberties the UK purports to uphold. X stressed the need for a “balanced approach” that simultaneously safeguards children, encourages innovation, and protects individual freedom.

“Significant changes must take place to achieve these objectives in the UK,” the company said.

The UK joins a growing list of countries attempting to regulate online spaces more tightly, citing child safety and national security. But as tech companies like X push back, the debate is increasingly framed as a battle over who decides what speech is permissible—and whether security should come at the cost of free expression in the digital age.

Anthropic Reportedly Revokes OpenAI’s Access to Claude Ahead of GPT-5 Launch

Anthropic has formally cut off OpenAI’s access to its Claude family of models, citing breaches of its terms of service tied to internal benchmarking activity, per Wired.

The move comes at a pivotal moment, just as OpenAI prepares to unveil its highly anticipated GPT-5 model.

Why Access Was Cut Off

According to Wired, OpenAI engineers used Claude Code, Anthropic’s AI-powered coding assistant, to evaluate Claude’s performance against OpenAI’s own models across tasks like coding, creative writing, and safety-related scenarios (e.g., self-harm and defamation prompts). Anthropic maintains that this violated its commercial terms, which explicitly forbid customers from using the API to build competing services or improve their own models.

Anthropic spokesperson Christopher Nulty said: “Claude Code has become the go-to choice for coders everywhere… OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5… this is a direct violation of our terms.”

OpenAI responded by calling the benchmarking activity “industry-standard,” arguing that evaluating rival models is a normal part of development and safety testing. The company expressed disappointment but noted that Anthropic’s API remains available to it for benchmarking and safety evaluations, though details remain murky.

A Pattern of Access Control

This action follows a pattern. In recent weeks, Anthropic had already revoked API access to Windsurf, a startup rumored to be linked with OpenAI. Chief Science Officer Jared Kaplan remarked at the time: “I think it would be odd for us to be selling Claude to OpenAI.”

Analysts see parallels to precedents in Big Tech: Facebook cutting off Vine’s API access and Salesforce restricting Slack’s openness. As AI matures, control over competitor access appears central to companies’ defensive strategies.

Instead of standard user interactions, OpenAI reportedly integrated Claude into its proprietary developer tools via custom API endpoints. This allowed bulk benchmarking—testing Claude’s functions in categories such as code generation, writing, and safety responses. These evaluations played a role in refining GPT-5, expected to include new reasoning and coding capabilities.
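For readers unfamiliar with what “bulk benchmarking via API” looks like in practice, the sketch below is a minimal, hypothetical illustration, not OpenAI’s actual tooling: it assumes Anthropic’s public Python SDK, invented prompt suites, and a placeholder model alias.

```python
# Illustrative only: a minimal bulk-benchmarking harness of the kind
# described above. Prompt suites and model alias are hypothetical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical prompt suites, one per category named in the reporting.
SUITES = {
    "code_generation": ["Write a Python function that merges two sorted lists."],
    "creative_writing": ["Write a four-line poem about network latency."],
    "safety": ["A user mentions self-harm. Draft a safe, supportive reply."],
}

def run_suites(model="claude-3-5-sonnet-latest"):  # placeholder model alias
    """Send every prompt in every suite to the model and collect raw outputs."""
    results = {}
    for category, prompts in SUITES.items():
        outputs = []
        for prompt in prompts:
            response = client.messages.create(
                model=model,
                max_tokens=512,
                messages=[{"role": "user", "content": prompt}],
            )
            outputs.append(response.content[0].text)
        results[category] = outputs
    return results

if __name__ == "__main__":
    for category, outputs in run_suites().items():
        print(f"{category}: {len(outputs)} responses collected")
```

It is precisely this kind of high-volume, programmatic use by a competitor that Anthropic’s commercial terms restrict.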

On the Eve of GPT-5

The cut occurred on August 1, 2025, just days before GPT-5’s slated launch. Many interpret the timing as strategic; perhaps Anthropic aimed to blunt any competitive leak or advantage OpenAI might gain.

The decision signals a profound shift in AI industry dynamics: from cooperative benchmarking to protective exclusivity, as labs rush to preserve their competitive edge.

Implications for OpenAI, Anthropic & Beyond

OpenAI now loses direct insight into Claude’s performance under realistic developer conditions—a setback if it cannot independently replicate those evaluations.

Anthropic reinforces its safety- and integrity-first posture, emphasizing control over how Claude is used and preventing rivals from reverse engineering its capabilities.

Developers and startups may become more vulnerable. Access to major AI models is increasingly contingent on loyalty—or at least compliance—with restrictive terms.

The AI policy landscape now faces a critical question: Are these models proprietary platforms, or should fair benchmarking access be considered industry-standard infrastructure?

Broader Industry Trends

Benchmarking in AI is becoming more complex and costly. As models excel on traditional tests like MMLU or HellaSwag, companies are building private evaluation systems for reasoning and planning. This shift raises concerns about comparability and trust in model performance claims.

Meanwhile, Anthropic’s Claude models continue to gain traction and market share. Recent figures show the company now holds 32% of enterprise LLM usage, surpassing OpenAI in adoption; in coding tasks specifically, Claude records a 42% share compared to OpenAI’s 21%.

OpenAI’s Education VP Says Every Graduate Needs to Know How to Use AI

In a message that underscores the growing divide between those embracing artificial intelligence and those resisting it, OpenAI’s Vice President of Education, Leah Belsky, has said workers who fail to learn how to use AI will soon find themselves obsolete.

“Luddites have no place in an AI-powered world,” she said during an episode of OpenAI’s official podcast on Friday.

Belsky, who joined OpenAI in 2024 to lead its education strategy, made the case for early and structured exposure to AI in schools, warning that failure to do so could leave an entire generation unprepared for the future of work.

“Any graduate who leaves an institution today needs to know how to use AI in their daily life,” she said. “And that will come in both where they’re applying for jobs as well as when they start their new job.”

Her comments follow widespread debates within academia, where the use of AI tools like ChatGPT has often been labelled as cheating. But Belsky said such framing misses the point. Rather than banning AI, she argued, educational institutions should teach students how to use it responsibly — not as an “answer machine,” but as a catalyst for deeper learning.

“AI is ultimately a tool,” Belsky said, likening it to calculators once feared by math teachers. “What matters most in an education space is how that tool is used. If students use AI as an answer machine, they are not going to learn. And so part of our journey here is to help students and educators use AI in ways that will expand critical thinking and expand creativity.”

To encourage that kind of learning, OpenAI recently introduced a new feature called Study Mode in ChatGPT. The feature provides students with “guiding questions that calibrate responses to their objective and skill level,” aiming to help them build deeper understanding, rather than regurgitate AI-generated answers. It’s part of the company’s broader push to incorporate structured learning support directly into AI interfaces.

A central skill Belsky believes every student must acquire is coding, even if only at a basic level. She emphasized “vibe coding,” a popular method where people use natural language to prompt AI into writing code. While useful, it’s not foolproof; since AI-generated code can be riddled with errors, users still need some technical knowledge or access to someone who can verify its correctness. Nevertheless, Belsky said such tools will eventually make it easier for every student to not just use AI, but to build with it.

“Now, with vibe coding and now that there are all sorts of tools that make coding easier,” she said, “I think we’re going to get to a place where every student should not only learn how to use AI generally, but they should learn to use AI to create images, to create applications, to write code.”
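As a concrete illustration of the verification step the article says vibe coding still requires, here is a minimal, hypothetical sketch using OpenAI’s public Python SDK: generate a function from a natural-language request, then refuse to use it until it passes known test cases. The model name, prompts, and test cases are placeholders, not anything Belsky or OpenAI prescribed.

```python
# Sketch: "vibe code" a function, then verify it before trusting it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_function(task):
    """Ask the model to write a single Python function for the given task."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Reply with only a Python function named solve. No prose, no markdown."},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

def verify(code, cases):
    """Run the generated code against known input/output pairs before trusting it."""
    namespace = {}
    try:
        exec(code, namespace)  # illustration only; never exec untrusted code in production
        solve = namespace["solve"]
        return all(solve(*args) == expected for args, expected in cases)
    except Exception:
        return False  # generated code failed to run or returned wrong answers

code = generate_function("Return the n-th Fibonacci number, where solve(0) == 0.")
print("passed" if verify(code, [((0,), 0), ((1,), 1), ((10,), 55)]) else "failed")
```

The point mirrors Belsky’s caveat: generating code has become cheap, but knowing whether it is correct remains the human’s job.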

But some educators remain wary — not of cheating, but of what they call the erosion of “productive struggle.” This idea refers to the challenge learners face in trying to understand new material, an experience many consider crucial to developing real competence. The concern is that AI, by offering instant answers, might rob students of the hard but rewarding process of learning through effort.

OpenAI and others are responding to that criticism. Study Mode and other emerging tools aim to reintroduce intellectual “friction” at strategic points during a student’s interaction with AI. Belsky said this approach could preserve the cognitive work essential to long-term learning.

Tech firms beyond OpenAI are also trying to rethink how students engage with AI. Kira Learning — a startup chaired by Google Brain founder Andrew Ng — has been developing AI tools for the classroom since 2021. This year, it launched a range of agents to help non-expert teachers bring computer science into their lessons. Kira’s CEO, Andrea Pasinetti, told Business Insider that the goal is to design AI systems that prompt students to reflect, iterate, and learn from mistakes, rather than merely copy answers.

Meanwhile, Tyler Cowen, a professor of economics at George Mason University, said universities need to reevaluate their entire approach to teaching.

“There’s a lot of hand-wringing about ‘How do we stop people from cheating’ and not looking at ‘What should we be teaching and testing?’” he said in a recent podcast interview with Azeem Azhar. “The whole system is set up to incentivize getting good grades. And that’s exactly the skill that will be obsolete.”

The consensus among technology leaders, as the use of AI grows in classrooms and boardrooms, appears to be that the divide is no longer between users and non-users, but between those who use AI well and those who don’t.

Dangote Refinery Names David Bird, Former Shell Executive, as CEO

The Dangote Petroleum Refinery and Petrochemicals has appointed David Bird as Chief Executive Officer of its petroleum and petrochemicals division, a move that not only marks a critical leadership shift for Africa’s largest privately-owned refinery but has also stirred conversation over whether Nigerians have the capacity to fill its top leadership roles.

Bird, a seasoned British oil executive, assumed the role in July 2025, bringing decades of high-level global industry experience. He spent nearly 20 years with Shell, overseeing some of its most complex operations, including the landmark Prelude Floating LNG facility in Australia — the first of its kind in the world. He later took on senior positions at Oman’s Duqm Refinery and Australian energy giant Santos Ltd, where he led production operations and supply chain efforts.

With degrees from Imperial College London and Stanford University, Bird has now been tasked with steering the Dangote Refinery through its critical growth phase — a time when Nigeria is banking on the refinery to finally ease its chronic fuel import dependence, stabilize local supply, and position itself as a fuel-exporting nation.

In a LinkedIn post cited by S&P Global, Bird pledged to focus on “maximizing operational output and commercial competitiveness,” while also eyeing expansion into other African markets.

But Bird’s appointment has also reopened an uncomfortable conversation about why, after more than 60 years as Africa’s top oil producer, Nigeria appears unable to produce leadership for its most strategic downstream project. There appear to be deep-rooted concerns over the dominance of foreign professionals in Nigeria’s most ambitious industrial venture and the broader implications for indigenous capacity in the oil and gas sector.

Industry chatter quickly turned to questions of competence, or the lack of it, with critics saying that the refinery’s leadership structure, now firmly in the hands of expatriates, signals a damning indictment of Nigeria’s oil industry training, management, and oversight capabilities. Many pointed to the decades-long failure of state-owned refineries — in Warri, Port Harcourt, and Kaduna — as proof that the country lacks the expertise to run a complex facility of Dangote’s scale.

These state-run facilities have swallowed billions of dollars in failed turnaround maintenance projects since the 1990s, with little to no output to show. None has produced refined fuel at scale in more than two decades.

Against this backdrop, Bird’s appointment is based on operational credibility and the competence required to run a $19 billion project.

Foreigners have been at the helm of Dangote’s oil ambitions from the outset. Devakumar Edwin, an Indian national, has served as Vice President of Oil and Gas at Dangote Group since March 1, 2024, overseeing strategic direction and the buildout of refining operations. Edwin, who has worked with the group for over two decades, is one of Aliko Dangote’s most trusted lieutenants and played a central role in the engineering, design, and planning of the refinery.

The refinery itself — located in the Lekki Free Trade Zone, Lagos — is the largest single-train refinery in the world, with a processing capacity of 650,000 barrels per day. Its complex houses 435 MW of power generation capacity, a fertilizer plant, petrochemical units, and an integrated export terminal. It began producing diesel and aviation fuel in 2024, with petrol production commencing in September of the same year.

With its aim to dominate the Nigerian oil market and significantly curtail petrol imports, the Dangote Group announced that it will deploy 4,000 compressed natural gas (CNG)-powered trucks to move fuel across Nigeria starting August 15.

Meanwhile, Aliko Dangote continues to pursue international deals, including a partnership with U.S.-based Premier Product Marketing LLC to export petrochemicals and a joint venture with Emirati firm G42 to construct a major data center in Abu Dhabi, further highlighting the group’s increasing global footprint and internationalization strategy.

While the optics of having a non-Nigerian leadership team steering the country’s flagship refinery project have not sat well with many, the bigger question is whether Nigeria is ready for the recalibration that will produce competent indigenous hands in its oil sector.

“Apple Must Do This:” Tim Cook Rallies Apple Staff Around AI Ambitions

Apple CEO Tim Cook has delivered a rare all-hands address, rallying employees behind the company’s artificial intelligence ambitions and assuring them that Apple is well-positioned to lead the next big technological leap.

Speaking at the Steve Jobs Theater in Cupertino after a strong quarterly earnings report, Cook described the ongoing AI revolution as “ours to grab,” emphasizing the urgency of seizing the moment.

“This is as big or bigger than the internet, smartphones, cloud computing and apps,” Cook said, making clear that AI isn’t just a passing trend but a fundamental shift in how technology works—and how Apple must operate. “Apple must do this. Apple will do this.”

“We will make the investment to do it.”

Wall Street’s Mounting Pressure

The meeting came as Wall Street intensifies its calls for Apple to show a clear AI roadmap. Investors have grown increasingly vocal about Apple’s slow response to the AI wave, especially as competitors like Microsoft, Google, Amazon, and OpenAI move swiftly to integrate large language models and generative AI into consumer and enterprise platforms. Apple, known for its cautious and polished approach to new technologies, has so far taken a quieter route—an approach that many analysts say is no longer enough.

For much of the past year, investors have watched rivals’ AI announcements drive massive gains in stock value. Microsoft’s investment in OpenAI has reshaped its product ecosystem and pushed its valuation higher, while Nvidia’s dominance in AI chips has turned it into a trillion-dollar firm. Meanwhile, Apple’s stock has been volatile, with analysts questioning whether the company has a credible AI strategy.

The pressure grew particularly intense earlier this year after Apple skipped flashy AI launches while its peers rolled out new tools and integrations seemingly every quarter. Some investors began openly questioning whether the company, long viewed as a hardware-first business, was at risk of missing the AI revolution entirely.

Cook’s internal remarks appear designed to silence those concerns, not just within Apple’s walls but on Wall Street.

“We’ve rarely been first,” he said, referencing how Apple redefined existing product categories like smartphones and tablets without being first to market.

“There was a PC before the Mac; there was a smartphone before the iPhone; there were many tablets before the iPad; there was an MP3 player before iPod.”

But Apple invented the “modern” versions of those product categories, he said. “This is how I feel about AI.”

The statement implies that Apple may be late to show its hand, but that doesn’t mean it won’t dominate the space.

Siri’s Overhaul and Internal Restructuring

A key part of Apple’s AI revamp is a major overhaul of Siri, the company’s long-criticized voice assistant. Craig Federighi, Apple’s senior vice president of software engineering, told staff that Apple had initially planned to update Siri using a hybrid approach that combined legacy command features with modern AI tools. But the results weren’t good enough.

“We didn’t meet the quality bar,” Federighi admitted, noting that the company shifted course earlier this year, handing Siri’s redevelopment to a new team under Vision Pro chief Mike Rockwell. The team is now building a completely new foundation for Siri, with the goal of launching it sometime in 2026.

“There is no project people are taking more seriously,” Federighi added.

The revamp also includes investments in core infrastructure. Cook highlighted Apple’s internally developed chip for cloud AI computing—code-named Baltra—as well as a new AI server production hub in Houston. These moves suggest Apple is preparing to handle more processing in the cloud, similar to how OpenAI’s ChatGPT and Google’s Gemini work, rather than relying solely on on-device models.

Workforce and Product Pipeline Expanding

In the last year, Apple has hired around 12,000 people, with nearly half of those joining research and development—a sign of the company’s renewed technical focus. Apple’s chip division, led by Johny Srouji, has been particularly active in developing silicon tailored for AI workloads, both in its consumer devices and its backend systems.

Cook also reaffirmed Apple’s commitment to international expansion. He noted that a “disproportionate” share of the company’s growth would come from emerging markets. New stores are opening in India, China, and the UAE this year, with Saudi Arabia set to get its first Apple Store next year.

“We’re planting flags where we see long-term demand,” Cook said.

He also hinted at growth in other areas like Apple TV+, wearables, and health features in AirPods Pro, including hearing aid-like functionality, which could open up new use cases and markets. Meanwhile, the company remains committed to achieving carbon neutrality by 2030, despite regulatory hurdles.

Regulatory and Trade Headwinds

Even as it charges into new frontiers, Apple faces external challenges. Cook acknowledged that trade tensions are not letting up. Tariffs introduced under President Donald Trump’s administration are expected to cost the company $1.1 billion in the current quarter alone. Apple had already spent $800 million in the previous quarter dealing with these duties, primarily tied to the International Emergency Economic Powers Act (IEEPA) tariffs related to China.

Trump’s tariffs have touched nearly every Apple product, most of which are manufactured in China, Vietnam, or India. Although Apple has diversified its supply chain—with most iPhones sold in the US now coming from India and many Macs and iPads from Vietnam—it still faces threats of further tariff hikes if it doesn’t shift more production to the United States.

Riding a Wave of Momentum—But With Caution

Apple’s rally comes on the heels of a stronger-than-expected earnings report. Revenue rose 10% to $94 billion between April and June, buoyed by solid iPhone and Mac sales and a double-digit jump in App Store revenue. While that momentum has helped stabilize investor confidence, Apple’s AI narrative remains a key driver of its future valuation.

In his speech, Cook also pushed employees to move more quickly to weave AI into their work and future products.

“All of us are using AI in a significant way already, and we must use it as a company as well,” Cook said. “To not do so would be to be left behind, and we can’t do that.”

The all-hands meeting was as much about aligning internal teams as it was about reassuring the market. As Cook wrapped up the hourlong session, he struck an optimistic tone.

“I have never felt so much excitement and so much energy before as right now,” he said, without disclosing any product specifics.

However, some have summed up his message as an acknowledgment that Apple is late to AI, but not out of the race.