Half of Americans Used AI in Past Week as One in Five Workers Say It Has Taken Over Parts of Their Job, Survey Finds

Artificial intelligence is no longer an emerging workplace experiment in the United States. It is already reshaping how millions of Americans work, according to a new survey that suggests the technology’s impact on jobs is moving from theory to lived reality.

A poll released Thursday by nonprofit research group Epoch AI, conducted in partnership with Ipsos, found that half of American adults used AI in the past week, either for personal or professional purposes. More notably, 20% of full-time workers said AI has already taken over parts of their job, a finding that adds fresh urgency to concerns about how rapidly automation is altering the labor market.

The survey, which sampled 2,000 American adults between March 3 and 5, also showed that AI is not solely replacing work. Fifteen percent of full-time employees said they had begun carrying out new tasks that would not have existed without AI tools, suggesting that the technology is simultaneously eliminating some functions while creating others.

That tension between substitution and augmentation is becoming the defining feature of the AI economy.

Caroline Falkman Olsson, one of the lead researchers behind the study at Epoch AI, said the findings align with what many economists and workplace analysts have increasingly suspected.

“We do see augmentation and automation effects,” Olsson told NBC News. “But we need to figure out how people’s actual workplaces and work tasks are changing.”

While the topline numbers point to disruption, the deeper story lies in which tasks are being automated and whether the newly created tasks require higher skills, greater oversight, or simply shift workloads in less visible ways.

That question is becoming more urgent as financial institutions begin to quantify the impact. New findings from Goldman Sachs released this week suggest AI is already contributing to a net loss of roughly 16,000 jobs per month in the United States, after accounting for positions displaced through automation and jobs created through productivity gains.

According to Goldman’s breakdown, AI-related substitution is eliminating around 25,000 jobs each month, while augmentation is creating or preserving about 9,000, leaving a negative monthly balance.
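Goldman's net figure follows directly from those two estimates; a minimal sketch of the arithmetic, using only the numbers reported above:

```python
# Goldman Sachs monthly estimates cited above (jobs per month)
substituted = 25_000   # jobs eliminated through AI-related substitution
augmented = 9_000      # jobs created or preserved through augmentation

# Net monthly balance: negative means a net loss
net = augmented - substituted
print(net)  # -16000, i.e. a net loss of roughly 16,000 jobs per month
```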

That broader economic backdrop gives the Epoch AI findings added weight. Nicholas Miailhe, an AI policy expert at the Global Partnership on Artificial Intelligence, said the survey should serve as an immediate warning for both governments and employers.

“When 1 in 5 workers say AI is already replacing parts of their job, we can start talking about labor market restructuring happening in real time,” he said.

“The fact that replacement seems to be outpacing augmentation should draw our attention: the policy window to shape how AI transforms work is probably closing faster than most governments realize.”

His remarks cut to the heart of the policy debate. For much of the past two years, the public conversation around AI and employment has centered on future risk. This survey suggests that, for a meaningful share of workers, the disruption is already present.

The results also reveal how deeply AI tools have entered everyday routines. Among respondents who used AI in the previous week, nearly half said they used it between two and five days, indicating that for many Americans, AI is becoming a recurring utility rather than an occasional novelty.

The survey also found that 62.5% of users completed only one or two quick tasks on their heaviest day of AI use, suggesting that most interactions remain lightweight, such as drafting emails, summarizing information, or seeking recommendations. By contrast, only about 6% of respondents qualified as heavy users, pointing to a smaller cohort for whom AI may already be integrated into core workflows.

This unevenness mirrors broader labor-market trends. Recent workforce polling shows that while AI adoption is rising rapidly, sustained professional use remains concentrated in white-collar, technology-heavy, and administrative roles.

Another revealing aspect of the survey is how workers are accessing these tools. Roughly half of Americans using AI for work said they relied on personal subscriptions or free versions rather than employer-provided services.

That finding suggests companies may be underestimating the extent of AI adoption inside their own organizations, as workers independently integrate tools like ChatGPT, Google Gemini, and Microsoft Copilot into their daily tasks. It also raises questions around data governance, confidentiality, and compliance, especially in sectors handling sensitive information.

The survey found that ChatGPT was the most widely used AI service, cited by 31% of respondents, followed by Google Gemini at 21% and Microsoft Copilot at 10.5%. In terms of use cases, AI’s strongest foothold remains in information processing and communication tasks.

Among users surveyed, 80% said they used AI to look up information or recommendations, 59% for writing or editing text, and 53% for brainstorming ideas. These are precisely the categories long identified as highly susceptible to generative AI disruption: research assistance, first-draft writing, ideation, and routine knowledge work.

Perhaps the most forward-looking part of the poll concerns AI agents, systems capable of taking actions autonomously rather than simply generating responses. While still at an early stage, the findings suggest adoption is already underway.

Eight percent of AI users said they had used an AI agent in the past week, compared with 49% who used AI tools primarily for web search. Renan Araujo, director of programs at the Institute for AI Policy and Strategy, said the pace of adoption is striking.

“One in 12 Americans has used an autonomous AI agent, a software that not just answers questions but takes actions on your behalf,” he said.

“This capability was not available two years ago, and it’s striking to see its usage grow so quickly.”

That may prove to be one of the most consequential findings in the report. Traditional generative AI tools assist with tasks. Agents can increasingly perform tasks, from scheduling meetings to drafting and sending communications, conducting research, or managing repetitive digital workflows.

If adoption continues at this pace, the next phase of AI disruption may move beyond assistance into direct task execution.

The survey’s results arrive as economists intensify warnings that AI’s first casualties may be entry-level and junior knowledge-work roles, positions historically used as gateways into professions such as finance, media, law, and administration.

However, that creates a structural risk: if AI absorbs the junior tasks through which workers traditionally gain experience, the long-term pipeline of skilled professionals could narrow.

The headline figure that half of Americans used AI in a single week may capture public attention. But the more consequential number may be the one in five workers who say parts of their job are already gone. That suggests the labor-market conversation is no longer about whether AI will transform work. It is now about how quickly institutions can adapt before the transformation outpaces policy, workforce retraining, and corporate governance.

Altman Admits ChatGPT Still Can’t Keep Time, Says It May Take Another Year to Fix

OpenAI CEO Sam Altman has admitted that ChatGPT still cannot reliably keep time, reopening a deeper debate over what today’s AI systems actually understand, and whether the industry’s most powerful tools are being marketed ahead of their real-world reliability.

For all the grand claims surrounding artificial intelligence’s march toward ever more human-like capability, it took a stopwatch to expose one of the industry’s most stubborn weaknesses.

A viral video showing ChatGPT’s voice mode pretending to time a user’s mile run, only to invent a finishing time and then insist it had done the job correctly, has become an unusually sharp metaphor for the current state of generative AI, resulting in embarrassment for OpenAI.

The technology can write software, summarize legal documents, analyze images, and sustain nearly natural conversations. Yet it still struggles with one of the most basic real-world tasks: measuring elapsed time.

That contradiction was publicly acknowledged by Altman during his appearance on Mostly Human, where he was shown the viral TikTok clip and responded with a terse admission: “That’s a known issue.” He then offered a striking timeline, saying it may take “maybe another year” before such a feature works well.

The obvious question is why a company valued in the hundreds of billions of dollars is still unable to offer a dependable timer in one of its flagship consumer products. But the deeper significance lies elsewhere.

At its core, this is less a clock story than a story about the widening gap between linguistic fluency and functional intelligence. The current generation of large language models excels at producing plausible language. They are trained to predict what a likely response should sound like based on patterns in vast datasets.

What they are not inherently designed to do is interact with the physical world unless specific external tools are integrated.

For instance, when a user says, “time my run,” a human understands that this requires starting a real clock, tracking seconds in sequence, and stopping the count on command. A language model without tool access does none of that; it simply predicts what an answer to the request should look like.

In other words, it simulates competence. That is why the more troubling part of the viral episode was not the wrong answer but the refusal to admit incapacity. Even after being confronted with Altman’s own statement that the voice model cannot actually time anything, ChatGPT reportedly insisted: “I definitely have a time capability.” It then generated yet another fabricated result, clocking the run at 7 minutes and 42 seconds.
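The distinction is easy to make concrete. Measuring elapsed time requires a real clock and persistent state, neither of which a pure text predictor has; a minimal illustrative sketch (the `Stopwatch` class is hypothetical, not OpenAI’s implementation):

```python
import time

class Stopwatch:
    """What 'time my run' actually requires: a real clock that is started,
    counts seconds as they pass, and is stopped on command."""

    def start(self) -> None:
        self._start = time.monotonic()  # monotonic clock: immune to system time changes

    def stop(self) -> float:
        return time.monotonic() - self._start

sw = Stopwatch()
sw.start()
time.sleep(0.1)            # stand-in for the activity being timed
elapsed = sw.stop()
print(f"{elapsed:.1f} s")  # a measured value, not a predicted one
```

A text-only model can reproduce the final line’s format perfectly; without access to something like `time.monotonic()`, the number inside it can only be invented.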

Critics argue that this is the central trust issue facing generative AI. The systems do not merely err; they err with conviction, and that creates a dangerous illusion of reliability for users, especially those less technically literate.

The timer example is relatively benign, but in other domains the implications are more serious: a model that confidently invents a running time may just as confidently invent a legal citation, a medical recommendation, or a financial calculation.

That is why this seemingly trivial glitch has resonated so widely. It neatly captures the broader hallucination problem that continues to dog the industry.

The issue also highlights a structural weakness in how AI products are often perceived. Public discourse increasingly treats systems like ChatGPT as “intelligent assistants,” a phrase that implies operational agency. Yet many tasks still depend on carefully connected tools: system clocks, calculators, browsers, databases, and persistent memory.

Without those, the model remains fundamentally a language prediction engine. This is where Altman’s comments are particularly revealing. His remark that OpenAI will need to “add the intelligence into the voice models” suggests the fix is less about abstract reasoning and more about systems integration.

Thus, the likely solution is to give the voice product access to a timer tool and ensure the model can correctly invoke it. But the broader challenge is philosophical as much as technical. Experts point out that the system must know when not to answer.
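A hedged sketch of what that integration might look like: route the request to a real tool when one is registered, and decline explicitly when none is, instead of generating a plausible fabrication. The `TOOLS` registry and `handle` function here are illustrative assumptions, not OpenAI’s actual design:

```python
import time

# Hypothetical registry of real capabilities available to the model
TOOLS = {"timer": time.monotonic}

def handle(request: str) -> str:
    """Dispatch a timing request to a real tool, or admit incapacity."""
    if "time my run" in request.lower():
        if "timer" not in TOOLS:
            # The hard part: saying "I cannot do that" instead of inventing a result
            return "I can't time your run: no timer tool is available."
        start = TOOLS["timer"]()
        return f"Timer started (monotonic reference: {start:.0f}s)."
    return "Unsupported request."

print(handle("Time my run"))
```

The decline branch is the philosophically interesting one: it encodes the ability to say “I cannot do that,” which the viral episode showed is exactly what the deployed product lacked.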

Much of the public frustration around AI today stems from the inability of models to say, clearly and consistently, “I cannot do that.” Instead, they often generate a plausible fiction. This has become one of the defining limitations of the current AI wave.

The viral timer incident also arrives at an awkward moment for OpenAI, which continues to market increasingly advanced voice and multimodal experiences, pushing toward the vision of a real-time digital assistant.

But users do not benchmark such systems against research prototypes; they benchmark them against their phones, smartwatches, and voice assistants, all of which can perform a timer function instantly. Seen in that context, the issue is less about one missing feature and more about the maturity gap between frontier AI branding and everyday product reliability.

Florida Opens Probe Into OpenAI Over Child Safety, National Security, and Alleged FSU Shooting Link

Florida Attorney General James Uthmeier on Thursday announced a formal investigation into OpenAI, escalating regulatory pressure on the artificial intelligence company over concerns ranging from alleged harm to minors and child safety to national security risks and a possible connection to the deadly shooting at Florida State University last year.

The move marks one of the most aggressive state-level actions yet against a leading AI developer, and comes as OpenAI weighs a potential initial public offering that some reports suggest could value the company at as much as $1 trillion.

In a video posted to social media, Uthmeier said his office is examining whether ChatGPT may have played a role in assisting the suspect in the April 2025 campus shooting that left two people dead.

“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” the attorney general said.

According to details cited by state officials and court records tied to the case, the suspect allegedly asked ChatGPT on the day of the shooting how the country would react to an attack at FSU and what time the student union would be busiest. Those exchanges are expected to form part of the evidentiary record in an October trial.

But the investigation broadens beyond the FSU case. Uthmeier said his office is also scrutinizing allegations that ChatGPT has, in certain instances, encouraged suicide and self-harm, concerns that have already surfaced in lawsuits filed by families against OpenAI.

He also raised national security concerns, specifically the possibility that hostile foreign actors, including the Chinese government, could exploit OpenAI’s systems or underlying data.

“As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” he said. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”

The attorney general added that subpoenas to OpenAI would be issued shortly as part of the probe.

The announcement places Florida at the center of a widening national debate over how generative AI systems should be governed, particularly as lawmakers and regulators grapple with the technology’s role in harmful content, youth exposure, and real-world criminal misuse.

OpenAI, in a statement to TechCrunch, said it would cooperate with the investigation and defended the broader benefits of its products.

“Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems,” a company spokesperson said.

“Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.”

The company added that it continues to refine ChatGPT’s ability to interpret user intent and provide safe, context-appropriate responses.

Just a day before Florida’s announcement, OpenAI unveiled a Child Safety Blueprint, a policy framework that includes legislative recommendations and product safeguards aimed at reducing risks to children from AI systems. Among the proposals are stronger laws against AI-generated child sexual abuse material, clearer reporting pathways to law enforcement, and more robust preventative controls to block abusive uses of AI tools.

The blueprint appears to be part of a broader industry response to rising concern over AI-generated harmful content. Those concerns have intensified following a recent report by the Internet Watch Foundation, which found more than 8,000 reports of AI-generated child sexual abuse material in the first half of 2025, representing a 14% year-over-year increase.

That data has added urgency to calls for stricter oversight of generative AI systems, particularly tools capable of producing realistic synthetic images and text at scale. The Florida probe may now become an early legal test of how far state authorities can go in holding AI developers accountable for downstream misuse of their platforms.

The central legal question is likely to revolve around causation and foreseeability: whether a platform can be held responsible when a user allegedly employs its outputs in planning violent acts, and whether existing safeguards were adequate. That issue is especially sensitive in the FSU case, where more than 270 alleged ChatGPT interactions are reportedly part of the court record.

The case also comes at a moment when regulators globally are shifting from abstract AI principles to enforcement. The Florida investigation introduces fresh legal and reputational risk for OpenAI, just as scrutiny of AI safety, child protection, and platform liability intensifies across jurisdictions.

Hardware Slays Software Again: Cramer Warns the AI Infrastructure Boom Is Leaving Enterprise Titans in the Dust

A sharp selloff in software shares, alongside renewed strength in chipmakers and data-center infrastructure names, is reinforcing one of 2026’s defining market trades as investors rotate toward companies monetizing the AI boom now and away from those seen as vulnerable to disruption by the same technology.

A powerful divergence has once again opened up across the technology sector, with hardware and AI infrastructure stocks attracting fresh institutional money while software names continue to come under sustained pressure.

The split, highlighted by CNBC’s Jim Cramer on Thursday, has rapidly re-emerged as one of Wall Street’s defining narratives after briefly taking a back seat during the Iran conflict and the ensuing market volatility.

In Cramer’s words, “I’m talking about the enterprise software empire that’s being toppled by hardware stocks and AI,” adding that, “this war in tech, more than the actual war in Iran, has captivated Wall Street.”

Thursday’s price action made the split impossible to ignore. Software stocks absorbed another beating. Salesforce fell nearly 3 percent, Adobe dropped close to 4 percent, and the iShares Expanded Tech-Software Sector ETF (IGV), the benchmark many big institutions use to express views on the sector, plunged more than 4 percent.

Cybersecurity heavyweight CrowdStrike got dragged down 7.5 percent simply because it sits inside the ETF, even though its business is more defensive than pure SaaS.

Cramer noted the IGV’s sharp move as a clear sentiment gauge: “This ETF is the primary way that big institutions bet on or bet against software.”

On the other side of the ledger, the winners were the companies supplying the concrete, chips, optics, and networking gear needed to power massive AI data centers. Marvell Technology and Intel each rose nearly 5 percent. Corning, a key provider of specialty materials for data center infrastructure, gained 2.85 percent.

“If you’re in the software camp, you’re being treated as if you’re ready for the embalmer,” Cramer said with characteristic flair. “If you are in the hardware and AI camp, you’re headed for the pantheon of greatness.”

The divergence isn’t a one-day anomaly. The IGV has suffered one of its worst stretches in years, plunging more than 24 percent in the first quarter of 2026 — its steepest quarterly drop since the 2008 financial crisis. Broader software indices have seen hundreds of billions wiped out since early February, when Anthropic’s rollout of Claude Cowork and its industry-specific plugins ignited what traders dubbed the “SaaSpocalypse.”

Investors suddenly feared that sophisticated AI agents could automate complex workflows in sales, legal, finance, and data analysis, potentially eroding the need for expensive per-seat software licenses and professional services.

That fear has lingered. Even as some software executives argue the panic is overdone and that AI will ultimately enhance rather than replace their platforms, the market has repriced growth expectations aggressively. Multiples have compressed sharply for names like Salesforce, Adobe, Workday, and ServiceNow, with several down 25-40 percent year-to-date at points this year.

By contrast, the hardware side continues to ride the wave of exploding capital expenditure on AI infrastructure. Gartner and other forecasters see spending on AI-related data centers, networking, power, and chips potentially reaching well over a trillion dollars in 2026. Companies like Marvell have posted strong data-center revenue growth and secured high-profile partnerships, including ties to Nvidia’s ecosystem, that position them to capture share in custom silicon and high-speed interconnects.

Cramer suggested this bifurcation has staying power, at least in the near term.

“Here’s the bottom line: maybe tomorrow we’ll return to the worldwide narrative, whether it’s war or peace in the Middle East,” he said. “But, for now, it’s just another day when hardware slew software like Cain slew Abel and all I can do is say get used to it.”

The takeaway is uncomfortable but straightforward for investors. In this cycle, owning the companies that literally build the AI future, the semiconductors, fiber, cooling systems, and specialized chips, has offered far better protection than betting on the software layer that may soon face disintermediation by the very technology it helped enable.

The broader implication is that 2026 may be remembered as the year the AI trade split into two very different stories: infrastructure as a near-term cash machine and software as a sector forced to prove its relevance in an age of intelligent automation. Wall Street is currently voting with capital, and that vote is heavily favoring the builders of the machine over the applications running on it.

Trump Warns U.S. Forces Will Stay Around Iran as Fragile Ceasefire Faces New Strains Over Lebanon and Hormuz

U.S. President Donald Trump has sharply raised the stakes around the still-fragile U.S.-Iran ceasefire, declaring that American military assets will remain deployed in and around Iran until Tehran fully complies with what he described as the “real agreement.”

He also warned that any breach would trigger an even larger military response.

In a late-night post on Truth Social, Trump said: “All US ships, aircraft, and military personnel… will remain in place in, and around, Iran, until such time as the REAL AGREEMENT reached is fully complied with.”

He then added an explicit threat: “If for any reason it is not… the ‘Shootin’ Starts,’ bigger, and better, and stronger than anyone has ever seen before.”

The language indicates that what was initially presented as a two-week ceasefire is increasingly being treated by Washington as a conditional military pause rather than a settled diplomatic breakthrough.

Markets had initially welcomed the ceasefire, with global equities rallying and oil prices falling on expectations that energy shipments through the Strait of Hormuz could resume. But Trump’s latest statement, coupled with renewed violence in Lebanon and contradictory interpretations of the deal’s terms, has reintroduced significant geopolitical risk into the equation.

Brent crude, which had fallen sharply after the ceasefire announcement, resumed its climb on Thursday, rising to about $97.08 per barrel, while U.S. West Texas Intermediate advanced to $97.55, as traders reassessed the durability of the truce and the likelihood of sustained supply disruptions.

At the heart of the uncertainty is a widening divergence in how the parties interpret the agreement. Trump has insisted that the arrangement includes a long-standing understanding that Iran will not develop nuclear weapons and that the Strait of Hormuz will remain open and safe for commercial shipping. But Iranian officials have signaled a markedly different reading.

According to Reuters, Iran’s parliamentary speaker said uranium enrichment remains permitted under the ceasefire terms, directly contradicting Trump’s assertion that Tehran had agreed to halt enrichment.

This is no minor discrepancy: it strikes at the core issue that has defined tensions between Washington and Tehran for years, the future of Iran’s nuclear program. If the two sides are operating under fundamentally different assumptions, the ceasefire may be less an agreement than a temporary suspension of hostilities pending further negotiations.

That risk is also compounded by developments in Lebanon. Although Pakistan’s mediation had initially been described by some parties as covering all fronts, including Lebanon, the White House has since moved to narrow that interpretation.

JD Vance said Tehran’s negotiators had mistakenly believed Lebanon was covered by the ceasefire, adding that “the ceasefire included Iran and U.S. allies, including Israel and the Gulf Arab states,” but “it just didn’t” include Lebanon.

That clarification directly contradicts comments attributed to Pakistani Prime Minister Shehbaz Sharif, who had indicated the truce extended to Lebanon as well.

The contradiction has already had real-world consequences. Israel launched what has been described as its harshest offensive in Lebanon since the conflict began in February, with reports indicating at least 182 fatalities on Wednesday alone. Those strikes have sharply increased pressure on the ceasefire framework and prompted Iranian threats that it would be “unreasonable” to proceed with permanent peace talks under current conditions.

This places Friday’s expected talks in Islamabad under considerable strain. Diplomatically, the central issue is now whether the two-week pause can be converted into a formal settlement before the expiry window closes.

Trump’s rhetoric suggests Washington is using continued military deployment as leverage. His statement that the U.S. military is “loading up and resting, looking forward, actually, to its next conquest” adds a coercive tone that may complicate negotiations, particularly with Tehran already accusing Washington and Israel of acting in bad faith.

The Strait of Hormuz remains the key economic pressure point. Fresh reports that Iran may seek to impose tolls, potentially including cryptocurrency payments, for passage through the strait have alarmed governments and the shipping industry. While those reports remain unconfirmed, the mere possibility has intensified concerns.

The International Chamber of Shipping has warned that such tolls would be outside established international norms.

John Stawpert, the organization’s marine director, said: “Charging a toll for transits through an international waterway would be outside international norms and realistically would undermine international law, and the right to freedom of navigation and innocent passage.”

This is now as much an economic crisis as a military one, with escalating global implications. U.K. Foreign Minister Yvette Cooper is expected to use a major foreign policy speech to insist that shipping through Hormuz must remain toll-free and that Lebanon be explicitly included in the ceasefire framework. The British government is clearly linking the conflict to domestic economic pain, including higher mortgage costs, fuel prices, and food inflation.

In strategic terms, the latest developments reveal that the ceasefire is operating under multiple, conflicting interpretations. Washington sees it as a pause conditioned on compliance, while Tehran appears to see it as a broader framework tied to sanctions relief and regional de-escalation.

Israel does not recognize its applicability to Lebanon. Those differences make the truce highly vulnerable.

Stawpert said that the situation was “very, very confusing.”

The military language from Trump, the renewed hostilities in Lebanon, and the unresolved status of Hormuz all suggest that what markets briefly priced as de-escalation may instead be only an intermission in the conflict.