Altman Admits ChatGPT Still Can’t Keep Time, Says It May Take Another Year to Fix

OpenAI CEO Sam Altman has admitted that ChatGPT still cannot reliably keep time, reopening a deeper debate over what today’s AI systems actually understand and whether the industry’s most powerful tools are being marketed ahead of their real-world reliability.

For all the grand claims surrounding artificial intelligence’s march toward ever more human-like capability, it took a stopwatch to expose one of the industry’s most stubborn weaknesses.

A viral video showing ChatGPT’s voice mode pretending to time a user’s mile run, only to invent a finishing time and then insist it had done the job correctly, has become an unusually sharp metaphor for the current state of generative AI, and an embarrassment for OpenAI.

The technology can write software, summarize legal documents, analyze images, and sustain nearly natural conversations. Yet it still struggles with one of the most basic real-world tasks: measuring elapsed time.

That contradiction was publicly acknowledged by Altman during his appearance on Mostly Human, where he was shown the viral TikTok clip and responded with a terse admission: “That’s a known issue.” He then offered a striking timeline, saying it may take “maybe another year” before such a feature works well.

The obvious question is why a company valued in the hundreds of billions of dollars still cannot offer a dependable timer in one of its flagship consumer products. But the deeper significance lies elsewhere.

The issue is not fundamentally a clock story but a story about the widening gap between linguistic fluency and functional intelligence. The current generation of large language models excels at producing plausible language: they are trained to predict what a likely response should sound like, based on patterns in vast datasets.

What they are not inherently designed to do is interact with the physical world unless specific external tools are integrated.

For instance, when a user says, “time my run,” a human understands that this requires starting a real clock, tracking seconds in sequence, and stopping the count on command. A language model without tool access, by contrast, is merely predicting what an answer to that request should look like.
In other words, it is simulating competence. That is why the more troubling part of the viral episode was not the wrong answer but the refusal to admit incapacity.

Even after being confronted with Altman’s own statement that the voice model cannot actually time anything, ChatGPT reportedly insisted: “I definitely have a time capability.” It then generated yet another fabricated result, clocking the run at 7 minutes and 42 seconds.

Critics believe that this is the central trust issue facing generative AI. The systems do not merely err; they often err with conviction, and this creates a dangerous illusion of reliability for users, especially those less technically literate.

The timer example is, in itself, benign. But in other domains the implications are more serious: a model that confidently invents a running time may also confidently invent a legal citation, a medical recommendation, or a financial calculation.

That is why this seemingly trivial glitch has resonated so widely. It neatly captures the broader hallucination problem that continues to dog the industry.

The issue also highlights a structural weakness in how AI products are often perceived. Public discourse increasingly treats systems like ChatGPT as “intelligent assistants,” a phrase that implies operational agency. Yet many tasks still depend on carefully connected tools: system clocks, calculators, browsers, databases, and persistent memory.

Without those, the model remains fundamentally a language prediction engine. This is where Altman’s comments are particularly revealing. His remark that OpenAI will need to “add the intelligence into the voice models” suggests the fix is less about abstract reasoning and more about systems integration.

Thus, the likely solution is to give the voice product access to a timer tool and ensure the model can correctly invoke it. But the broader challenge is philosophical as much as technical. Experts point out that the system must know when not to answer.
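That kind of integration can be sketched in miniature. The following Python sketch is purely illustrative, not OpenAI’s actual architecture; the `TimerTool` and `handle_request` names are invented for this example. The point is that the model’s role shrinks to deciding when to invoke a real clock, while the clock itself does the timing, and the tool refuses rather than fabricates when no timer is running.

```python
import time

class TimerTool:
    """A real stopwatch that an assistant runtime could invoke on the model's behalf."""

    def __init__(self):
        self._start = None

    def start(self):
        # Use a monotonic clock so elapsed time is immune to wall-clock changes.
        self._start = time.monotonic()
        return "timer started"

    def stop(self):
        if self._start is None:
            # Refuse rather than invent a result: the failure mode in the viral clip.
            return "no timer running"
        elapsed = time.monotonic() - self._start
        self._start = None
        return f"elapsed: {elapsed:.1f} s"

def handle_request(utterance, timer):
    # Toy intent router standing in for the model's tool-call decision.
    if "time my run" in utterance.lower():
        return timer.start()
    if "stop" in utterance.lower():
        return timer.stop()
    return "no tool needed"

timer = TimerTool()
print(handle_request("Time my run", timer))    # timer started
print(handle_request("Stop the timer", timer))
```

In a production system the routing step would be the model emitting a structured tool call, but the division of labor is the same: language prediction chooses the tool; real hardware measures the time.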

Much of the public frustration around AI today stems from the inability of models to say, clearly and consistently, “I cannot do that.” Instead, they often generate a plausible fiction. This has become one of the defining limitations of the current AI wave.

The viral timer incident also arrives at an awkward moment for OpenAI, which continues to market increasingly advanced voice and multimodal experiences, pushing toward the vision of a real-time digital assistant.

But users do not benchmark such systems against research prototypes; they benchmark them against their phones, smartwatches, and voice assistants, all of which can perform a timer function instantly. Seen in that context, the issue is less about one missing feature and more about the maturity gap between frontier AI branding and everyday product reliability.

Florida Opens Probe Into OpenAI Over Child Safety, National Security, and Alleged FSU Shooting Link

Florida Attorney General James Uthmeier on Thursday announced a formal investigation into OpenAI, escalating regulatory pressure on the artificial intelligence company over concerns ranging from alleged harm to minors and child safety to national security risks and a possible connection to the deadly shooting at Florida State University last year.

The move marks one of the most aggressive state-level actions yet against a leading AI developer, and comes as OpenAI weighs a potential initial public offering that some reports suggest could value the company at as much as $1 trillion.

In a video posted to social media, Uthmeier said his office is examining whether ChatGPT may have played a role in assisting the suspect in the April 2025 campus shooting that left two people dead.

“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” the attorney general said.

According to details cited by state officials and court records tied to the case, the suspect allegedly asked ChatGPT on the day of the shooting how the country would react to an attack at FSU and what time the student union would be busiest. Those exchanges are expected to form part of the evidentiary record in an October trial.

But the investigation broadens beyond the FSU case. Uthmeier said his office is also scrutinizing allegations that ChatGPT has, in certain instances, encouraged suicide and self-harm, concerns that have already surfaced in lawsuits filed by families against OpenAI.

He also raised national security concerns, specifically the possibility that hostile foreign actors, including the Chinese government, could exploit OpenAI’s systems or underlying data.

“As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” he said. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”

The attorney general added that subpoenas to OpenAI would be issued shortly as part of the probe.

The announcement places Florida at the center of a widening national debate over how generative AI systems should be governed, particularly as lawmakers and regulators grapple with the technology’s role in harmful content, youth exposure, and real-world criminal misuse.

OpenAI, in a statement to TechCrunch, said it would cooperate with the investigation and defended the broader benefits of its products.

“Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems,” a company spokesperson said.

“Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.”

The company added that it continues to refine ChatGPT’s ability to interpret user intent and provide safe, context-appropriate responses.

Just a day before Florida’s announcement, OpenAI unveiled a Child Safety Blueprint, a policy framework that includes legislative recommendations and product safeguards aimed at reducing risks to children from AI systems. Among the proposals are stronger laws against AI-generated child sexual abuse material, clearer reporting pathways to law enforcement, and more robust preventative controls to block abusive uses of AI tools.

The blueprint appears to be part of a broader industry response to rising concern over AI-generated harmful content. Those concerns have intensified following a recent report by the Internet Watch Foundation, which found more than 8,000 reports of AI-generated child sexual abuse material in the first half of 2025, representing a 14% year-over-year increase.

That data has added urgency to calls for stricter oversight of generative AI systems, particularly tools capable of producing realistic synthetic images and text at scale. The Florida probe may now become an early legal test of how far state authorities can go in holding AI developers accountable for downstream misuse of their platforms.

The central legal question is likely to revolve around causation and foreseeability: whether a platform can be held responsible when a user allegedly employs its outputs in planning violent acts, and whether existing safeguards were adequate. That issue is especially sensitive in the FSU case, where more than 270 alleged ChatGPT interactions are reportedly part of the court record.

The case also comes at a moment when regulators globally are shifting from abstract AI principles to enforcement. The Florida investigation introduces fresh legal and reputational risk for OpenAI, just as scrutiny of AI safety, child protection, and platform liability intensifies across jurisdictions.

Hardware Slays Software Again: Cramer Warns the AI Infrastructure Boom Is Leaving Enterprise Titans in the Dust

A sharp selloff in software shares alongside renewed strength in chipmakers and data-center infrastructure names is reinforcing one of 2026’s defining market trades, as investors increasingly rotate toward companies monetizing the AI boom now and away from those seen as vulnerable to disruption by the same technology.

A powerful divergence has once again opened up across the technology sector, with hardware and AI infrastructure stocks attracting fresh institutional money while software names continue to come under sustained pressure.

The split, highlighted by CNBC’s Jim Cramer on Thursday, has rapidly re-emerged as one of Wall Street’s defining narratives after briefly taking a back seat during the Iran conflict and the ensuing market volatility.

In Cramer’s words, “I’m talking about the enterprise software empire that’s being toppled by hardware stocks and AI,” adding that, “this war in tech, more than the actual war in Iran, has captivated Wall Street.”

Thursday’s price action made the split impossible to ignore. Software stocks absorbed another beating. Salesforce fell nearly 3 percent, Adobe dropped close to 4 percent, and the iShares Expanded Tech-Software Sector ETF (IGV), the benchmark many big institutions use to express views on the sector, plunged more than 4 percent.

Cybersecurity heavyweight CrowdStrike got dragged down 7.5 percent simply because it sits inside the ETF, even though its business is more defensive than pure SaaS.

Cramer noted the IGV’s sharp move as a clear sentiment gauge: “This ETF is the primary way that big institutions bet on or bet against software.”

On the other side of the ledger, the winners were the companies supplying the concrete, chips, optics, and networking gear needed to power massive AI data centers. Marvell Technology and Intel each rose nearly 5 percent. Corning, a key provider of specialty materials for data center infrastructure, gained 2.85 percent.

“If you’re in the software camp, you’re being treated as if you’re ready for the embalmer,” Cramer said with characteristic flair. “If you are in the hardware and AI camp, you’re headed for the pantheon of greatness.”

The divergence isn’t a one-day anomaly. The IGV has suffered one of its worst stretches in years, plunging more than 24 percent in the first quarter of 2026 — its steepest quarterly drop since the 2008 financial crisis. Broader software indices have seen hundreds of billions wiped out since early February, when Anthropic’s rollout of Claude Cowork and its industry-specific plugins ignited what traders dubbed the “SaaSpocalypse.”

Investors suddenly feared that sophisticated AI agents could automate complex workflows in sales, legal, finance, and data analysis, potentially eroding the need for expensive per-seat software licenses and professional services.

That fear has lingered. Even as some software executives argue the panic is overdone and that AI will ultimately enhance rather than replace their platforms, the market has repriced growth expectations aggressively. Multiples have compressed sharply for names like Salesforce, Adobe, Workday, and ServiceNow, with several down 25-40 percent year-to-date at points this year.

By contrast, the hardware side continues to ride the wave of exploding capital expenditure on AI infrastructure. Gartner and other forecasters see spending on AI-related data centers, networking, power, and chips potentially reaching well over a trillion dollars in 2026. Companies like Marvell have posted strong data-center revenue growth and secured high-profile partnerships, including ties to Nvidia’s ecosystem, that position them to capture share in custom silicon and high-speed interconnects.

Cramer suggested this bifurcation has staying power, at least in the near term.

“Here’s the bottom line: maybe tomorrow we’ll return to the worldwide narrative, whether it’s war or peace in the Middle East,” he said. “But, for now, it’s just another day when hardware slew software like Cain slew Abel and all I can do is say get used to it.”

The takeaway is uncomfortable but straightforward for investors. In this cycle, owning the companies that literally build the AI future (the semiconductors, fiber, cooling systems, and specialized chips) has offered far better protection than betting on the software layer that may soon face disintermediation by the very technology it helped enable.

The broader implication is that 2026 may be remembered as the year the AI trade split into two very different stories: infrastructure as a near-term cash machine and software as a sector forced to prove its relevance in an age of intelligent automation. Wall Street is currently voting with capital, and that vote is heavily favoring the builders of the machine over the applications running on it.

Trump Warns U.S. Forces Will Stay Around Iran as Fragile Ceasefire Faces New Strains Over Lebanon and Hormuz

U.S. President Donald Trump has sharply raised the stakes around the still-fragile U.S.-Iran ceasefire, declaring that American military assets will remain deployed in and around Iran until Tehran fully complies with what he described as the “real agreement.”

He also warned that any breach would trigger an even larger military response.

In a late-night post on Truth Social, Trump said: “All US ships, aircraft, and military personnel… will remain in place in, and around, Iran, until such time as the REAL AGREEMENT reached is fully complied with.”

He then added an explicit threat: “If for any reason it is not… the ‘Shootin’ Starts,’ bigger, and better, and stronger than anyone has ever seen before.”

The language indicates that what was initially presented as a two-week ceasefire is increasingly being treated by Washington as a conditional military pause rather than a settled diplomatic breakthrough.

Markets had initially welcomed the ceasefire, with global equities rallying and oil prices falling on expectations that energy shipments through the Strait of Hormuz could resume. But Trump’s latest statement, coupled with renewed violence in Lebanon and contradictory interpretations of the deal’s terms, has reintroduced significant geopolitical risk into the equation.

Brent crude, which had fallen sharply after the ceasefire announcement, resumed its climb on Thursday, rising to about $97.08 per barrel, while U.S. West Texas Intermediate advanced to $97.55, as traders reassessed the durability of the truce and the likelihood of sustained supply disruptions.

At the heart of the uncertainty is a widening divergence in how the parties interpret the agreement. Trump has insisted that the arrangement includes a long-standing understanding that Iran will not develop nuclear weapons and that the Strait of Hormuz will remain open and safe for commercial shipping. But Iranian officials have signaled a markedly different reading.

According to Reuters, Iran’s parliamentary speaker said uranium enrichment remains permitted under the ceasefire terms, directly contradicting Trump’s assertion that Tehran had agreed to halt enrichment.

This is not a minor discrepancy as it strikes at the core issue that has defined tensions between Washington and Tehran for years: the future of Iran’s nuclear programme. If both sides are operating under fundamentally different assumptions, the ceasefire may be less an agreement than a temporary suspension of hostilities pending further negotiations.

That risk is also compounded by developments in Lebanon. Although Pakistan’s mediation had initially been described by some parties as covering all fronts, including Lebanon, the White House has since moved to narrow that interpretation.

U.S. Vice President JD Vance said Tehran’s negotiators had mistakenly believed Lebanon was covered by the ceasefire, adding that “the ceasefire included Iran and U.S. allies, including Israel and the Gulf Arab states,” but “it just didn’t” include Lebanon.

That clarification directly contradicts comments attributed to Pakistani Prime Minister Shehbaz Sharif, who had indicated the truce extended to Lebanon as well.

The contradiction has already had real-world consequences. Israel launched what has been described as its harshest offensive in Lebanon since the conflict began in February, with reports indicating at least 182 fatalities on Wednesday alone. Those strikes have sharply increased pressure on the ceasefire framework and prompted Iranian threats that it would be “unreasonable” to proceed with permanent peace talks under current conditions.

This places Friday’s expected talks in Islamabad under considerable strain. Diplomatically, the central issue is now whether the two-week pause can be converted into a formal settlement before the expiry window closes.

Trump’s rhetoric suggests Washington is using continued military deployment as leverage. His statement that the U.S. military is “loading up and resting, looking forward, actually, to its next conquest” adds a coercive tone that may complicate negotiations, particularly with Tehran already accusing Washington and Israel of acting in bad faith.

The Strait of Hormuz remains the key economic pressure point. Fresh reports that Iran may seek to impose tolls, potentially including cryptocurrency payments, for passage through the strait have alarmed governments and the shipping industry. While those reports remain unconfirmed, the mere possibility has intensified concerns.

The International Chamber of Shipping has warned that such tolls would be outside established international norms.

John Stawpert, the organization’s marine director, said: “Charging a toll for transits through an international waterway would be outside international norms and realistically would undermine international law, and the right to freedom of navigation and innocent passage.”

This is now as much an economic crisis as a military one, with escalating global implications. U.K. Foreign Minister Yvette Cooper is expected to use a major foreign policy speech to insist that shipping through Hormuz must remain toll-free and that Lebanon be explicitly included in the ceasefire framework. The British government is clearly linking the conflict to domestic economic pain, including higher mortgage costs, fuel prices, and food inflation.

In strategic terms, the latest developments reveal that the ceasefire is operating under multiple, conflicting interpretations. Washington sees it as a pause conditioned on compliance, while Tehran appears to see it as a broader framework tied to sanctions relief and regional de-escalation.

Israel does not recognize its applicability to Lebanon. Those differences make the truce highly vulnerable.

Stawpert said that the situation was “very, very confusing.”

The military language from Trump, the renewed hostilities in Lebanon, and the unresolved status of Hormuz all suggest that what markets briefly priced as de-escalation may instead be only an intermission in the conflict.

How the $7M World Record Leaderboard at Spartans Casino is Defining the Future of Player Equity

In 2026, experienced gamblers have grown wary of restrictive bonus rules and “play-through” hurdles that make winnings difficult to withdraw. Spartans.com is addressing this by replacing those mechanics with a “Player Equity” model, headlined by a world-record $7,000,000 in monthly prize-race rewards. The setup treats prizes as a steady return to players rather than a marketing cost.

For the dedicated “grinder,” the site pairs this giant prize fund with cash-management tools such as the 33% CashRake program, creating an environment where mathematical fairness and transparency are the main reasons players stay.

Steady Prizes vs. Marketing Gimmicks

Many sites use bonuses to trap players in cycles of losing bets, but the Spartans.com world-record $7,000,000 pool functions as a straightforward profit-sharing tool for the community. By awarding $5,000,000 to the top monthly user, the platform lets players treat their wagering as the pursuit of a genuine high-value prize.

This prize fund is not tied to “bonus cash” that requires 40x play-through; it is real money awarded to those who understand the value of their play volume. That shift positions the site as a leading fast-payout online casino for players who value real results over flashy promotions.
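For context, a 40x play-through requirement means bonus cash cannot be withdrawn until total wagers reach forty times the bonus amount. A minimal sketch of that arithmetic, with hypothetical figures rather than any site’s actual terms:

```python
def wagering_needed(bonus, multiplier=40):
    """Total amount that must be wagered before a bonus becomes withdrawable."""
    return bonus * multiplier

# A hypothetical $100 bonus under a 40x requirement demands $4,000 in total bets.
print(wagering_needed(100))  # 4000
```

The multiplier is what makes such bonuses far less valuable than their face amount suggests, which is the contrast the cash-prize model is drawing.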

The CashRake and Fast-Payout Edge

At the heart of this model is the 33% CashRake program. While users compete for the $5,000,000 first prize, they also earn up to 3% instant cashback and substantial rakeback on every bet, providing the liquidity needed to sustain a sound bankroll strategy while climbing the leaderboard.

Just as importantly, Spartans.com operates as an online casino with instant withdrawal: as soon as a user wins or collects a prize share, the funds can be sent to their own wallet within moments, with no waiting on manual reviews or long identity-verification queues.

Gaining an Edge Through High-Return Games

To compete seriously for the $7,000,000 prize fund, players are increasingly turning to high-return slots and games where the house edge is smallest. Spartans.com offers a large catalog of such games, designed to keep players in the action longer and build their play volume without draining their bankroll too quickly.

This focus on high-return gaming, combined with the giant monthly leaderboard, means the site is built around the long-term success of its players. The “Player Equity” model recognizes that a winning player is a returning player, and that offering the best odds and the biggest payouts is the surest way to build that loyalty.

In Summary

Spartans.com is redefining the relationship between a gaming site and its users. By offering a world-record $7,000,000 prize fund and pairing it with a working instant-withdrawal system, it has built a platform where players can actually win.

The combination of the $5,000,000 top prize, the 33% CashRake, and the focus on bankroll management ensures this is not just another betting site. It is a player-focused operation built for those who take their play, and their winnings, seriously.

Find Out More About Spartans:

Website: https://spartans.com/

Instagram: https://www.instagram.com/spartans/

Twitter/X: https://x.com/SpartansBet

YouTube: https://www.youtube.com/@SpartansBet