Do Cellphones Cause Cancer? RFK Jr.'s HHS Is Suppressing FDA Data Confirming Cellphone Safety.

[Photo: Robert F. Kennedy Jr. holding a cell phone to his ear | Aaron Schwartz/Sipa USA/Newscom]

The Food and Drug Administration's webpages reporting that cellphones don't cause cancer or other health hazards have been taken down. This comes as the Department of Health and Human Services, under Secretary Robert F. Kennedy Jr., is launching a new "study" on the health effects of cellphone usage. Under Kennedy's leadership, the anti-vaccination advocacy group Children's Health Defense sued the Federal Communications Commission in 2022, asserting that cellphone towers caused deleterious health effects, and lost. That same year, Kennedy tweeted about "a growing body of research that calls cellphone safety into question."

Since cellphones tend to be held close to users' heads, brain cancer is one of the main concerns for alarmists like the current HHS secretary. The vast majority of research has concluded that there is essentially no correlation between cellphone use and cancer incidence.

The National Cancer Institute has a great summary of these studies. Given what's happened to the FDA webpages, you might want to read the data while you still can.

Let's take a short statistical journey comparing U.S. cellphone usage and cancer incidence trends. In 1995, only about 33.8 million Americans used cellphones. By 2025, 98 percent of adult Americans owned a cellphone. As cellphone usage has skyrocketed, the overall cancer incidence rate for Americans has fortunately seen a slow but steady decline.

What about brain cancers? A June 2025 study in Environmental Research and Public Health, citing U.S. brain cancer incidence data from 2000 to 2021, reports that "mobile phone use does not appear to be associated with an increased risk of brain cancer, either malignant or benign." Another 2025 study in Neurology, parsing data over the same period, also found that "brain tumor incidence has shown a gradual decline since its peak in the early 2000s." (The only exception is that brain cancer incidence has slightly increased among American Indians.)

Next up: Federally funded research on the efficacy of beating dead horses?

The post Do Cellphones Cause Cancer? RFK Jr.'s HHS Is Suppressing FDA Data Confirming Cellphone Safety. appeared first on Reason.com.

Medical Guidance Shouldn’t Come From Washington

If there’s a silver lining, it’s that controversies like this may finally encourage clinicians, researchers, and patients to rely less on federal pronouncements and more on diverse, independent medical expertise.

The Warmth of Collectivism

“Replace the frigidity of rugged individualism with the warmth of collectivism!” says my new socialist mayor. Sounds so nice … No more greedy capitalists hoarding wealth. People share. It’s the socialist dream. What will replace capitalism and individualism? One model is the commune – that socialist system where people share, rather than greedily chasing money. […]

Britain’s bid to police the world’s internet

The post Britain’s bid to police the world’s internet appeared first on spiked.

Why are groceries so expensive in NYC?

The lowest-hanging fruit is to simply legalize selling groceries in more of the city. The most egregious planning barrier is that grocery stores over 10,000 square feet are not generally allowed as-of-right in so-called “M” districts, which are the easiest places to find sites large enough to accommodate the large stores that national grocers are used to. Many of these districts are mapped in places that are not what people have in mind when they think “industrial” — mixed-use neighborhoods with lots of housing like stretches of Williamsburg’s Bedford Avenue and almost all of Gowanus, even post-rezoning, are in fact mapped as industrial districts.

To open a full-sized grocery store in these areas, a developer must seek a “special permit,” which requires the full City Council to get together and vote for an exception to the rules. This is a long, uncertain process, and has in the past even been an invitation to corruption.

Most famously, the City Council uses this power to keep out Walmart at the behest of unions and community groups. Thwarted in its plans to open a store in East New York — a low-income Brooklyn neighborhood that could desperately use more grocery options — the nation’s largest grocer instead serves New Yorkers with a store just beyond the Queens/Nassau line in Valley Stream, rumored to be the busiest Walmart in the country. New Yorkers with a car and the willingness to schlep beyond city limits — or pay the Instacart premium — get access to cheaper groceries; the rest get locked out.

When politicians are willing to approve a grocery store, the price can be high.

That is by Stephen Smith, via Josh Barro.

The post Why are groceries so expensive in NYC? appeared first on Marginal REVOLUTION.

AI and the Art of Judgment

A New York magazine article titled “Everyone Is Cheating Their Way Through College” made the rounds in mid-2025. I think about it often, especially when I get targeted ads that are basically variations on “if you use our AI tool, you’ll be able to cheat without getting caught.” Suffice it to say, it’s dispiriting.

But the problem is not that students are “using AI.” I “use AI,” and it’s something everyone needs to learn how to do. The problem arises when students represent AI’s work as their own.  At a fundamental level, the question of academic integrity and the use of artificial intelligence in higher education is not technological. It’s ethical. 

I love generative artificial intelligence and use it for many, many things. Workouts. Recipes. Outlining and revising articles and lectures. Multiple-choice questions. Getting the code I need to tell R to turn a spreadsheet into a bunch of graphs. Tracking down citations. And much more.  The possibilities are endless. Used wisely, it multiplies productivity. Used foolishly, it multiplies folly. Debates about academic integrity and artificial intelligence force us to really reckon with who we are and what we’re doing.
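
To make that last item concrete, here is a sketch of the kind of code such a prompt might hand back. The file name, the histogram choice, and the column handling are purely illustrative assumptions, not anything from this essay:

library(ggplot2)

df <- read.csv("data.csv")  # illustrative file name for the exported spreadsheet

# Draw one histogram per numeric column and save each as a PNG
for (col in names(df)[sapply(df, is.numeric)]) {
  p <- ggplot(df, aes(x = .data[[col]])) +
    geom_histogram(bins = 30) +
    labs(title = col, x = col, y = "count")
  ggsave(paste0(col, ".png"), plot = p, width = 6, height = 4)
}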

The debate has split into unhelpful camps. One compares AI to a calculator. Another sees AI as the end of human thought. Both miss the point. The “just a calculator” crowd ignores how calculators and related software tools, as useful as they are, have relieved us of many of the burdens that come with thinking quantitatively. “It’s just like a calculator” is (kind of) true, but it’s not reassuring. Knowing which buttons to press to make a parabola appear is not the same thing as knowing what a parabola actually is and why it’s meaningful. The “end of thought” crowd ignores how generative AI is a powerful tool that can be used wisely. Is it an assistant? That’s great. Is it a substitute? That’s not.

The problem, though, is not the tool. It’s the user. People can use AI wisely or wickedly, just like they can any other tool. In the hands of Manly Dan from Gravity Falls or Paul Bunyan, an axe is a tool used to fell trees and provide shelter. In the hands of Jason Voorhees from the Friday the 13th horror franchise, it’s a tool for something else entirely.

In 2023, just as we were meeting and getting to know our new AI overlords, I wrote an article responding to the cynical student asking, “when am I ever gonna use this?” about the humanities and other studies that aren’t strictly vocational. My answer was (and is) “literally every time you make a decision.” Why? The decisions you make are a product of the person you are, and the person you are is shaped by the company you keep. Studying history, philosophy, literature, economics, and the liberal arts more generally is an exercise in keeping good company and becoming a certain kind of person: one who has spent sufficient time grappling with the best that has ever been thought and written to be trusted with important decisions. It is to become a person who has cultivated the art of judgment.

It’s an art we can practice poorly in a world where it’s trivially easy to outsource our thinking to ChatGPT and Gemini. Here’s an analogy. If you’ve never seen the movie Aliens, drop everything and watch it. It’s a classic among classics. If you have seen it, consider the end of the movie, when Sigourney Weaver’s character, Ellen Ripley, dons a P-5000 power loader suit to defeat the alien queen. She uses a tool that amplifies her strength, enabling her to accomplish what would otherwise be impossible.

The way many students use AI is much like wearing Ripley’s power loader suit to the gym. You might be able to “lift” 5,000 pounds in the suit, but it’s a mistake to think the suit is making you any stronger, a self-deception to think you could lift that much without it, and a lie to tell anyone you want to believe that you can. When you hand in work that’s mostly AI-generated, you’re not building muscle, learning to lift, or getting stronger. You’re racking up huge numbers while your muscles atrophy.

Sometimes, of course, using AI is like having a spotter when you’re doing squats or bench presses. I use AI in the gym as a trainer of sorts that tells me which exercise to do next. That’s one way to use it; the way too many students use it is like going to the gym and having the tool, the power loader suit, lift the weights for them.

Tools like ChatGPT, Gemini, Grok, and Claude should free up our time and energy to do higher-order work, not hide the fact that we can’t. Technology has made me significantly more productive: I dictated the original version of this essay into Google Docs on my phone using wireless earbuds, and then revised it using Gemini and Grammarly. What’s the difference between that and submitting AI-generated work? Using dictation tools and AI to generate and clean up an essay like this is like using Ripley’s power loader to move heavy stuff. Using AI to create text and trying to pass it off as your own is like using Ripley’s power loader suit to fake a workout.

 

I thank ChatGPT, Gemini, Grammarly Pro, and GPTZero.me for editorial assistance.

The post AI and the Art of Judgment appeared first on Econlib.
