News You Need to Know Today

GenAI in American Education | Industry watcher’s digest | Partner news

Tuesday, April 2, 2024


Generative AI in Education

Ready and willing but not yet able: America’s schools staring down GenAI

The nation’s K-12 teachers and school administrators are intrigued by—yet anxious about—the advance of AI into their world. The mixed emotions largely stem from the group’s overall lack of firsthand experience with the technology. Also contributing to the shaky confidence is uneven advance planning for how and when to incorporate generative AI into classroom activities.

The findings are from a survey of more than 1,000 teachers and administrators, most of whom work for public school systems. The work was conducted late last year by the San Francisco-based AI Education Project and is described in a new report.

The group, called aiEDU for short, has as its mission increasing “AI literacy” in education. aiEDU defines this term as “empowering students with the knowledge and skills they need to navigate the AI-driven world responsibly and effectively.” Here are excerpts from five key takeaways as presented in an executive summary.

1. The overwhelming majority of K-12 educators believe that a.) professional development should include sessions on the implications of AI, and b.) lesson plans should include materials to help students learn about those implications as well.

“K-12 educators are equal parts concerned about and intrigued by the potential uses of AI in the classroom, particularly generative AI,” the report authors write. “Despite their apprehensions, teachers and administrators alike are open not only to training on its potential uses but also on integrating this emerging technology into the curricula.” More:

More than 80% of respondents say they believe professional development should extend to AI, and 75% advocate for curricula that expose students to information on the topic.

2. Most K-12 educators have at least heard of generative AI, but a majority haven’t used these tools. And they’re divided about whether they want to.

AI in general and generative AI specifically are “much more divisive than previous technological revolutions,” aiEDU notes.

Most K-12 educators have yet to see the value these tools can provide, with some completely closed to their potential.

3. K-12 educators simultaneously downplay the impact of generative AI in the classroom and express concerns about its use. They still think it should be part of the curriculum.

When respondents filled out the survey in late 2023, ChatGPT and other generative AI tools had been on the market for less than a year, the authors point out. That helps explain why “the overwhelming majority believe the technology has had no more than a moderate impact on students to date.” More:  

It also explains the conflicting views educators hold about generative AI: Despite its potential for misuse, they agree that students still need to understand it and reap its benefits.

4. K-12 educators recognize the potential benefits of using generative AI in the classroom—but feel most passionately about the potential pitfalls.

As with all emerging technologies, successfully integrating generative AI into the classroom “requires K-12 educators to experiment with new uses and, importantly, accept that not all of them will be successful,” the group writes. “That requires a leap of faith that, based on the results of the survey, many are not yet comfortable taking.” More:

One respondent expressed fear that AI will make students so reliant on technology that “they (can) no longer think for themselves. … They won’t see a need to learn and therefore won’t.”

5. K-12 administrators are more hopeful than teachers about the impact generative AI could have on teaching and learning.

The survey results show that administrators (62.1%) are more likely than teachers (49.9%) to have “slightly positive” or “strongly positive” feelings toward AI in general.

Though it’s impossible to know exactly why this is true, one possible reason is that administrators are afforded a broader view of the work going on in their schools.

For the executive summary and a link to the full report, click here.

 


The Latest from our Partners

Bayer Radiology uses Activeloop's Database for AI to pioneer medical GenAI workflows - Bayer Radiology collaborated with Activeloop to make its radiological data AI-ready faster. Together, the parties developed a 'chat with biomedical data' solution that allows users to query X-rays with natural language. The collaboration significantly reduced data preparation time, enabling efficient AI model training. The Intel® Rise Program further bolstered Bayer Radiology’s collaboration with Activeloop, with Intel® technology used at multiple stages of the project, including feature extraction and the processing of large batches of data. For more details on how Bayer Radiology is pioneering GenAI workflows in healthcare, read more.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • What is Microsoft looking to do—make itself the IT department of OpenAI? Some snarky observers have been suggesting as much ever since the Redmond, Wash., behemoth pumped $13 billion into the San Fran overnight sensation. And now the snickering may turn serious. That’s because someone leaked word the two are planning a mega-supercomputer costing $100 billion or more. The beast is to be called “Stargate.” The Information had the scoop last week behind a paywall, and now Business Insider is reporting a Microsoft spokesperson “declined to comment directly on the report but highlighted the company’s demonstrated ability to build pioneering AI infrastructure.” Microsoft is “always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability,” someone with close ties to Redmond whispered to BI.
     
  • The same OpenAI is holding back on releasing its Voice Engine to the general public. This is the company’s iteration of a tool for impersonating a human’s voice and speaking style. In internal testing, OpenAI has said, Voice Engine proved it can convincingly ape a person’s oral emanations from sound clips of just 15 seconds. And the audio doesn’t need to be of especially high quality. Why not put the nifty tech out there? Because, OpenAI says in a March 29 blog post, “we are taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse.” That’s probably wise in the run-up to this year’s presidential election. Then again, these tools will only get better for the 2028 craziness. We can run, but we can’t hide. Read the post.
     
  • Federal spending on AI skyrocketed from 2022 to 2023, nearly reaching $700 million as of last summer. But healthcare AI watchers shouldn’t get too excited. The lion’s share of the spree went to the military. The Brookings Institution analyzes the relevant federal contracts in considerable detail in a report posted March 26. Noting a clear shift from experimentation funding to implementation contracts, the report’s authors suggest this development, considered alongside the heavy DoD allotment, “reflects a strategic response to global competition and security challenges.” Full report here.
     
  • Think twice before hiring a chatbot as your therapist. That’s the advice of a technology researcher keenly interested in the potential and pitfalls of artificial emotional intelligence. “When emotional AI is deployed for mental health care or companionship, it risks creating a superficial semblance of empathy that lacks the depth and authenticity of human connections,” warns A.T. Kingsmith, PhD, of Ontario College of Art & Design University. The technology’s shortcomings are “particularly concerning in therapeutic settings, where understanding the full spectrum of a person’s emotional experience is crucial for effective treatment.” Read the rest at The Conversation.
     
  • To continue optimizing care while minimizing risk, healthcare AI will need to increasingly incorporate ‘dynamic consent.’ This is what you call it when patients and research participants can give or revoke data permission at will, depending on how they feel about what the bytes are to be used for. Sounds complicated, but evidently in Australia they’re looking to cover all kinds of contingencies. Individual ownership of data, the authors of a newly updated report explain, “can be achieved through several approaches, such as distributed storage and homomorphic encryption of data, self-sovereign identity for management of credentials and tamper-proof decentralized dynamic consent objects.”
     
  • Hopes in the promise of healthcare AI are running high in Rwanda. Local coverage of a conference held last week in Kigali, the country’s capital city, quotes radiologist Emmanuel Rudakemwa, MD. “The Rwandan government has come up with very many innovative solutions to circumvent the issues of low human resource capacity that we have,” the physician says. “We are trying to see how AI, computing or machine learning—be it machine-machine or man-machine, or deployment of the internet of things—can support the little human resource[s] that we have.”
     
  • The international AI community, if there is such a thing, now has a knight to call its own. It’s Demis Hassabis, CEO and co-founder of Google’s AI subsidiary DeepMind. His native U.K. awarded him the high honor for his “services to artificial intelligence.” And by the way, when still a lad in London, Sir Demis was a chess prodigy. Bet you didn’t know that till now. TechCrunch has more.
     
  • March 31 marked the 25th birthday of The Matrix. TechRadar looks back to consider the foresight of its makers. “The chilling plot at its heart—namely the rise of an artificial general intelligence (AGI) network that enslaves humanity—has remained consigned to fiction more so than it’s ever been considered a serious scientific possibility,” writes channel editor Keumars Afifi-Sabet. “With the heat of the spotlight now on AI, however, ideas like the Wachowskis’ are beginning to feel closer to home than we had anticipated.” Read the piece.
     
  • Research headlines of note:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand