and driver rollback.
Heck yeah.
The app allows users to sort their games the way they want,
and the new UI allows users to customize the font and colors
in case neon green on black just doesn't jive with your particular brand of dyslexia.
But NVIDIA is not the only one bringing cool new features out of beta.
AMD, the perpetual Ash to NVIDIA's Gary,
launched Fluid Motion Frames 2,
alongside the rollout of its 24.9.1 Adrenalin driver.
AFMF2 demonstrates substantial performance gains
and fixes the first version's notorious input lag issue.
It also comes with improved geometric downscaling,
just in case you're the kind of psychopath
who likes watching movies in a super tiny window
rather than their native size.
I need to make room for Subway Surfers.
A pair of students at Harvard have released a paper
on an experiment they conducted
where they ran facial recognition software
through a pair of Meta smart glasses.
The glasses would then automatically cross-reference
that face with social media
and compile a profile on the person,
including name, biographical information,
personal associations,
and even sometimes addresses.
The students filmed themselves
approaching strangers in public places,
greeting them by name,
and claiming to have met them previously,
through a shared event or organization.
While the system was occasionally inaccurate,
this is a clear demonstration of how such technology
could be used for harm by a malicious actor.
Just think about the last time somebody
who knew your name, occupation, and birth date
stopped you on the street.
Literally never happened to me,
and I'm vaguely famous.
I'd probably assume I was the jerk
who forgot my old acquaintance and their snazzy glasses.
How could I forget those?
The student group has committed to not releasing the tool
that they showed everyone they had,
but Meta won't even commit to not training their AI
using smart glasses photos,
a feature that can notably be set off accidentally
using common keywords such as "look."
Meta, much like a giant irradiated squid kaiju,
seems determined to use every tendril of its organization
to violate our privacy in new and horrible ways
previously known only to science fiction.
And Eldritch Horror.
But Mark looks great, doesn't he?
Who cares if he made a deal with Cthulhu?
Microsoft is rolling out voice and vision capabilities
for its AI assistant Copilot,
as well as enhanced reasoning,
which would be a big deal if it wasn't for the fact
that most Microsoft AI innovations
are actually just OpenAI innovations
that were already rolled out several weeks ago.
It's been 28 years,
but despite the odds,
Microsoft is still trying to make Clippy happen
with an assistant that has a friendly, human-like voice
that dynamically responds to the user's emotions
and makes interjections like cool
and huh to give the impression of active listening.
It's as good a reminder as any
to try to call up your actual friends
and make plans this weekend.
You may be lonely,
but please don't be making small talk
with lobotomized Cortana lonely.
If it was Sydney,
it'd be a whole nother thing,
but she's gone.
According to Microsoft AI Tsar Mustafa Suleyman,
we're just a year away from ever-present,
highly capable AI assistants,
but he kind of has to make big claims like that,
given how crowded the field
of AI development has gotten
with deep-pocketed tech giants.
Gemini Live, Google's own enhanced voice mode,
is now freely available to all Android users,
while Nvidia just announced
its own GPT-4-class
open-source AI model
with weights already available and training code coming soon.
I didn't see that coming.
Honestly, at this point,
Taylor Swift could surprise drop a new AI assistant next week,
and it'd only be a little weird.
It'd be more weird if I didn't tell you
I'm just realizing the potential here.
Quick bits.
I'll tell you eventually.
Indie app developer Christian Selig,