Workshop ENSAD DG


Recent Activity


fffiloni posted an update 3 months ago

Visionary Walter Murch (editor for Francis Ford Coppola), in 1999:

“ So let's suppose a technical apotheosis some time in the middle of the 21st century, when it somehow becomes possible for one person to make an entire feature film, with virtual actors. Would this be a good thing?

If the history of oil painting is any guide, the broadest answer would be yes, with the obvious caution to keep a wary eye on the destabilizing effect of following too intently a hermetically personal vision. One need only look at the unraveling of painting or classical music in the 20th century to see the risks.

Let's go even further, and force the issue to its ultimate conclusion by supposing the diabolical invention of a black box that could directly convert a single person's thoughts into a viewable cinematic reality. You would attach a series of electrodes to various points on your skull and simply think the film into existence.

And since we are time-traveling, let us present this hypothetical invention as a Faustian bargain to the future filmmakers of the 21st century. If this box were offered by some mysterious cloaked figure in exchange for your eternal soul, would you take it?

The kind of filmmakers who would accept, even leap, at the offer are driven by the desire to see their own vision on screen in as pure a form as possible. They accept present levels of collaboration as the evil necessary to achieve this vision. Alfred Hitchcock, I imagine, would be one of them, judging from his description of the creative process: "The film is already made in my head before we start shooting."”

Read "A Digital Cinema of the Mind? Could Be" by Walter Murch: https://archive.nytimes.com/www.nytimes.com/library/film/050299future-film.html

fffiloni posted an update 7 months ago

🇫🇷
What impact is AI having on the film, audiovisual, and video game industries?
A forward-looking study for professionals
— CNC & BearingPoint | 09/04/2024

While Artificial Intelligence (AI) has long been used in the film, audiovisual, and video game sectors, the new applications of generative AI are upending our view of what a machine is capable of and carry an unprecedented potential for transformation. They impress with the quality of their output and consequently spark many debates, between expectations and apprehensions.

The CNC has therefore decided to launch a new AI Observatory in order to better understand the uses of AI and its real impact on the image industry. Within this Observatory, the CNC set out to draw up an initial overview by mapping the current and potential uses of AI at each stage of the creation and distribution of a work, identifying the associated opportunities and risks, particularly in terms of professions and employment. The main findings of this CNC / BearingPoint study were presented on March 6, during the CNC event "Créer, produire, diffuser à l'heure de l'intelligence artificielle".

The CNC is publishing the expanded version of this mapping of AI uses in the film, audiovisual, and video game industries.

Link to the full mapping: https://www.cnc.fr/documents/36995/2097582/Cartographie+des+usages+IA_rapport+complet.pdf/96532829-747e-b85e-c74b-af313072cab7?t=1712309387891
fffiloni posted an update 10 months ago

"The principle of explainability of ai and its application in organizations"
Louis Vuarin, Véronique Steyer
—› 📔 https://doi.org/10.3917/res.240.0179

ABSTRACT: The explainability of Artificial Intelligence (AI) is cited in the literature as a pillar of AI ethics, yet few studies explore its organizational reality. This study proposes to remedy this shortcoming, based on interviews with actors in charge of designing and implementing AI in 17 organizations. Our results highlight: the massive substitution of explainability by the emphasis on performance indicators; the substitution of the requirement of understanding by a requirement of accountability; and the ambiguous place of industry experts within design processes, where they are employed to validate the apparent coherence of ‘black-box’ algorithms rather than to open and understand them. In organizational practice, explainability thus appears sufficiently undefined to reconcile contradictory injunctions. Comparing prescriptions in the literature and practices in the field, we discuss the risk of crystallizing these organizational issues via the standardization of management tools used as part of (or instead of) AI explainability.

Vuarin, Louis, and Véronique Steyer. "Le principe d'explicabilité de l'IA et son application dans les organisations." Réseaux, vol. 240, no. 4, 2023, pp. 179-210.

#ArtificialIntelligence #AIEthics #Explainability #Accountability
fffiloni posted an update 11 months ago

I'm happy to announce that ✨ Image to Music v2 ✨ is ready for you to try, and I hope you'll like it too! 😌

This new version has been crafted with transparency in mind, so you can understand the process of translating an image into a musical equivalent.

How does it work under the hood? 🤔

First, we get a very literal caption from microsoft/kosmos-2-patch14-224; this caption is then given to an LLM agent (currently HuggingFaceH4/zephyr-7b-beta), whose task is to translate the image caption into a musical, inspirational prompt for the next step.

Once we have a nice musical text from the LLM, we can send it to the text-to-music model of your choice:
MAGNet, MusicGen, AudioLDM-2, Riffusion, or Mustango

Unlike the previous version of Image to Music, which used the Mubert API and could output curious and obscure combinations, this one only uses open-source models available on the Hub, called via the Gradio API.

I also expect the music to match the atmosphere of the input image more closely, thanks to the LLM agent step.

Pro tip: you can adjust the inspirational prompt to match your expectations, according to the chosen model and the specific behavior of each one 👌

Try it, explore different models and tell me which one is your favorite 🤗
—› fffiloni/image-to-music-v2
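
For reference, here's a minimal sketch of the three-step pipeline in Python with gradio_client. The Space names for steps 1 and 3 and every api_name/argument list are assumptions; check each Space's "Use via API" page for the real signatures.

```python
# Minimal sketch of the caption -> LLM prompt -> music pipeline.
# Space names, api_name values, and argument lists are assumptions.
from gradio_client import Client

# Step 1: get a very literal caption of the input image (assumed Space).
captioner = Client("ydshieh/Kosmos-2")  # assumed Space hosting kosmos-2-patch14-224
caption = captioner.predict("input.jpg", api_name="/caption")

# Step 2: ask the LLM agent to turn the caption into a musical prompt.
llm = Client("HuggingFaceH4/zephyr-chat")  # assumed chat Space for zephyr-7b-beta
musical_prompt = llm.predict(
    f"Translate this image description into an inspiring, musical prompt: {caption}",
    api_name="/chat",
)

# Step 3: send the musical prompt to a text-to-music Space of your choice.
musicgen = Client("facebook/MusicGen")  # assumed Space; swap in MAGNet, AudioLDM-2, etc.
audio_path = musicgen.predict(musical_prompt, api_name="/predict")
print(audio_path)
```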
fffiloni posted an update 11 months ago

InstantID-2V is out! ✨

It's like InstantID, but you get a video instead. Nothing crazy here, it's simply a shortcut between two demos.

Let's see how it works with the Gradio API:

1. We call InstantX/InstantID with a conditional pose from a cinematic camera shot (example provided in the demo)
2. Then we send the previously generated image to ali-vilab/i2vgen-xl

Et voilà 🤗 Try it: fffiloni/InstantID-2V


Note that generation can take quite a while, so take the opportunity to brew yourself some coffee 😌
If you want to skip the queue, you can of course reproduce this pipeline manually; see the sketch below.
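
Here's a rough sketch of those two calls with gradio_client in Python. The api_name values and argument lists are assumptions; check each Space's "Use via API" page for the exact signatures.

```python
# Rough sketch of reproducing the InstantID -> i2vgen-xl shortcut manually.
# api_name values and argument lists are assumptions.
from gradio_client import Client

# Step 1: generate a portrait with InstantID, conditioned on a cinematic pose.
instantid = Client("InstantX/InstantID")
image_path = instantid.predict(
    "face.jpg",        # identity image
    "pose.jpg",        # conditional cinematic camera-shot pose
    "cinematic shot",  # prompt
    api_name="/generate_image",  # assumed endpoint
)

# Step 2: animate the generated image with i2vgen-xl.
i2v = Client("ali-vilab/i2vgen-xl")
video_path = i2v.predict(
    image_path,
    "cinematic camera shot",      # text prompt
    api_name="/image_to_video",   # assumed endpoint
)
print(video_path)
```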
fffiloni posted an update 11 months ago

Quick build of the day: LCM Supa Fast Image Variation

We combine moondream1's vision abilities with LCM SDXL's speed to generate a variation of the subject of the input image.
All of that thanks to Gradio APIs 🤗
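
If you want to reproduce the combo yourself, here's a minimal gradio_client sketch; the Space names and api_name values are assumptions, so check each Space's "Use via API" page.

```python
# Minimal sketch of the two-step pipeline: describe the subject, then
# regenerate it quickly with LCM SDXL. Space names and endpoints are assumptions.
from gradio_client import Client

# Step 1: describe the subject of the input image with moondream1.
vision = Client("vikhyatk/moondream1")  # assumed Space name
description = vision.predict(
    "input.jpg",
    "Describe the main subject of this image.",
    api_name="/answer_question",  # assumed endpoint
)

# Step 2: use the description as a prompt for a fast LCM SDXL generation.
lcm = Client("latent-consistency/lcm-lora-for-sdxl")  # assumed Space name
variation_path = lcm.predict(description, api_name="/predict")  # assumed endpoint
print(variation_path)
```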

Try the space: https://huggingface.co/spaces/fffiloni/lcm-img-variations
fffiloni posted an update 11 months ago

Just published a quick community blog post, mainly aimed at art and design students, but also an attempt to nudge AI researchers toward better appreciating the benefits of collaborating with designers and artists 😉
Feel free to share your thoughts !

"Breaking Barriers: The Critical Role of Art and Design in Advancing AI Capabilities" 📄 https://huggingface.co/blog/fffiloni/the-critical-role-of-art-and-design-in-advancing-a


This short publication follows the results of two AI workshops held at École des Arts Décoratifs - Paris, led by Etienne Mineur, Vadim Bernard, Martin de Bie, Antoine Pintout & Sylvain Filoni.
fffiloni updated a Space 12 months ago

fffiloni posted an update 12 months ago

I just published a Gradio demo for Alibaba's DreamTalk 🤗

Try it now: fffiloni/dreamtalk
Paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models (2312.09767)

DreamTalk is a diffusion-based audio-driven expressive talking head generation framework that can produce high-quality talking head videos across diverse speaking styles. DreamTalk exhibits robust performance with a diverse array of inputs, including songs, speech in multiple languages, noisy audio, and out-of-domain portraits.
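
If you'd rather call the demo programmatically, a minimal gradio_client sketch could look like this; the api_name and argument list are assumptions, so check the Space's "Use via API" page for the real signature.

```python
# Hypothetical programmatic call to the DreamTalk demo Space.
# api_name and arguments are assumptions.
from gradio_client import Client

client = Client("fffiloni/dreamtalk")
video_path = client.predict(
    "speech.wav",    # driving audio (song, speech, even noisy audio)
    "portrait.png",  # source portrait
    api_name="/predict",  # assumed endpoint
)
print(video_path)
```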
fffiloni posted an update 12 months ago

just setting up my new hf social posts account feature 🤗