![Colab IPython Interactive Demo Notebook: Natural Language Visual Search Of Television News Using OpenAI's CLIP – The GDELT Project](https://blog.gdeltproject.org/wp-content/uploads/2022-tve-semantic-search-clip-nuclear-1064x410.png)
Colab IPython Interactive Demo Notebook: Natural Language Visual Search Of Television News Using OpenAI's CLIP – The GDELT Project
![[P] I made an open-source demo of OpenAI's CLIP model running completely in the browser - no server involved. Compute embeddings for (and search within) a local directory of images, or search](https://external-preview.redd.it/W9YcFgBnfZDMlabAtrfk4CNq8IjFz7gmrlOz2NkSIKs.png?format=pjpg&auto=webp&s=7617eef5cbad7a9c0399650933d416ae43c14740)
[P] I made an open-source demo of OpenAI's CLIP model running completely in the browser - no server involved. Compute embeddings for (and search within) a local directory of images, or search
![OpenAI and the road to text-guided image generation: DALL·E, CLIP, GLIDE, DALL·E 2 (unCLIP) | by Grigory Sapunov | Intento](https://miro.medium.com/v2/resize:fit:1400/1*2j6zuWD2P54gMysikKaoXg.png)
OpenAI and the road to text-guided image generation: DALL·E, CLIP, GLIDE, DALL·E 2 (unCLIP) | by Grigory Sapunov | Intento
![Makeshift CLIP vision for GPT-4, image-to-language > GPT-4 prompting Shap-E vs. Shap-E image-to-3D - API - OpenAI Developer Forum](https://global.discourse-cdn.com/openai1/original/3X/a/a/aa8d0e3d9174bd76b770e7e440f8add35a5b3a12.jpeg)
Makeshift CLIP vision for GPT-4, image-to-language > GPT-4 prompting Shap-E vs. Shap-E image-to-3D - API - OpenAI Developer Forum