Clip-image-search is a Hugging Face Gradio app that returns the image best matching a text query. The CLIP model used here is trained on the Flickr30k dataset. The app encodes the input text query with CLIP's text encoder, searches the precomputed Flickr30k image embeddings for the closest match, and displays the image corresponding to that embedding.
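The retrieval step can be sketched as a cosine-similarity search over precomputed embeddings. This is a minimal illustration, not the app's actual code: the function and variable names (`search`, `image_embeddings`, `image_paths`) are hypothetical, and in the real app the query vector would come from CLIP's text encoder rather than being supplied directly.

```python
import numpy as np

def search(query_embedding, image_embeddings, image_paths):
    """Return the path and score of the image whose embedding is
    most similar (by cosine similarity) to the query embedding."""
    # Normalize the query and all image embeddings to unit length,
    # so a dot product equals cosine similarity.
    q = query_embedding / np.linalg.norm(query_embedding)
    im = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    sims = im @ q                      # one similarity score per image
    best = int(np.argmax(sims))        # index of the closest embedding
    return image_paths[best], float(sims[best])

# Toy example with 3 fake 4-dimensional image embeddings.
image_embeddings = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.7, 0.7, 0.0, 0.0],
])
image_paths = ["dog.jpg", "cat.jpg", "dog_and_cat.jpg"]

# A query embedding closest to the second image.
path, score = search(np.array([0.1, 0.9, 0.0, 0.0]), image_embeddings, image_paths)
print(path, round(score, 3))
```

In the real app the `image_embeddings` matrix would be built once offline by running every Flickr30k image through CLIP's image encoder, so each query only costs one text-encoder pass plus this matrix product.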