Aitrepreneur committed on
Commit a5868c6
1 Parent(s): bdb1b4e

Update README.md

Files changed (1)
  1. README.md +36 -13
README.md CHANGED
@@ -1,13 +1,36 @@
- ---
- title: FLUX Prompt Generator
- emoji: 😻
- colorFrom: blue
- colorTo: gray
- sdk: gradio
- sdk_version: 4.40.0
- app_file: app.py
- pinned: true
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: FLUX Prompt Generator
+ emoji: 😻
+ colorFrom: blue
+ colorTo: gray
+ sdk: gradio
+ sdk_version: 4.40.0
+ app_file: app.py
+ pinned: true
+ license: apache-2.0
+ ---
+
+ Windows installation (watch my YouTube video here for more info: )
+
+ git lfs install
+ git clone https://huggingface.co/Aitrepreneur/FLUX-Prompt-Generator
+
+ Inside the FLUX-Prompt-Generator folder, create a new virtual environment:
+ python -m venv env
+ env\Scripts\activate
+
+ Install the requirements:
+ pip install -r requirements.txt
+
+ Then run the application:
+ python app.py
+
+
+ Inside the app.py file, on line 337: self.groq_client = Groq(api_key="YOUR-GROQ-API-KEY")
+ Replace YOUR-GROQ-API-KEY with your Groq API key if you want to use Groq for the text generation (see the sketch below the diff).
+
+ IF YOU WANT TO USE EVERYTHING LOCALLY:
+
+ 1) Download and install Ollama: https://ollama.com/
+ 2) Once Ollama is running in the background, download the Llama 3 8B model by running this command in a new cmd window: ollama run llama3
+ 3) Once the Llama 3 8B model is downloaded, go to the FLUX Prompt Generator web UI, check the "Use Ollama (local)" checkbox, and you are good to go (a quick connectivity check is sketched below the diff).
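
For the Groq step above: instead of pasting the key directly into app.py, one option is to read it from an environment variable. This is a minimal sketch of the edited line, assuming the official groq Python package and a GROQ_API_KEY environment variable you set yourself:

import os
from groq import Groq

# Hypothetical variant of line 337 of app.py: pull the key from the
# GROQ_API_KEY environment variable instead of hard-coding it in the file.
groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])

Set the variable in the same cmd window before launching (set GROQ_API_KEY=your-key, then python app.py) so the key stays out of the source file.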
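For the local route, a quick way to confirm Ollama is reachable and the model is pulled before ticking the checkbox. This is an illustrative sketch against Ollama's default local endpoint (http://localhost:11434), using the requests package:

import requests

# Ollama serves a local HTTP API on port 11434 by default; /api/tags
# lists the models that have been pulled (llama3 should appear here).
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])

If the request fails or llama3 is missing from the list, start Ollama and re-run ollama run llama3 before using the "Use Ollama (local)" option.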