xmrt committed
Commit c227693
1 Parent(s): 11b00b8
Files changed (1)
  1. main.py +4 -5
main.py CHANGED
@@ -157,8 +157,7 @@ def pose2dhand(video, kpt_threshold):
     video = check_extension(video)
     print(device)
     # ultraltics
-    if torch.cuda.is_available():
-        device = "cuda"
+
     # Define new unique folder
     add_dir = str(uuid.uuid4())
     vis_out_dir = os.path.join("/".join(video.split("/")[:-1]), add_dir)
@@ -186,7 +185,7 @@ def run_UI():
         with gr.Column():
             video_input = gr.Video(source="upload", type="filepath", height=612)
             # Insert slider with kpt_thr
-            file_kpthr = gr.Slider(minimum=0.1, maximum=1, step=1e3, default=0.3, label='Keypoint threshold')
+            file_kpthr = gr.Slider(minimum=0.1, maximum=1, step=20, default=0.3, label='Keypoint threshold')
 
             submit_pose_file = gr.Button("Make 2d pose estimation")
             submit_pose3d_file = gr.Button("Make 3d pose estimation")
@@ -205,7 +204,7 @@ def run_UI():
         with gr.Column():
             webcam_input = gr.Video(source="webcam", height=612)
 
-            web_kpthr = gr.Slider(minimum=0.1, maximum=1, step=1e3, default=0.3, label='Keypoint threshold')
+            web_kpthr = gr.Slider(minimum=0.1, maximum=1, step=20, default=0.3, label='Keypoint threshold')
 
             submit_pose_web = gr.Button("Make 2d pose estimation")
             submit_pose3d_web = gr.Button("Make 3d pose estimation")
@@ -236,7 +235,7 @@ def run_UI():
     The 3D pose model is used for estimating the 3D coordinates of human body joints from an image or a video frame. The model uses a combination of 2D pose estimation and depth estimation to infer the 3D joint locations.
     All of these models are pre-trained on large datasets and can be fine-tuned on custom datasets for specific applications.
 
-    Ultralight detection and tracking model: The `track()` method in the Ultralight model is used for object tracking in videos. It takes a video file or a camera stream as input and returns the tracked objects in each frame. The method uses the COCO dataset classes for object detection and tracking. The COCO dataset contains 80 classes of objects such as person, car, bicycle, etc. The `track()` method uses the COCO classes to detect and track the objects in the video frames.
+    Ultralight detection and tracking model: The `track()` method in the Ultralight model is used for object tracking in videos. It takes a video file or a camera stream as input and returns the tracked objects in each frame. The method uses the COCO dataset classes for object detection and tracking. The COCO dataset contains 80 classes of objects such as person, car, bicycle, etc. See https://docs.ultralytics.com/datasets/detect/coco/ for all available classes. The `track()` method uses the COCO classes to detect and track the objects in the video frames.
     The tracked objects are represented as bounding boxes with labels indicating the class of the object. The Ultralight model is designed to be fast and efficient, making it suitable for real-time object tracking applications.""")
 
 
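For context on the slider changes above: a minimal, self-contained sketch of how a keypoint-threshold slider can be declared in a Gradio Blocks app. The `value=` keyword and the 0.05 step are illustrative assumptions for this sketch, not the parameters used in the commit.

```python
import gradio as gr

# Minimal sketch of a keypoint-threshold control, assuming a Gradio 3.x-style
# Blocks app (matching the source/type keywords used in the diff above).
with gr.Blocks() as demo:
    with gr.Column():
        video_input = gr.Video(source="upload", type="filepath", height=612)
        # Fractional step so the slider moves in small threshold increments.
        kpt_thr = gr.Slider(minimum=0.1, maximum=1.0, step=0.05, value=0.3,
                            label="Keypoint threshold")

if __name__ == "__main__":
    demo.launch()
```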
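The markdown in the last hunk describes a `track()` method. A minimal sketch of the Ultralytics tracking API it appears to refer to, assuming "Ultralight" means an Ultralytics YOLO model (as the `# ultraltics` comment in the first hunk suggests); the weights file and video path are placeholders.

```python
from ultralytics import YOLO

# Small COCO-pretrained detector; any Ultralytics detection weights would do.
model = YOLO("yolov8n.pt")

# Track objects in a video file (a camera index or stream URL also works).
# classes=[0] restricts detection/tracking to COCO class 0 ("person").
results = model.track(source="input_video.mp4", classes=[0])

for frame_result in results:
    # Each result corresponds to one frame and exposes the tracked bounding
    # boxes with their COCO class labels and track IDs.
    print(frame_result.boxes.xyxy, frame_result.boxes.id)
```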