modes/train/ #8075
Replies: 75 comments 193 replies
-
How do I print IoU and F-score with the training results?
-
How can we save sample labels and predictions on the validation set during training? I remember this being easy in YOLOv5, but I haven't been able to figure it out with YOLOv8.
-
If I am not mistaken, the logs shown during training also contain box (P, R, mAP@0.5, mAP@0.5:0.95) and mask (P, R, mAP@0.5, mAP@0.5:0.95) metrics for the validation set at each epoch. Why, then, do I get worse metrics when running model.val() with best.pt? From the training and validation curves it is clear the model is overfitting on the segmentation task, but that is a separate issue. Can you please help me with this?
-
So, imgsz works differently when training than when predicting? For train: if it's an Is this right?
-
Hi all, I have a segmentation model trained on custom data with a single class, but the last several training runs show a trend toward overfitting. I tried adding more data to the training set, which reduced box_loss and cls_loss on val, but dfl_loss is increasing. Are there any suggestions for tuning the model? Thanks a lot.
-
I have a question about training the segmentation model. My dataset contains objects that occlude each other, such that the top object splits the segmentation mask of the bottom object into two separate parts. As far as I can see, the coordinates of each point are listed sequentially in the label file. If I append the points of the two mask parts one after the other under the same object's coordinates, will that solve the problem?
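For context, a YOLO segmentation label line is `class x1 y1 x2 y2 ...` with coordinates normalized to [0, 1]. A minimal sketch of what concatenating the two mask parts into one line would look like (the coordinates below are invented for illustration):

```python
def make_label_line(class_id, polygons):
    """Join one or more polygon parts into a single YOLO-style label line.

    Concatenating the parts of a split mask into one line keeps them a
    single instance, but the implicit edge connecting the last point of
    one part to the first point of the next may cut across the occluding
    object unless the point order is chosen carefully.
    """
    coords = [c for poly in polygons for point in poly for c in point]
    return " ".join([str(class_id)] + [f"{c:.6f}" for c in coords])

part_a = [(0.10, 0.20), (0.30, 0.20), (0.30, 0.40)]  # visible piece above the occluder
part_b = [(0.10, 0.60), (0.30, 0.60), (0.30, 0.80)]  # visible piece below it
line = make_label_line(0, [part_a, part_b])
print(line)
```

Whether the concatenated polygon renders as intended depends on the point order, so plotting the reparsed polygon over the image before training is a cheap sanity check.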
-
Hello there!
-
Hello, I am working on a project for Android devices. The GPU and CPU of my device are weak. Will training be faster if I set imgsz to 320? Or what would you recommend? What happens if the imgsz parameter is 640 for training and 320 for prediction? And what changes if imgsz is 320 for both training and prediction? Sorry for my English. Note: I converted the model to TFLite. Thanks, you are amazing.
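On the imgsz question: inference cost scales roughly with the number of input pixels, so a smaller imgsz is generally faster, and matching the train and predict imgsz usually gives the best accuracy. A simplified sketch (not the actual Ultralytics code) of the aspect-ratio-preserving "letterbox" resize that determines the network input size:

```python
import math

def letterbox_shape(h, w, imgsz=640, stride=32):
    """Return the padded (height, width) an image is resized to.

    Simplified: scale the long side down to imgsz, then pad each
    dimension up to a multiple of the model stride.
    """
    scale = min(imgsz / h, imgsz / w)
    new_h, new_w = round(h * scale), round(w * scale)
    pad_h = math.ceil(new_h / stride) * stride
    pad_w = math.ceil(new_w / stride) * stride
    return pad_h, pad_w

print(letterbox_shape(1080, 1920, imgsz=320))  # → (192, 320) for a 1080×1920 frame
```

Under this sketch, dropping imgsz from 640 to 320 quarters the pixel count fed to the network, which is where most of the speedup on a weak CPU/GPU would come from.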
-
I've come to rely on YOLOv8 in my daily work; it's remarkably user-friendly. Thank you to the Ultralytics team for your excellent work on these models! I'm currently tackling a project focused on detecting minor defects on automobile engine parts. Since the defects will be small objects in a given frame, could you offer guidance on training arguments or techniques that might improve performance for this type of data? I'm also interested in exploring attention mechanisms to enhance model performance, but I'd appreciate help understanding how to implement them. Special appreciation to the Ultralytics team.
-
Running the provided example led me to this Stack Overflow question: https://stackoverflow.com/q/75111196/815507 There are solutions on Stack Overflow. I wonder if you could help and update the guide to provide the best resolution?
-
We need to disable blur augmentation. I filed an issue, and Glenn suggested using blur=0, but it is not a valid argument. #8824
-
How can I train YOLOv8 with my custom dataset?
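Training on a custom dataset needs a dataset YAML plus images and labels in the YOLO layout. A minimal sketch, with hypothetical paths and class names:

```yaml
# data.yaml — paths and class names below are placeholders
path: datasets/my_dataset   # dataset root
train: images/train         # training images (relative to 'path')
val: images/val             # validation images
names:
  0: cat
  1: dog
```

Each image then needs a matching label .txt file under a parallel labels/ directory, and training is started with something like `yolo detect train data=data.yaml model=yolov8n.pt epochs=100 imgsz=640`, per the docs page linked in this discussion.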
-
Hey, I was trying out training a custom object detection model using a pretrained YOLOv8 model.
0% 0/250 [00:00<?, ?it/s]
-
Hi! I'm working on a project where I plan to use YOLOv8 as the backbone for object detection, but I need a more hands-on approach during the training phase. How do I train the model manually: looping through epochs, performing forward propagation, calculating the loss, backpropagating, and updating weights? At the moment model.train() seems to handle all of this automatically in the background. The end goal is knowledge distillation, but to start I need access to these steps. I haven't been able to find any examples of YOLOv8 being used in this way; some code and tips would be helpful.
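For reference, here is a schematic of the manual loop being asked about: epochs → forward → loss → backward → update. This is not the Ultralytics internals (the underlying nn.Module is reachable as `model.model`, but the loss assembly lives in the trainer classes); the model here is a toy 1-D linear fit with hand-derived gradients so the sketch runs anywhere.

```python
# Toy dataset: targets generated from y = 2x + 1, so the loop should
# recover w ≈ 2 and b ≈ 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01

for epoch in range(200):
    for x, y in data:
        pred = w * x + b                        # forward pass
        err = pred - y                          # squared-error loss is err**2
        grad_w, grad_b = 2 * err * x, 2 * err   # backprop, derived by hand
        w -= lr * grad_w                        # plain SGD update
        b -= lr * grad_b

print(round(w, 2), round(b, 2))
```

With PyTorch you would replace the hand gradients with `loss.backward()` and an `optimizer.step()`; for YOLOv8 specifically, expect to read the Ultralytics trainer source to see how the detection loss is built from the raw head outputs.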
-
I'm trying to understand the concept of training. I would like to extend the default classes with helmet, gloves, etc.
Thanks in advance
-
It would be super helpful to have a link here to resources describing what
-
Good afternoon. Please advise which arguments to use when training the segmentation model for the most accurate results (so that the segmentation mask is as close as possible to how I labeled the objects).
-
Dear YOLO team, I am currently working on a custom YOLO model for a specific object detection task, and I would like to improve its accuracy and performance by fine-tuning the model on my own dataset. I am aware that one way to achieve this is to freeze some of the layers in the model so that they are not updated during fine-tuning, and to focus on modifying the layers that have the most impact on my specific task. Could you please advise me on how I can view the layers in my custom YOLO model that are suitable for freezing during fine-tuning? Are there any tools, libraries, or methods that you would recommend for this purpose? I would greatly appreciate any help or guidance that you can provide. Thank you for your time.
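On inspecting and freezing layers: in plain PyTorch you can list `named_parameters()` and turn gradients off by name prefix. A sketch on a hypothetical stand-in network (with a real checkpoint you would iterate the loaded model's nn.Module the same way; recent Ultralytics versions also expose a `freeze` training argument):

```python
import torch.nn as nn

# Stand-in model for illustration only — substitute your loaded YOLO
# model's nn.Module to see the real layer names.
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 16, 3), nn.Conv2d(16, 32, 3)),  # "backbone"
    nn.Conv2d(32, 5, 1),                                        # "head"
)

frozen = []
for name, param in model.named_parameters():
    if name.startswith("0."):        # freeze everything under the first block
        param.requires_grad = False  # excluded from gradient updates
        frozen.append(name)

print(frozen)  # the frozen backbone parameter names
```

Printing the parameter names this way is also a quick method for deciding which prefixes (early backbone blocks vs. the head) make sense to freeze for a given task.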
-
I trained YOLOv8 on a dataset and it was working well, so I used that best.pt to annotate additional images for the project, and the results are good. Now I want to train the model with the updated images, but I am encountering some errors. Please help me with this.
-
Hello developers, when a training run with epochs=300 is finished, I want to increase the training length, e.g. to epochs=600, that is, train 300 extra epochs without a new project appearing. How can I do this?
-
Hi, I want to change the directory where my output is saved using the save_dir argument in the train command. However, it still uses the default runs folder in the same location as my script. How can I fix this?
-
Dear Team:
-
Hey, I am new to object detection. I trained a YOLOv8 baseline model, and now I am trying to modify the backbone and the neck with different attention modules to compare their performance. I can see that YOLOv8 has CBAM. How do I integrate this into YOLO, and which files do I need to adjust? How would I also integrate other attention modules not found in YOLOv8? In addition, I want to add multi-head attention in the last layer of the backbone. If you can guide me on how to go about it, I would really appreciate it. Thanks.
-
Hi there, I'm new to this field and I'm trying to train a YOLO model for detection and tracking on video streams for security. So far I have managed to create my training pipeline, and I want to check whether it is a correct/possible approach:
The questions are several:
Here is the code of the pipeline:

```python
import yaml
from ultralytics import YOLO

# load pretrained model from coco dataset
model: YOLO = YOLO('yolov8n.yaml').load('yolov8n.pt')

# NOTE: if you're using COCO format, the new dataset should also be in COCO format.
train_classes = []

# operate transfer learning with my custom dataset, narrowing the classes to the ones I want to detect
model.train(data='custom_data.yaml', classes=train_classes, epochs=200, imgsz=640,
            optimizer='AdamW', device=[0, 1], augment=True, name='custom_model')

# load custom model
custom_model = YOLO('path/to/best.pt')

# validate custom model
custom_model.val()

# try tuning to find the best hyperparameters for my custom model
custom_model.tune(data='custom_data.yaml', epochs=50, iterations=100,
                  optimizer="AdamW", plots=True, save=True, val=True)

# load best hyperparameters
cfg = yaml.safe_load(open('path/to/best_hyperparameters.yaml'))

# retrain custom model from pretrained on coco with best hyperparameters
model.train(data='custom_data.yaml', classes=train_classes, epochs=200, imgsz=640,
            device=[0, 1], name='custom_model', exist_ok=True, optimizer='AdamW', **cfg)

# load and validate
tuned_custom_model = YOLO('path/to/best.pt')
tuned_custom_model.val()
```

Thank you for your effort and time on this project!
-
Hey, how do you change the default Albumentations settings?
-
train: Scanning /content/drive/MyDrive/New_data/v5/train/labels... 220 images, 0 backgrounds, 0 corrupt: 100% 220/220 [02:06<00:00, 1.74it/s] I got these warnings during training. What might be the issue?
-
Hello, I'm new to YOLOv8. I trained a model to detect a laser pointer's light spot and a projector's screen; the screen is a clean white home wall. There are about 400 images at different room brightnesses, angles, and screen contents, labeled with X-AnyLabeling. Every image has two labels, corresponding to two classes, Screen and Laser. The Screen class is the projector's screen cast on the wall, marked by a polygon, and the Laser class is the red light spot, marked by a rectangle. After training for 1000 epochs, the results are here: Below are two inferences: From the above results I cannot see anything wrong, but the inferred images show that the detection for the Laser label is wrong: one corner of the big bounding box sits at the center of the laser pointer's light spot, and the box does not surround it. The Screen bounding box surrounds the screen well. I checked the label txt exported by X-AnyLabeling and it is correct. Here is one: This is my training code: Could you help me figure out the reason and how to correct the wrong bounding box for the light spot?
-
I have pretrained weights with a ViT architecture; now I want to fine-tune on another dataset, but with a YOLOv8 model.
-
Traceback (most recent call last): Facing the above error while trying to train:
-
I trained a yolov8x model on a custom dataset (with train, test, and validation splits), and then tested it on the test split of the custom dataset to get mAP as a metric. I used the CLI for both of these steps. Does the YOLOv8 code do cross-validation internally during either step?
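To the cross-validation question: as far as the standard workflow goes, a train or val call uses the splits named in the dataset YAML exactly once; there is no built-in cross-validation. To cross-validate you would rebuild the splits yourself, e.g. a simple K-fold over the image list (file names below are hypothetical):

```python
def kfold(items, k):
    """Yield (train, val) splits; each item is held out exactly once."""
    for i in range(k):
        val = items[i::k]                          # every k-th item, offset i
        train = [x for x in items if x not in val]
        yield train, val

images = [f"img_{i:03d}.jpg" for i in range(10)]
for fold, (train, val) in enumerate(kfold(images, k=5)):
    print(fold, len(train), len(val))  # each fold: 8 train, 2 val
```

In practice each fold's file lists would be written out and referenced from a per-fold dataset YAML before launching a separate training run, then the per-fold mAP values averaged.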
-
Step-by-step guide to train YOLOv8 models with Ultralytics YOLO including examples of single-GPU and multi-GPU training
https://docs.ultralytics.com/modes/train/