Self-supervised learning loss doesn't come down #7

@Elody-07

Description

Hi Wan,

Thanks for your great work. I am running your code with command

```
python3 network/run_engine.py --initial_model ./pretrained/synthetic.pth --mode Train --model_dir ./output --tag self-supervised --lr 0.0001
```

I've noticed this triggers joint training on the real and synthetic datasets. The problem is that during each epoch the metric keeps going back and forth and basically stays above 40 mm. Here's an example from the training log:

```
[12-2900]: metric: avg_joint_error: 60.2542 , loss: synt_uv: 15.4588 synt_d: 26.9756 mv_projection: 9740.4551 mv_consistency: 0.4032 uv_hm_mean: 0.0002 pose_prior: 0.6047 collision: 4.4668 bone_length: 122.4030 domain_loss: 0.0000 , lr: 0.0001, time: 39.56s
```
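To check whether `avg_joint_error` is actually oscillating rather than trending down, here is a quick sketch that extracts the metric from log lines shaped like the one above. `parse_joint_errors` is a hypothetical helper I wrote for this issue, not part of the repo:

```python
import re

# Hypothetical helper: pull avg_joint_error values out of training log lines
# formatted like the example above, so the trend across epochs can be plotted.
_METRIC_RE = re.compile(r"avg_joint_error:\s*([0-9.]+)")

def parse_joint_errors(log_lines):
    errors = []
    for line in log_lines:
        m = _METRIC_RE.search(line)
        if m:
            errors.append(float(m.group(1)))
    return errors

log = [
    "[12-2900]: metric: avg_joint_error: 60.2542 , loss: synt_uv: 15.4588",
]
print(parse_joint_errors(log))  # [60.2542]
```

Running this over the full log and plotting the list would show whether the error has any downward trend at all or is just bouncing around 40-60 mm.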

Did I do something wrong or miss anything? It would help a lot if you could help me figure it out!

Best wishes
