2023-03-15 21:33:38,558 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 8, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 0}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'luna': 0}, 'model_dir': './logs\\44k'}
2023-03-15 21:33:40,067 44k INFO emb_g.weight is not in the checkpoint
2023-03-15 21:33:40,121 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 0)
2023-03-15 21:33:40,213 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 0)
2023-03-15 21:33:57,204 44k INFO Train Epoch: 1 [0%]
2023-03-15 21:33:57,205 44k INFO Losses: [2.7630879878997803, 2.4202029705047607, 11.983572006225586, 32.31467819213867, 4.623593330383301], step: 0, lr: 0.0001
2023-03-15 21:34:02,978 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-15 21:34:07,747 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
2023-03-15 21:34:36,026 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 0}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'luna': 0}, 'model_dir': './logs\\44k'}
2023-03-15 21:34:37,990 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1)
2023-03-15 21:34:38,344 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
2023-03-15 21:34:54,224 44k INFO Train Epoch: 1 [0%]
2023-03-15 21:34:54,225 44k INFO Losses: [2.359994888305664, 2.7599587440490723, 8.258697509765625, 24.2158145904541, 3.0630788803100586], step: 0, lr: 0.0001
2023-03-15 21:34:59,840 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-15 21:35:04,076 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
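Note on the lr values that follow: the per-epoch learning rate is plain exponential decay driven by the 'learning_rate' and 'lr_decay' entries in the config above. A minimal sketch of that schedule in Python (the helper name is hypothetical; it just reproduces the numbers the trainer logs):

learning_rate = 1e-4  # 'learning_rate' in the 'train' config above
lr_decay = 0.999875   # 'lr_decay' in the 'train' config above

def lr_at_epoch(epoch):
    # epoch is 1-based, matching "Train Epoch: N" below
    return learning_rate * lr_decay ** (epoch - 1)

print(lr_at_epoch(2))  # 9.99875e-05, the lr logged during epoch 2
print(lr_at_epoch(3))  # 9.99750015625e-05, the lr logged during epoch 3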
2023-03-15 21:37:33,840 44k INFO Train Epoch: 1 [23%]
2023-03-15 21:37:33,840 44k INFO Losses: [2.8343312740325928, 2.141096830368042, 8.479353904724121, 17.959815979003906, 1.5620585680007935], step: 200, lr: 0.0001
2023-03-15 21:40:00,294 44k INFO Train Epoch: 1 [46%]
2023-03-15 21:40:00,294 44k INFO Losses: [2.2941248416900635, 2.4668307304382324, 10.759581565856934, 19.119029998779297, 1.2855116128921509], step: 400, lr: 0.0001
2023-03-15 21:42:26,131 44k INFO Train Epoch: 1 [70%]
2023-03-15 21:42:26,132 44k INFO Losses: [2.7787556648254395, 2.0030388832092285, 9.975791931152344, 18.47311019897461, 1.449719786643982], step: 600, lr: 0.0001
2023-03-15 21:44:52,044 44k INFO Train Epoch: 1 [93%]
2023-03-15 21:44:52,044 44k INFO Losses: [2.6186447143554688, 2.0991873741149902, 10.390167236328125, 20.75912094116211, 1.756481409072876], step: 800, lr: 0.0001
2023-03-15 21:44:55,997 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_800.pth
2023-03-15 21:45:01,070 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_800.pth
2023-03-15 21:45:49,825 44k INFO ====> Epoch: 1, cost 673.80 s
2023-03-15 21:47:40,073 44k INFO Train Epoch: 2 [16%]
2023-03-15 21:47:40,074 44k INFO Losses: [2.4866132736206055, 2.143646717071533, 13.897932052612305, 19.377948760986328, 1.6471227407455444], step: 1000, lr: 9.99875e-05
2023-03-15 21:50:01,056 44k INFO Train Epoch: 2 [39%]
2023-03-15 21:50:01,056 44k INFO Losses: [2.5471744537353516, 2.042377471923828, 12.199019432067871, 21.59926414489746, 1.2530696392059326], step: 1200, lr: 9.99875e-05
2023-03-15 21:52:22,092 44k INFO Train Epoch: 2 [62%]
2023-03-15 21:52:22,093 44k INFO Losses: [2.414365530014038, 2.257920503616333, 12.222509384155273, 21.440336227416992, 1.3348948955535889], step: 1400, lr: 9.99875e-05
2023-03-15 21:54:43,208 44k INFO Train Epoch: 2 [86%]
2023-03-15 21:54:43,208 44k INFO Losses: [2.572298526763916, 2.2890148162841797, 13.738212585449219, 18.487319946289062, 1.4824713468551636], step: 1600, lr: 9.99875e-05
2023-03-15 21:54:47,170 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\G_1600.pth
2023-03-15 21:54:52,124 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\D_1600.pth
2023-03-15 21:56:22,139 44k INFO ====> Epoch: 2, cost 632.31 s
2023-03-15 21:57:28,099 44k INFO Train Epoch: 3 [9%]
2023-03-15 21:57:28,100 44k INFO Losses: [2.808074951171875, 2.1457250118255615, 6.578037738800049, 17.48270606994629, 1.44097900390625], step: 1800, lr: 9.99750015625e-05
2023-03-15 21:59:49,436 44k INFO Train Epoch: 3 [32%]
2023-03-15 21:59:49,437 44k INFO Losses: [2.515749216079712, 2.3951027393341064, 8.448807716369629, 17.62101936340332, 1.5340524911880493], step: 2000, lr: 9.99750015625e-05
2023-03-15 22:02:10,856 44k INFO Train Epoch: 3 [55%]
2023-03-15 22:02:10,856 44k INFO Losses: [2.2999343872070312, 2.6952545642852783, 8.933745384216309, 16.092304229736328, 1.2824602127075195], step: 2200, lr: 9.99750015625e-05
2023-03-15 22:04:32,520 44k INFO Train Epoch: 3 [78%]
2023-03-15 22:04:32,520 44k INFO Losses: [2.668524742126465, 2.2720837593078613, 10.21893310546875, 20.029315948486328, 0.9141016602516174], step: 2400, lr: 9.99750015625e-05
2023-03-15 22:04:36,413 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_2400.pth
2023-03-15 22:04:41,042 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\D_2400.pth
2023-03-15 22:06:54,764 44k INFO ====> Epoch: 3, cost 632.63 s
2023-03-15 22:07:17,059 44k INFO Train Epoch: 4 [2%]
2023-03-15 22:07:17,060 44k INFO Losses: [2.6755104064941406, 1.8500185012817383, 7.0456223487854, 15.10184097290039, 1.375386118888855], step: 2600, lr: 9.996250468730469e-05
2023-03-15 22:09:38,482 44k INFO Train Epoch: 4 [25%]
2023-03-15 22:09:38,483 44k INFO Losses: [2.4754865169525146, 2.0182747840881348, 8.242473602294922, 20.84014892578125, 1.6214202642440796], step: 2800, lr: 9.996250468730469e-05
2023-03-15 22:11:59,909 44k INFO Train Epoch: 4 [48%]
2023-03-15 22:11:59,909 44k INFO Losses: [2.483423948287964, 2.320873975753784, 10.137140274047852, 21.496545791625977, 1.4179471731185913], step: 3000, lr: 9.996250468730469e-05
2023-03-15 22:14:22,053 44k INFO Train Epoch: 4 [71%]
2023-03-15 22:14:22,054 44k INFO Losses: [2.423818826675415, 2.652487277984619, 6.957798480987549, 17.808197021484375, 1.3406299352645874], step: 3200, lr: 9.996250468730469e-05
2023-03-15 22:14:25,976 44k INFO Saving model and optimizer state at iteration 4 to ./logs\44k\G_3200.pth
2023-03-15 22:14:30,633 44k INFO Saving model and optimizer state at iteration 4 to ./logs\44k\D_3200.pth
2023-03-15 22:16:55,975 44k INFO Train Epoch: 4 [94%]
2023-03-15 22:16:55,976 44k INFO Losses: [2.750946521759033, 2.093818187713623, 6.8611650466918945, 20.23285675048828, 1.1110515594482422], step: 3400, lr: 9.996250468730469e-05
2023-03-15 22:17:29,808 44k INFO ====> Epoch: 4, cost 635.04 s
2023-03-15 22:19:30,282 44k INFO Train Epoch: 5 [18%]
2023-03-15 22:19:30,282 44k INFO Losses: [2.2964367866516113, 2.7428600788116455, 11.233068466186523, 16.619295120239258, 1.5903334617614746], step: 3600, lr: 9.995000937421877e-05
2023-03-15 22:21:52,924 44k INFO Train Epoch: 5 [41%]
2023-03-15 22:21:52,925 44k INFO Losses: [2.281683921813965, 2.3244380950927734, 13.21329116821289, 18.938825607299805, 1.5910505056381226], step: 3800, lr: 9.995000937421877e-05
2023-03-15 22:24:16,373 44k INFO Train Epoch: 5 [64%]
2023-03-15 22:24:16,373 44k INFO Losses: [2.7741613388061523, 2.2883241176605225, 11.48084831237793, 21.464269638061523, 1.0414572954177856], step: 4000, lr: 9.995000937421877e-05
2023-03-15 22:24:20,405 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_4000.pth
2023-03-15 22:24:25,771 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_4000.pth
2023-03-15 22:26:51,222 44k INFO Train Epoch: 5 [87%]
2023-03-15 22:26:51,222 44k INFO Losses: [2.634819746017456, 2.284086227416992, 9.26298999786377, 16.560056686401367, 1.5715028047561646], step: 4200, lr: 9.995000937421877e-05
2023-03-15 22:28:08,993 44k INFO ====> Epoch: 5, cost 639.18 s
2023-03-15 22:29:25,343 44k INFO Train Epoch: 6 [10%]
2023-03-15 22:29:25,344 44k INFO Losses: [2.560356378555298, 2.234076499938965, 7.0815958976745605, 16.43915557861328, 1.1255873441696167], step: 4400, lr: 9.993751562304699e-05
2023-03-15 22:31:47,935 44k INFO Train Epoch: 6 [34%]
2023-03-15 22:31:47,936 44k INFO Losses: [2.522918939590454, 1.941066026687622, 12.101066589355469, 22.177255630493164, 1.4737335443496704], step: 4600, lr: 9.993751562304699e-05
2023-03-15 22:34:10,307 44k INFO Train Epoch: 6 [57%]
2023-03-15 22:34:10,307 44k INFO Losses: [2.1711645126342773, 2.521559238433838, 6.719629764556885, 15.985298156738281, 1.3072476387023926], step: 4800, lr: 9.993751562304699e-05
2023-03-15 22:34:14,319 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\G_4800.pth
2023-03-15 22:34:18,514 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\D_4800.pth
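The G_*/D_* pairs above land every 'eval_interval' (800) global steps and are named by global step; with 'keep_ckpts' at 0 (which this trainer appears to treat as "keep everything"), old pairs are never pruned. A hypothetical helper illustrating the naming scheme:

def checkpoint_pair(global_step, eval_interval=800):
    # Saves fall on multiples of eval_interval: G_800.pth/D_800.pth, G_1600.pth/D_1600.pth, ...
    assert global_step % eval_interval == 0
    return f"G_{global_step}.pth", f"D_{global_step}.pth"

print(checkpoint_pair(1600))  # ('G_1600.pth', 'D_1600.pth')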
2023-03-15 22:36:43,781 44k INFO Train Epoch: 6 [80%]
2023-03-15 22:36:43,781 44k INFO Losses: [2.631204843521118, 2.241793632507324, 10.147503852844238, 19.555187225341797, 1.097834825515747], step: 5000, lr: 9.993751562304699e-05
2023-03-15 22:38:45,776 44k INFO ====> Epoch: 6, cost 636.78 s
2023-03-15 22:39:18,033 44k INFO Train Epoch: 7 [3%]
2023-03-15 22:39:18,033 44k INFO Losses: [2.6141042709350586, 2.094193935394287, 8.699785232543945, 17.717557907104492, 1.2766138315200806], step: 5200, lr: 9.99250234335941e-05
2023-03-15 22:41:40,697 44k INFO Train Epoch: 7 [26%]
2023-03-15 22:41:40,698 44k INFO Losses: [2.3711929321289062, 2.81101655960083, 7.681107997894287, 15.111063003540039, 1.1922814846038818], step: 5400, lr: 9.99250234335941e-05
2023-03-15 22:44:03,063 44k INFO Train Epoch: 7 [50%]
2023-03-15 22:44:03,063 44k INFO Losses: [2.826953172683716, 2.373408794403076, 9.941184043884277, 20.299964904785156, 1.239027976989746], step: 5600, lr: 9.99250234335941e-05
2023-03-15 22:44:07,065 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_5600.pth
2023-03-15 22:44:12,013 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_5600.pth
2023-03-15 22:46:39,975 44k INFO Train Epoch: 7 [73%]
2023-03-15 22:46:39,975 44k INFO Losses: [2.6116786003112793, 2.282010316848755, 8.302881240844727, 20.26808738708496, 1.2276978492736816], step: 5800, lr: 9.99250234335941e-05
2023-03-15 22:49:02,403 44k INFO Train Epoch: 7 [96%]
2023-03-15 22:49:02,404 44k INFO Losses: [2.379059076309204, 2.2208218574523926, 11.726834297180176, 20.848173141479492, 1.390799880027771], step: 6000, lr: 9.99250234335941e-05
2023-03-15 22:49:26,305 44k INFO ====> Epoch: 7, cost 640.53 s
2023-03-15 22:51:37,012 44k INFO Train Epoch: 8 [19%]
2023-03-15 22:51:37,013 44k INFO Losses: [2.8753011226654053, 2.083233594894409, 6.428502082824707, 15.448431015014648, 1.1728848218917847], step: 6200, lr: 9.991253280566489e-05
2023-03-15 22:53:59,290 44k INFO Train Epoch: 8 [42%]
2023-03-15 22:53:59,290 44k INFO Losses: [2.7437286376953125, 2.027693033218384, 6.247323989868164, 14.471820831298828, 1.1278842687606812], step: 6400, lr: 9.991253280566489e-05
2023-03-15 22:54:03,314 44k INFO Saving model and optimizer state at iteration 8 to ./logs\44k\G_6400.pth
2023-03-15 22:54:08,304 44k INFO Saving model and optimizer state at iteration 8 to ./logs\44k\D_6400.pth
2023-03-15 22:56:34,081 44k INFO Train Epoch: 8 [66%]
2023-03-15 22:56:34,081 44k INFO Losses: [2.522012233734131, 2.4174132347106934, 9.828531265258789, 19.096431732177734, 0.919116735458374], step: 6600, lr: 9.991253280566489e-05
2023-03-15 22:58:56,716 44k INFO Train Epoch: 8 [89%]
2023-03-15 22:58:56,716 44k INFO Losses: [2.435863971710205, 2.2997751235961914, 12.224359512329102, 17.535295486450195, 1.3483388423919678], step: 6800, lr: 9.991253280566489e-05
2023-03-15 23:00:04,585 44k INFO ====> Epoch: 8, cost 638.28 s
2023-03-15 23:01:31,205 44k INFO Train Epoch: 9 [12%]
2023-03-15 23:01:31,205 44k INFO Losses: [2.793332576751709, 1.8760831356048584, 7.035763740539551, 14.893486022949219, 1.5130894184112549], step: 7000, lr: 9.990004373906418e-05
2023-03-15 23:03:53,789 44k INFO Train Epoch: 9 [35%]
2023-03-15 23:03:53,790 44k INFO Losses: [2.488835334777832, 2.4769461154937744, 10.045790672302246, 17.507667541503906, 1.1089245080947876], step: 7200, lr: 9.990004373906418e-05
2023-03-15 23:03:57,767 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_7200.pth
2023-03-15 23:04:02,779 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_7200.pth
2023-03-15 23:06:28,051 44k INFO Train Epoch: 9 [58%]
2023-03-15 23:06:28,051 44k INFO Losses: [2.560542106628418, 2.037553310394287, 8.532022476196289, 18.587997436523438, 1.3075299263000488], step: 7400, lr: 9.990004373906418e-05
2023-03-15 23:08:50,780 44k INFO Train Epoch: 9 [82%]
2023-03-15 23:08:50,780 44k INFO Losses: [2.1858620643615723, 2.463200807571411, 12.25607681274414, 20.776758193969727, 1.52699613571167], step: 7600, lr: 9.990004373906418e-05
2023-03-15 23:10:42,717 44k INFO ====> Epoch: 9, cost 638.13 s
2023-03-15 23:11:24,927 44k INFO Train Epoch: 10 [5%]
2023-03-15 23:11:24,927 44k INFO Losses: [2.767472743988037, 2.3794407844543457, 9.016433715820312, 16.121545791625977, 1.3916562795639038], step: 7800, lr: 9.98875562335968e-05
2023-03-15 23:13:47,292 44k INFO Train Epoch: 10 [28%]
2023-03-15 23:13:47,293 44k INFO Losses: [2.521395444869995, 2.4054670333862305, 8.744661331176758, 17.487924575805664, 1.3928142786026], step: 8000, lr: 9.98875562335968e-05
2023-03-15 23:13:51,234 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_8000.pth
2023-03-15 23:13:55,647 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_8000.pth
2023-03-15 23:16:20,766 44k INFO Train Epoch: 10 [51%]
2023-03-15 23:16:20,766 44k INFO Losses: [2.455172300338745, 1.9186352491378784, 13.374556541442871, 19.81981086730957, 1.245939016342163], step: 8200, lr: 9.98875562335968e-05
2023-03-15 23:18:43,439 44k INFO Train Epoch: 10 [74%]
2023-03-15 23:18:43,440 44k INFO Losses: [2.4717278480529785, 2.3420374393463135, 9.890249252319336, 17.06365966796875, 1.4003887176513672], step: 8400, lr: 9.98875562335968e-05
2023-03-15 23:21:06,344 44k INFO Train Epoch: 10 [98%]
2023-03-15 23:21:06,344 44k INFO Losses: [2.858380079269409, 1.783084750175476, 7.207481861114502, 11.209378242492676, 1.386827826499939], step: 8600, lr: 9.98875562335968e-05
2023-03-15 23:21:20,231 44k INFO ====> Epoch: 10, cost 637.51 s
2023-03-15 23:23:40,670 44k INFO Train Epoch: 11 [21%]
2023-03-15 23:23:40,671 44k INFO Losses: [2.6278300285339355, 1.9125964641571045, 4.9928975105285645, 13.856396675109863, 1.1418709754943848], step: 8800, lr: 9.987507028906759e-05
2023-03-15 23:23:44,580 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_8800.pth
2023-03-15 23:23:49,502 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_8800.pth
2023-03-15 23:26:18,551 44k INFO Train Epoch: 11 [44%]
2023-03-15 23:26:18,551 44k INFO Losses: [2.885807514190674, 1.8871378898620605, 10.606431007385254, 20.48639488220215, 1.681467056274414], step: 9000, lr: 9.987507028906759e-05
2023-03-15 23:28:55,954 44k INFO Train Epoch: 11 [67%]
2023-03-15 23:28:55,955 44k INFO Losses: [2.620832920074463, 2.0641565322875977, 6.704383850097656, 14.286115646362305, 1.231693148612976], step: 9200, lr: 9.987507028906759e-05
2023-03-15 23:31:30,454 44k INFO Train Epoch: 11 [90%]
2023-03-15 23:31:30,455 44k INFO Losses: [2.6368300914764404, 2.1023027896881104, 11.062491416931152, 20.047260284423828, 1.1818411350250244], step: 9400, lr: 9.987507028906759e-05
2023-03-15 23:32:33,978 44k INFO ====> Epoch: 11, cost 673.75 s
2023-03-15 23:34:31,352 44k INFO Train Epoch: 12 [14%]
2023-03-15 23:34:31,353 44k INFO Losses: [2.435600757598877, 2.1602511405944824, 12.35584831237793, 17.341718673706055, 1.0493886470794678], step: 9600, lr: 9.986258590528146e-05
2023-03-15 23:34:35,541 44k INFO Saving model and optimizer state at iteration 12 to ./logs\44k\G_9600.pth
2023-03-15 23:34:40,055 44k INFO Saving model and optimizer state at iteration 12 to ./logs\44k\D_9600.pth
2023-03-15 23:37:22,134 44k INFO Train Epoch: 12 [37%]
2023-03-15 23:37:22,134 44k INFO Losses: [2.5114614963531494, 2.4217467308044434, 11.515277862548828, 20.152433395385742, 1.5467609167099], step: 9800, lr: 9.986258590528146e-05
2023-03-15 23:40:41,600 44k INFO Train Epoch: 12 [60%]
2023-03-15 23:40:41,601 44k INFO Losses: [2.6761112213134766, 1.9438706636428833, 6.033017635345459, 15.303173065185547, 0.8633329272270203], step: 10000, lr: 9.986258590528146e-05
2023-03-15 23:43:18,519 44k INFO Train Epoch: 12 [83%]
2023-03-15 23:43:18,520 44k INFO Losses: [2.7175655364990234, 2.122885227203369, 9.944016456604004, 16.213428497314453, 1.2758935689926147], step: 10200, lr: 9.986258590528146e-05
2023-03-15 23:45:02,941 44k INFO ====> Epoch: 12, cost 748.96 s
2023-03-15 23:45:54,598 44k INFO Train Epoch: 13 [6%]
2023-03-15 23:45:54,598 44k INFO Losses: [2.4771299362182617, 2.5106639862060547, 12.602730751037598, 18.86856460571289, 1.2382102012634277], step: 10400, lr: 9.98501030820433e-05
2023-03-15 23:45:58,534 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_10400.pth
2023-03-15 23:46:02,820 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_10400.pth
2023-03-15 23:48:27,187 44k INFO Train Epoch: 13 [30%]
2023-03-15 23:48:27,187 44k INFO Losses: [2.7256357669830322, 2.1349029541015625, 7.904260158538818, 19.846895217895508, 1.0586509704589844], step: 10600, lr: 9.98501030820433e-05
2023-03-15 23:50:47,742 44k INFO Train Epoch: 13 [53%]
2023-03-15 23:50:47,743 44k INFO Losses: [2.82505464553833, 2.2192225456237793, 8.297184944152832, 17.733400344848633, 1.0015524625778198], step: 10800, lr: 9.98501030820433e-05
2023-03-15 23:53:08,754 44k INFO Train Epoch: 13 [76%]
2023-03-15 23:53:08,754 44k INFO Losses: [2.4406673908233643, 2.3215839862823486, 12.28913688659668, 19.672630310058594, 1.410600185394287], step: 11000, lr: 9.98501030820433e-05
2023-03-15 23:55:29,631 44k INFO Train Epoch: 13 [99%]
2023-03-15 23:55:29,632 44k INFO Losses: [2.512133836746216, 2.0228796005249023, 12.333993911743164, 20.909624099731445, 1.3709779977798462], step: 11200, lr: 9.98501030820433e-05
2023-03-15 23:55:33,648 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_11200.pth
2023-03-15 23:55:38,487 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_11200.pth
2023-03-15 23:55:45,460 44k INFO ====> Epoch: 13, cost 642.52 s
2023-03-15 23:58:14,413 44k INFO Train Epoch: 14 [23%]
2023-03-15 23:58:14,414 44k INFO Losses: [2.546581745147705, 2.137359380722046, 8.87495231628418, 17.015426635742188, 1.0378954410552979], step: 11400, lr: 9.983762181915804e-05
2023-03-16 00:00:35,031 44k INFO Train Epoch: 14 [46%]
2023-03-16 00:00:35,031 44k INFO Losses: [2.7097973823547363, 1.991132140159607, 6.331585884094238, 16.4729061126709, 1.0975165367126465], step: 11600, lr: 9.983762181915804e-05
2023-03-16 00:02:56,136 44k INFO Train Epoch: 14 [69%]
2023-03-16 00:02:56,137 44k INFO Losses: [2.1286847591400146, 2.6911470890045166, 11.54920768737793, 19.315832138061523, 1.4190285205841064], step: 11800, lr: 9.983762181915804e-05
2023-03-16 00:05:16,862 44k INFO Train Epoch: 14 [92%]
2023-03-16 00:05:16,862 44k INFO Losses: [2.548015594482422, 2.1432695388793945, 10.133501052856445, 17.651119232177734, 1.5942550897598267], step: 12000, lr: 9.983762181915804e-05
2023-03-16 00:05:20,892 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\G_12000.pth
2023-03-16 00:05:26,175 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\D_12000.pth
2023-03-16 00:06:16,563 44k INFO ====> Epoch: 14, cost 631.10 s
2023-03-16 00:08:01,715 44k INFO Train Epoch: 15 [15%]
2023-03-16 00:08:01,715 44k INFO Losses: [2.550679922103882, 2.2674386501312256, 9.556438446044922, 19.613842010498047, 1.0090858936309814], step: 12200, lr: 9.982514211643064e-05
2023-03-16 00:10:22,475 44k INFO Train Epoch: 15 [39%]
2023-03-16 00:10:22,476 44k INFO Losses: [2.743793249130249, 2.3563232421875, 11.072367668151855, 22.685895919799805, 1.4280868768692017], step: 12400, lr: 9.982514211643064e-05
2023-03-16 00:12:43,220 44k INFO Train Epoch: 15 [62%]
2023-03-16 00:12:43,221 44k INFO Losses: [2.4690353870391846, 2.3200523853302, 6.597771644592285, 16.857955932617188, 1.3373311758041382], step: 12600, lr: 9.982514211643064e-05
2023-03-16 00:15:04,212 44k INFO Train Epoch: 15 [85%]
2023-03-16 00:15:04,213 44k INFO Losses: [2.499682903289795, 2.3979334831237793, 6.928187847137451, 14.86440658569336, 1.0536296367645264], step: 12800, lr: 9.982514211643064e-05
2023-03-16 00:15:08,163 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\G_12800.pth
2023-03-16 00:15:12,349 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\D_12800.pth
2023-03-16 00:16:46,656 44k INFO ====> Epoch: 15, cost 630.09 s
2023-03-16 00:17:48,260 44k INFO Train Epoch: 16 [8%]
2023-03-16 00:17:48,261 44k INFO Losses: [2.6141366958618164, 2.2753355503082275, 12.091139793395996, 21.50872230529785, 1.4288793802261353], step: 13000, lr: 9.981266397366609e-05
2023-03-16 00:20:09,172 44k INFO Train Epoch: 16 [31%]
2023-03-16 00:20:09,173 44k INFO Losses: [2.2828731536865234, 2.310922622680664, 9.955845832824707, 17.854888916015625, 1.1089527606964111], step: 13200, lr: 9.981266397366609e-05
2023-03-16 00:22:29,825 44k INFO Train Epoch: 16 [55%]
2023-03-16 00:22:29,826 44k INFO Losses: [2.4581756591796875, 2.229063034057617, 9.601153373718262, 20.237041473388672, 1.00374174118042], step: 13400, lr: 9.981266397366609e-05
2023-03-16 00:24:50,755 44k INFO Train Epoch: 16 [78%]
2023-03-16 00:24:50,755 44k INFO Losses: [2.37076473236084, 1.9644975662231445, 10.232067108154297, 19.46710205078125, 1.4916046857833862], step: 13600, lr: 9.981266397366609e-05
2023-03-16 00:24:54,665 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\G_13600.pth
2023-03-16 00:24:59,370 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\D_13600.pth
2023-03-16 00:27:16,865 44k INFO ====> Epoch: 16, cost 630.21 s
2023-03-16 00:27:34,844 44k INFO Train Epoch: 17 [1%]
2023-03-16 00:27:34,845 44k INFO Losses: [2.145705223083496, 2.581650972366333, 12.380220413208008, 19.935022354125977, 1.393144965171814], step: 13800, lr: 9.980018739066937e-05
2023-03-16 00:29:55,832 44k INFO Train Epoch: 17 [24%]
2023-03-16 00:29:55,832 44k INFO Losses: [2.4179086685180664, 2.0988194942474365, 9.206463813781738, 18.29871368408203, 1.4387435913085938], step: 14000, lr: 9.980018739066937e-05
2023-03-16 00:32:16,401 44k INFO Train Epoch: 17 [47%]
2023-03-16 00:32:16,402 44k INFO Losses: [2.493062734603882, 2.3729169368743896, 10.042740821838379, 16.676647186279297, 1.6415916681289673], step: 14200, lr: 9.980018739066937e-05
2023-03-16 00:34:37,316 44k INFO Train Epoch: 17 [71%]
2023-03-16 00:34:37,317 44k INFO Losses: [2.602033853530884, 2.1623055934906006, 8.615880966186523, 16.63046646118164, 1.062331199645996], step: 14400, lr: 9.980018739066937e-05
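Each 'Losses:' entry above carries five values plus the global step and current lr. A throwaway parser for lines like these; the five labels are an assumption (the usual VITS-style ordering of discriminator, generator, feature-matching, mel, and KL terms), since the log itself never names them:

import re

LOSS_RE = re.compile(r"Losses: \[([^\]]+)\], step: (\d+), lr: ([0-9.e-]+)")
LABELS = ("loss_disc", "loss_gen", "loss_fm", "loss_mel", "loss_kl")  # assumed order

def parse_losses(line):
    # Returns ({label: value}, step, lr) for a "Losses:" line, else None.
    m = LOSS_RE.search(line)
    if m is None:
        return None
    values = [float(v) for v in m.group(1).split(",")]
    return dict(zip(LABELS, values)), int(m.group(2)), float(m.group(3))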
2023-03-16 00:34:41,305 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_14400.pth
2023-03-16 00:34:46,008 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_14400.pth
2023-03-16 00:37:09,764 44k INFO Train Epoch: 17 [94%]
2023-03-16 00:37:09,764 44k INFO Losses: [2.6982665061950684, 2.4935927391052246, 7.7550811767578125, 14.526800155639648, 0.9867305159568787], step: 14600, lr: 9.980018739066937e-05
2023-03-16 00:37:47,411 44k INFO ====> Epoch: 17, cost 630.55 s
2023-03-16 00:39:42,454 44k INFO Train Epoch: 18 [17%]
2023-03-16 00:39:42,454 44k INFO Losses: [2.591986656188965, 2.1827948093414307, 8.816061019897461, 16.3369083404541, 1.0858097076416016], step: 14800, lr: 9.978771236724554e-05
2023-03-16 00:42:03,135 44k INFO Train Epoch: 18 [40%]
2023-03-16 00:42:03,135 44k INFO Losses: [2.6440529823303223, 2.0023038387298584, 9.808303833007812, 19.42787742614746, 0.9729630947113037], step: 15000, lr: 9.978771236724554e-05
2023-03-16 00:44:24,205 44k INFO Train Epoch: 18 [63%]
2023-03-16 00:44:24,205 44k INFO Losses: [2.269355058670044, 2.2194883823394775, 13.32441520690918, 19.750988006591797, 1.4286381006240845], step: 15200, lr: 9.978771236724554e-05
2023-03-16 00:44:28,106 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\G_15200.pth
2023-03-16 00:44:33,224 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\D_15200.pth
2023-03-16 00:46:57,057 44k INFO Train Epoch: 18 [87%]
2023-03-16 00:46:57,058 44k INFO Losses: [2.433856964111328, 2.2884624004364014, 9.324146270751953, 18.25444984436035, 1.2728453874588013], step: 15400, lr: 9.978771236724554e-05
2023-03-16 00:48:18,176 44k INFO ====> Epoch: 18, cost 630.76 s
2023-03-16 00:49:29,658 44k INFO Train Epoch: 19 [10%]
2023-03-16 00:49:29,658 44k INFO Losses: [2.444383382797241, 2.2150559425354004, 12.519225120544434, 20.22508430480957, 1.0245003700256348], step: 15600, lr: 9.977523890319963e-05
2023-03-16 00:51:50,530 44k INFO Train Epoch: 19 [33%]
2023-03-16 00:51:50,530 44k INFO Losses: [2.582207202911377, 2.0005531311035156, 11.921225547790527, 18.685184478759766, 1.694409966468811], step: 15800, lr: 9.977523890319963e-05
2023-03-16 00:54:11,217 44k INFO Train Epoch: 19 [56%]
2023-03-16 00:54:11,217 44k INFO Losses: [2.331789970397949, 2.508754014968872, 10.962759017944336, 18.601144790649414, 1.2970365285873413], step: 16000, lr: 9.977523890319963e-05
2023-03-16 00:54:15,225 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\G_16000.pth
2023-03-16 00:54:19,957 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\D_16000.pth
2023-03-16 00:56:43,830 44k INFO Train Epoch: 19 [79%]
2023-03-16 00:56:43,830 44k INFO Losses: [2.2806410789489746, 2.684731960296631, 8.239702224731445, 17.45183563232422, 1.3521194458007812], step: 16200, lr: 9.977523890319963e-05
2023-03-16 00:58:48,484 44k INFO ====> Epoch: 19, cost 630.31 s
2023-03-16 00:59:16,344 44k INFO Train Epoch: 20 [3%]
2023-03-16 00:59:16,344 44k INFO Losses: [2.698763132095337, 2.2583720684051514, 9.842967987060547, 19.965408325195312, 1.259946346282959], step: 16400, lr: 9.976276699833672e-05
2023-03-16 01:01:37,376 44k INFO Train Epoch: 20 [26%]
2023-03-16 01:01:37,376 44k INFO Losses: [2.5521702766418457, 2.2601287364959717, 7.731616020202637, 18.281116485595703, 1.2152737379074097], step: 16600, lr: 9.976276699833672e-05
2023-03-16 01:03:57,931 44k INFO Train Epoch: 20 [49%]
2023-03-16 01:03:57,931 44k INFO Losses: [2.575326681137085, 2.4220943450927734, 9.240870475769043, 21.15948486328125, 1.4561127424240112], step: 16800, lr: 9.976276699833672e-05
2023-03-16 01:04:01,901 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\G_16800.pth
2023-03-16 01:04:07,002 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\D_16800.pth
2023-03-16 01:06:30,839 44k INFO Train Epoch: 20 [72%]
2023-03-16 01:06:30,839 44k INFO Losses: [2.616678237915039, 2.1874992847442627, 11.910429000854492, 17.798166275024414, 1.2681174278259277], step: 17000, lr: 9.976276699833672e-05
2023-03-16 01:08:51,454 44k INFO Train Epoch: 20 [95%]
2023-03-16 01:08:51,454 44k INFO Losses: [2.3440380096435547, 2.325277328491211, 12.878499984741211, 21.19704818725586, 1.2372229099273682], step: 17200, lr: 9.976276699833672e-05
2023-03-16 01:09:19,331 44k INFO ====> Epoch: 20, cost 630.85 s
2023-03-16 01:11:25,214 44k INFO Train Epoch: 21 [19%]
2023-03-16 01:11:25,214 44k INFO Losses: [2.294248580932617, 2.5179877281188965, 9.40160846710205, 17.003673553466797, 1.2186837196350098], step: 17400, lr: 9.975029665246193e-05
2023-03-16 01:13:45,930 44k INFO Train Epoch: 21 [42%]
2023-03-16 01:13:45,931 44k INFO Losses: [2.9561305046081543, 1.7214298248291016, 4.954794883728027, 13.54641056060791, 1.291691780090332], step: 17600, lr: 9.975029665246193e-05
2023-03-16 01:13:49,905 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_17600.pth
2023-03-16 01:13:54,899 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_17600.pth
2023-03-16 01:16:18,521 44k INFO Train Epoch: 21 [65%]
2023-03-16 01:16:18,521 44k INFO Losses: [3.088279962539673, 1.8390194177627563, 7.694116592407227, 17.92612075805664, 1.314424991607666], step: 17800, lr: 9.975029665246193e-05
2023-03-16 01:18:39,350 44k INFO Train Epoch: 21 [88%]
2023-03-16 01:18:39,351 44k INFO Losses: [2.3933024406433105, 2.092694044113159, 12.331317901611328, 20.074682235717773, 0.8205050826072693], step: 18000, lr: 9.975029665246193e-05
2023-03-16 01:19:50,622 44k INFO ====> Epoch: 21, cost 631.29 s
2023-03-16 01:21:11,935 44k INFO Train Epoch: 22 [11%]
2023-03-16 01:21:11,935 44k INFO Losses: [2.615518093109131, 2.3167545795440674, 8.881902694702148, 20.360605239868164, 0.8116071224212646], step: 18200, lr: 9.973782786538036e-05
2023-03-16 01:23:32,830 44k INFO Train Epoch: 22 [35%]
2023-03-16 01:23:32,830 44k INFO Losses: [2.473619222640991, 2.0928754806518555, 10.076347351074219, 17.327531814575195, 1.189102292060852], step: 18400, lr: 9.973782786538036e-05
2023-03-16 01:23:36,742 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\G_18400.pth
2023-03-16 01:23:40,715 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\D_18400.pth
2023-03-16 01:26:04,423 44k INFO Train Epoch: 22 [58%]
2023-03-16 01:26:04,423 44k INFO Losses: [2.5523898601531982, 2.7454590797424316, 8.686205863952637, 20.228925704956055, 1.3921515941619873], step: 18600, lr: 9.973782786538036e-05
2023-03-16 01:28:25,412 44k INFO Train Epoch: 22 [81%]
2023-03-16 01:28:25,412 44k INFO Losses: [2.1441400051116943, 3.125256061553955, 10.10117244720459, 14.158695220947266, 1.3019906282424927], step: 18800, lr: 9.973782786538036e-05
2023-03-16 01:30:20,259 44k INFO ====> Epoch: 22, cost 629.64 s
2023-03-16 01:30:57,884 44k INFO Train Epoch: 23 [4%]
2023-03-16 01:30:57,885 44k INFO Losses: [2.3834729194641113, 2.079442262649536, 10.483260154724121, 16.458885192871094, 1.5064617395401], step: 19000, lr: 9.972536063689719e-05
2023-03-16 01:33:18,786 44k INFO Train Epoch: 23 [27%]
2023-03-16 01:33:18,786 44k INFO Losses: [2.539854049682617, 2.5715172290802, 9.316402435302734, 12.872879981994629, 1.198974370956421], step: 19200, lr: 9.972536063689719e-05
2023-03-16 01:33:22,713 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\G_19200.pth
2023-03-16 01:33:27,846 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\D_19200.pth
2023-03-16 01:35:51,503 44k INFO Train Epoch: 23 [51%]
2023-03-16 01:35:51,503 44k INFO Losses: [2.7855114936828613, 1.6975175142288208, 12.702230453491211, 17.07855796813965, 1.518345832824707], step: 19400, lr: 9.972536063689719e-05
2023-03-16 01:38:12,414 44k INFO Train Epoch: 23 [74%]
2023-03-16 01:38:12,414 44k INFO Losses: [2.43784761428833, 2.392583131790161, 7.544654369354248, 17.182422637939453, 1.245615839958191], step: 19600, lr: 9.972536063689719e-05
2023-03-16 01:40:33,438 44k INFO Train Epoch: 23 [97%]
2023-03-16 01:40:33,439 44k INFO Losses: [2.4725594520568848, 2.452878952026367, 7.707283020019531, 18.398019790649414, 1.2731066942214966], step: 19800, lr: 9.972536063689719e-05
2023-03-16 01:40:51,423 44k INFO ====> Epoch: 23, cost 631.16 s
2023-03-16 01:43:06,202 44k INFO Train Epoch: 24 [20%]
2023-03-16 01:43:06,202 44k INFO Losses: [2.808661699295044, 2.049024820327759, 6.952273368835449, 14.101797103881836, 1.2182196378707886], step: 20000, lr: 9.971289496681757e-05
2023-03-16 01:43:10,102 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\G_20000.pth
2023-03-16 01:43:14,738 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\D_20000.pth
2023-03-16 01:45:38,382 44k INFO Train Epoch: 24 [43%]
2023-03-16 01:45:38,383 44k INFO Losses: [2.7623894214630127, 2.13728404045105, 5.595903396606445, 13.272449493408203, 0.9967997670173645], step: 20200, lr: 9.971289496681757e-05
2023-03-16 01:47:59,280 44k INFO Train Epoch: 24 [67%]
2023-03-16 01:47:59,281 44k INFO Losses: [2.420402765274048, 2.481660842895508, 12.158041954040527, 22.0040283203125, 1.0831242799758911], step: 20400, lr: 9.971289496681757e-05
2023-03-16 01:50:20,185 44k INFO Train Epoch: 24 [90%]
2023-03-16 01:50:20,186 44k INFO Losses: [2.7315421104431152, 2.373446464538574, 8.706191062927246, 18.44219970703125, 1.0841771364212036], step: 20600, lr: 9.971289496681757e-05
2023-03-16 01:51:21,552 44k INFO ====> Epoch: 24, cost 630.13 s
2023-03-16 01:52:52,753 44k INFO Train Epoch: 25 [13%]
2023-03-16 01:52:52,754 44k INFO Losses: [2.566246747970581, 1.9644880294799805, 7.5266008377075195, 15.633254051208496, 0.948923647403717], step: 20800, lr: 9.970043085494672e-05
2023-03-16 01:52:56,661 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\G_20800.pth
2023-03-16 01:53:00,944 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\D_20800.pth
2023-03-16 01:55:24,613 44k INFO Train Epoch: 25 [36%]
2023-03-16 01:55:24,614 44k INFO Losses: [2.4936013221740723, 2.488572597503662, 8.447824478149414, 21.389589309692383, 1.0742056369781494], step: 21000, lr: 9.970043085494672e-05
2023-03-16 01:57:45,276 44k INFO Train Epoch: 25 [59%]
2023-03-16 01:57:45,277 44k INFO Losses: [2.596569299697876, 2.053237199783325, 10.004366874694824, 20.607969284057617, 1.3179874420166016], step: 21200, lr: 9.970043085494672e-05
2023-03-16 02:00:06,169 44k INFO Train Epoch: 25 [83%]
2023-03-16 02:00:06,169 44k INFO Losses: [2.474005699157715, 2.2094924449920654, 8.743647575378418, 19.38444709777832, 1.2334219217300415], step: 21400, lr: 9.970043085494672e-05
2023-03-16 02:01:51,061 44k INFO ====> Epoch: 25, cost 629.51 s
2023-03-16 02:02:38,606 44k INFO Train Epoch: 26 [6%]
2023-03-16 02:02:38,606 44k INFO Losses: [2.385444164276123, 2.2398784160614014, 12.254326820373535, 17.304244995117188, 1.1656335592269897], step: 21600, lr: 9.968796830108985e-05
2023-03-16 02:02:42,724 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\G_21600.pth
2023-03-16 02:02:46,695 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\D_21600.pth
2023-03-16 02:05:10,494 44k INFO Train Epoch: 26 [29%]
2023-03-16 02:05:10,494 44k INFO Losses: [2.626939535140991, 2.084868907928467, 11.428565979003906, 20.141183853149414, 1.2776230573654175], step: 21800, lr: 9.968796830108985e-05
2023-03-16 02:07:31,503 44k INFO Train Epoch: 26 [52%]
2023-03-16 02:07:31,503 44k INFO Losses: [2.4139912128448486, 2.349252700805664, 12.027121543884277, 19.910202026367188, 1.1904593706130981], step: 22000, lr: 9.968796830108985e-05
2023-03-16 02:09:52,305 44k INFO Train Epoch: 26 [75%]
2023-03-16 02:09:52,306 44k INFO Losses: [2.70457124710083, 2.0999701023101807, 6.277225017547607, 14.494149208068848, 1.3441191911697388], step: 22200, lr: 9.968796830108985e-05
2023-03-16 02:12:12,947 44k INFO Train Epoch: 26 [99%]
2023-03-16 02:12:12,948 44k INFO Losses: [2.495934009552002, 2.122039318084717, 11.230051040649414, 18.352825164794922, 1.1787523031234741], step: 22400, lr: 9.968796830108985e-05
2023-03-16 02:12:16,964 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\G_22400.pth
2023-03-16 02:12:21,499 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\D_22400.pth
2023-03-16 02:12:32,515 44k INFO ====> Epoch: 26, cost 641.45 s
2023-03-16 02:14:56,863 44k INFO Train Epoch: 27 [22%]
2023-03-16 02:14:56,863 44k INFO Losses: [2.607412338256836, 1.888242244720459, 10.328024864196777, 15.682567596435547, 1.4965571165084839], step: 22600, lr: 9.967550730505221e-05
2023-03-16 02:17:17,328 44k INFO Train Epoch: 27 [45%]
2023-03-16 02:17:17,329 44k INFO Losses: [2.8666746616363525, 2.0624637603759766, 9.125121116638184, 17.65096092224121, 1.1552131175994873], step: 22800, lr: 9.967550730505221e-05
2023-03-16 02:19:37,988 44k INFO Train Epoch: 27 [68%]
2023-03-16 02:19:37,988 44k INFO Losses: [2.7731521129608154, 2.441741943359375, 7.241718292236328, 15.242803573608398, 0.9172007441520691], step: 23000, lr: 9.967550730505221e-05
2023-03-16 02:21:58,497 44k INFO Train Epoch: 27 [91%]
2023-03-16 02:21:58,497 44k INFO Losses: [2.4379732608795166, 2.096461534500122, 8.71212100982666, 16.061676025390625, 1.299199104309082], step: 23200, lr: 9.967550730505221e-05
2023-03-16 02:22:02,519 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\G_23200.pth
2023-03-16 02:22:07,210 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\D_23200.pth
2023-03-16 02:23:01,977 44k INFO ====> Epoch: 27, cost 629.46 s
2023-03-16 02:24:42,854 44k INFO Train Epoch: 28 [15%]
2023-03-16 02:24:42,854 44k INFO Losses: [2.429565668106079, 2.361985206604004, 9.226119995117188, 20.984317779541016, 1.1905992031097412], step: 23400, lr: 9.966304786663908e-05
2023-03-16 02:27:03,464 44k INFO Train Epoch: 28 [38%]
2023-03-16 02:27:03,465 44k INFO Losses: [2.182185649871826, 3.0123226642608643, 10.151270866394043, 14.9700345993042, 1.0540744066238403], step: 23600, lr: 9.966304786663908e-05
2023-03-16 02:29:24,081 44k INFO Train Epoch: 28 [61%]
2023-03-16 02:29:24,082 44k INFO Losses: [2.6491360664367676, 1.913266658782959, 10.339127540588379, 17.114498138427734, 1.540541410446167], step: 23800, lr: 9.966304786663908e-05
2023-03-16 02:31:44,701 44k INFO Train Epoch: 28 [84%]
2023-03-16 02:31:44,701 44k INFO Losses: [2.7960634231567383, 2.084629535675049, 10.011807441711426, 20.787256240844727, 1.2083172798156738], step: 24000, lr: 9.966304786663908e-05
2023-03-16 02:31:48,786 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\G_24000.pth
2023-03-16 02:31:53,647 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\D_24000.pth
2023-03-16 02:33:31,571 44k INFO ====> Epoch: 28, cost 629.59 s
2023-03-16 02:34:28,946 44k INFO Train Epoch: 29 [7%]
2023-03-16 02:34:28,946 44k INFO Losses: [2.491614580154419, 2.1842799186706543, 8.313817024230957, 16.375612258911133, 1.0089161396026611], step: 24200, lr: 9.965058998565574e-05
2023-03-16 02:36:50,659 44k INFO Train Epoch: 29 [31%]
2023-03-16 02:36:50,659 44k INFO Losses: [2.5439770221710205, 2.197850227355957, 10.543730735778809, 17.997026443481445, 1.154215693473816], step: 24400, lr: 9.965058998565574e-05
2023-03-16 02:39:11,186 44k INFO Train Epoch: 29 [54%]
2023-03-16 02:39:11,186 44k INFO Losses: [2.50681209564209, 2.313572406768799, 4.755190849304199, 8.415090560913086, 1.2585084438323975], step: 24600, lr: 9.965058998565574e-05
2023-03-16 02:41:32,045 44k INFO Train Epoch: 29 [77%]
2023-03-16 02:41:32,045 44k INFO Losses: [2.5351929664611816, 2.275634765625, 9.348855018615723, 15.969225883483887, 1.2705525159835815], step: 24800, lr: 9.965058998565574e-05
2023-03-16 02:41:36,050 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_24800.pth
2023-03-16 02:41:40,945 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_24800.pth
2023-03-16 02:44:02,445 44k INFO ====> Epoch: 29, cost 630.87 s
2023-03-16 02:44:16,386 44k INFO Train Epoch: 30 [0%]
2023-03-16 02:44:16,387 44k INFO Losses: [2.467650890350342, 2.2971153259277344, 10.749378204345703, 19.568599700927734, 1.191040277481079], step: 25000, lr: 9.963813366190753e-05
2023-03-16 02:46:37,237 44k INFO Train Epoch: 30 [23%]
2023-03-16 02:46:37,237 44k INFO Losses: [2.392204523086548, 2.3401966094970703, 9.5015230178833, 13.58256721496582, 1.335119605064392], step: 25200, lr: 9.963813366190753e-05
2023-03-16 02:48:57,877 44k INFO Train Epoch: 30 [47%]
2023-03-16 02:48:57,878 44k INFO Losses: [2.5223231315612793, 2.2982637882232666, 7.33641242980957, 14.452993392944336, 1.141780138015747], step: 25400, lr: 9.963813366190753e-05
2023-03-16 02:51:18,633 44k INFO Train Epoch: 30 [70%]
2023-03-16 02:51:18,633 44k INFO Losses: [2.661156415939331, 2.248528242111206, 10.169857025146484, 15.66061019897461, 1.3379757404327393], step: 25600, lr: 9.963813366190753e-05
2023-03-16 02:51:22,552 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\G_25600.pth
2023-03-16 02:51:27,287 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\D_25600.pth
2023-03-16 02:53:50,756 44k INFO Train Epoch: 30 [93%]
2023-03-16 02:53:50,757 44k INFO Losses: [2.3899364471435547, 1.9946284294128418, 13.339884757995605, 17.123361587524414, 1.3372011184692383], step: 25800, lr: 9.963813366190753e-05
2023-03-16 02:54:32,708 44k INFO ====> Epoch: 30, cost 630.26 s
2023-03-16 02:56:23,634 44k INFO Train Epoch: 31 [16%]
2023-03-16 02:56:23,634 44k INFO Losses: [2.4005982875823975, 2.484386444091797, 7.842770576477051, 19.523204803466797, 1.043355107307434], step: 26000, lr: 9.962567889519979e-05
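Reading the epoch summaries: one epoch costs roughly 630 s and covers about 860 optimizer steps (step 800 falls at [93%] of epoch 1), so wall-clock time to any target step is a simple ratio. A back-of-the-envelope sketch using those two figures read off this log:

SECONDS_PER_EPOCH = 630.0  # typical "====> Epoch: N, cost ..." value above
STEPS_PER_EPOCH = 860      # inferred from step 800 landing at [93%] of epoch 1

def hours_to_reach(target_step):
    # Linear extrapolation at the pace shown in this log.
    return target_step / STEPS_PER_EPOCH * SECONDS_PER_EPOCH / 3600.0

print(round(hours_to_reach(100_000), 1))  # ~20.3 hours at this pace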
2023-03-16 02:58:44,342 44k INFO Train Epoch: 31 [39%]
2023-03-16 02:58:44,342 44k INFO Losses: [2.6491734981536865, 2.308323860168457, 7.801671504974365, 16.288909912109375, 0.994064211845398], step: 26200, lr: 9.962567889519979e-05
2023-03-16 03:01:05,024 44k INFO Train Epoch: 31 [63%]
2023-03-16 03:01:05,025 44k INFO Losses: [2.27449107170105, 2.6658718585968018, 5.132890701293945, 10.934639930725098, 1.21943998336792], step: 26400, lr: 9.962567889519979e-05
2023-03-16 03:01:09,010 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\G_26400.pth
2023-03-16 03:01:14,055 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\D_26400.pth
2023-03-16 03:03:37,922 44k INFO Train Epoch: 31 [86%]
2023-03-16 03:03:37,923 44k INFO Losses: [2.4863085746765137, 1.999206781387329, 10.952353477478027, 19.733848571777344, 1.417864203453064], step: 26600, lr: 9.962567889519979e-05
2023-03-16 03:05:03,162 44k INFO ====> Epoch: 31, cost 630.45 s
2023-03-16 03:06:10,347 44k INFO Train Epoch: 32 [9%]
2023-03-16 03:06:10,348 44k INFO Losses: [2.846637725830078, 1.92255437374115, 5.894870758056641, 12.708348274230957, 1.2397171258926392], step: 26800, lr: 9.961322568533789e-05
2023-03-16 03:08:31,078 44k INFO Train Epoch: 32 [32%]
2023-03-16 03:08:31,078 44k INFO Losses: [2.28175687789917, 2.841280460357666, 10.58755111694336, 20.311485290527344, 1.2063570022583008], step: 27000, lr: 9.961322568533789e-05
2023-03-16 03:10:51,615 44k INFO Train Epoch: 32 [55%]
2023-03-16 03:10:51,615 44k INFO Losses: [2.7459840774536133, 2.0232462882995605, 6.449638366699219, 19.634986877441406, 1.3266931772232056], step: 27200, lr: 9.961322568533789e-05
2023-03-16 03:10:55,739 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\G_27200.pth
2023-03-16 03:11:00,082 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\D_27200.pth
2023-03-16 03:13:23,951 44k INFO Train Epoch: 32 [79%]
2023-03-16 03:13:23,951 44k INFO Losses: [2.5202064514160156, 2.1386451721191406, 8.989642143249512, 22.03737449645996, 1.3058375120162964], step: 27400, lr: 9.961322568533789e-05
2023-03-16 03:15:32,883 44k INFO ====> Epoch: 32, cost 629.72 s
2023-03-16 03:15:56,601 44k INFO Train Epoch: 33 [2%]
2023-03-16 03:15:56,602 44k INFO Losses: [2.584780693054199, 2.26330304145813, 8.34976863861084, 17.010421752929688, 1.5097609758377075], step: 27600, lr: 9.960077403212722e-05
2023-03-16 03:18:17,582 44k INFO Train Epoch: 33 [25%]
2023-03-16 03:18:17,582 44k INFO Losses: [2.8916923999786377, 2.1206181049346924, 6.845468521118164, 15.957723617553711, 1.6273925304412842], step: 27800, lr: 9.960077403212722e-05
2023-03-16 03:20:38,117 44k INFO Train Epoch: 33 [48%]
2023-03-16 03:20:38,118 44k INFO Losses: [2.6899142265319824, 1.9501187801361084, 5.572917461395264, 12.964237213134766, 1.2838226556777954], step: 28000, lr: 9.960077403212722e-05
2023-03-16 03:20:42,224 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\G_28000.pth
2023-03-16 03:20:47,247 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\D_28000.pth
2023-03-16 03:23:11,115 44k INFO Train Epoch: 33 [71%]
2023-03-16 03:23:11,116 44k INFO Losses: [2.496995687484741, 2.130931854248047, 9.426998138427734, 17.43515968322754, 1.3810482025146484], step: 28200, lr: 9.960077403212722e-05
2023-03-16 03:25:31,586 44k INFO Train Epoch: 33 [95%]
2023-03-16 03:25:31,586 44k INFO Losses: [2.5002846717834473, 2.4750053882598877, 10.284574508666992, 18.11180305480957, 1.014971137046814], step: 28400, lr: 9.960077403212722e-05
2023-03-16 03:26:03,698 44k INFO ====> Epoch: 33, cost 630.81 s
2023-03-16 03:28:04,336 44k INFO Train Epoch: 34 [18%]
2023-03-16 03:28:04,337 44k INFO Losses: [2.5918307304382324, 2.5323596000671387, 10.658393859863281, 15.760563850402832, 1.2637988328933716], step: 28600, lr: 9.95883239353732e-05
2023-03-16 03:30:25,429 44k INFO Train Epoch: 34 [41%]
2023-03-16 03:30:25,429 44k INFO Losses: [2.6172962188720703, 2.20560359954834, 8.789875030517578, 17.764558792114258, 1.231347918510437], step: 28800, lr: 9.95883239353732e-05
2023-03-16 03:30:29,361 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\G_28800.pth
2023-03-16 03:30:33,488 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\D_28800.pth
2023-03-16 03:32:57,434 44k INFO Train Epoch: 34 [64%]
2023-03-16 03:32:57,435 44k INFO Losses: [2.6522512435913086, 2.0008366107940674, 7.80265474319458, 17.698692321777344, 1.1695736646652222], step: 29000, lr: 9.95883239353732e-05
2023-03-16 03:35:18,235 44k INFO Train Epoch: 34 [87%]
2023-03-16 03:35:18,236 44k INFO Losses: [2.780667543411255, 1.976601004600525, 4.946590900421143, 13.144062995910645, 1.2670704126358032], step: 29200, lr: 9.95883239353732e-05
2023-03-16 03:36:33,720 44k INFO ====> Epoch: 34, cost 630.02 s
2023-03-16 03:37:50,829 44k INFO Train Epoch: 35 [11%]
2023-03-16 03:37:50,829 44k INFO Losses: [2.5062317848205566, 1.8952958583831787, 10.908663749694824, 20.4866943359375, 1.2451364994049072], step: 29400, lr: 9.957587539488128e-05
2023-03-16 03:40:11,560 44k INFO Train Epoch: 35 [34%]
2023-03-16 03:40:11,560 44k INFO Losses: [2.5363059043884277, 2.1009933948516846, 15.046977043151855, 20.890012741088867, 1.0345100164413452], step: 29600, lr: 9.957587539488128e-05
2023-03-16 03:40:15,556 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\G_29600.pth
2023-03-16 03:40:20,241 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\D_29600.pth
2023-03-16 03:42:43,831 44k INFO Train Epoch: 35 [57%]
2023-03-16 03:42:43,831 44k INFO Losses: [2.5509517192840576, 2.1264853477478027, 7.332380771636963, 13.148984909057617, 1.212276577949524], step: 29800, lr: 9.957587539488128e-05
2023-03-16 03:45:04,601 44k INFO Train Epoch: 35 [80%]
2023-03-16 03:45:04,601 44k INFO Losses: [2.523164749145508, 2.336437702178955, 10.830221176147461, 19.905424118041992, 1.3957927227020264], step: 30000, lr: 9.957587539488128e-05
2023-03-16 03:47:03,640 44k INFO ====> Epoch: 35, cost 629.92 s
2023-03-16 03:47:37,151 44k INFO Train Epoch: 36 [3%]
2023-03-16 03:47:37,151 44k INFO Losses: [2.470731496810913, 2.2674591541290283, 8.563957214355469, 15.420655250549316, 1.1303443908691406], step: 30200, lr: 9.956342841045691e-05
2023-03-16 03:49:58,063 44k INFO Train Epoch: 36 [27%]
2023-03-16 03:49:58,063 44k INFO Losses: [2.645909309387207, 2.1464650630950928, 8.197700500488281, 16.968324661254883, 1.272623896598816], step: 30400, lr: 9.956342841045691e-05
2023-03-16 03:50:01,973 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\G_30400.pth
2023-03-16 03:50:06,839 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\D_30400.pth
2023-03-16 03:52:30,533 44k INFO Train Epoch: 36 [50%]
2023-03-16 03:52:30,533 44k INFO Losses: [2.5915043354034424, 2.052953004837036, 8.663422584533691, 17.196107864379883, 0.9660559296607971], step: 30600, lr: 9.956342841045691e-05
2023-03-16 03:54:51,432 44k INFO Train Epoch: 36 [73%]
2023-03-16 03:54:51,432 44k INFO Losses: [2.2421348094940186, 2.51168155670166, 11.137078285217285, 20.013750076293945, 1.197601318359375], step: 30800, lr: 9.956342841045691e-05
2023-03-16 03:57:12,073 44k INFO Train Epoch: 36 [96%]
2023-03-16 03:57:12,073 44k INFO Losses: [2.6444756984710693, 2.026712656021118, 9.440808296203613, 19.635557174682617, 1.2382951974868774], step: 31000, lr: 9.956342841045691e-05
2023-03-16 03:57:34,273 44k INFO ====> Epoch: 36, cost 630.63 s
2023-03-16 03:59:45,468 44k INFO Train Epoch: 37 [19%]
2023-03-16 03:59:45,468 44k INFO Losses: [2.629931926727295, 2.108628511428833, 8.413933753967285, 18.527938842773438, 1.135683298110962], step: 31200, lr: 9.95509829819056e-05
2023-03-16 03:59:49,460 44k INFO Saving model and optimizer state at iteration 37 to ./logs\44k\G_31200.pth
2023-03-16 03:59:53,981 44k INFO Saving model and optimizer state at iteration 37 to ./logs\44k\D_31200.pth
2023-03-16 04:02:17,300 44k INFO Train Epoch: 37 [43%]
2023-03-16 04:02:17,301 44k INFO Losses: [2.666492223739624, 2.1456568241119385, 8.177578926086426, 19.041101455688477, 1.4916480779647827], step: 31400, lr: 9.95509829819056e-05
2023-03-16 04:04:38,626 44k INFO Train Epoch: 37 [66%]
2023-03-16 04:04:38,626 44k INFO Losses: [2.3911197185516357, 2.0497288703918457, 9.499975204467773, 16.05766487121582, 1.1395161151885986], step: 31600, lr: 9.95509829819056e-05
2023-03-16 04:06:59,524 44k INFO Train Epoch: 37 [89%]
2023-03-16 04:06:59,525 44k INFO Losses: [2.484107494354248, 2.084519386291504, 9.091943740844727, 18.85395622253418, 1.2878482341766357], step: 31800, lr: 9.95509829819056e-05
2023-03-16 04:08:05,125 44k INFO ====> Epoch: 37, cost 630.85 s
2023-03-16 04:09:31,919 44k INFO Train Epoch: 38 [12%]
2023-03-16 04:09:31,919 44k INFO Losses: [2.588472604751587, 2.2763264179229736, 11.178894996643066, 21.818592071533203, 1.3247570991516113], step: 32000, lr: 9.953853910903285e-05
2023-03-16 04:09:35,896 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_32000.pth
2023-03-16 04:09:40,028 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_32000.pth
2023-03-16 04:12:03,693 44k INFO Train Epoch: 38 [35%]
2023-03-16 04:12:03,693 44k INFO Losses: [2.3396801948547363, 2.539722442626953, 8.741209030151367, 16.147171020507812, 0.939778745174408], step: 32200, lr: 9.953853910903285e-05
2023-03-16 04:14:24,298 44k INFO Train Epoch: 38 [59%]
2023-03-16 04:14:24,298 44k INFO Losses: [3.19575834274292, 1.9386084079742432, 3.7073934078216553, 10.049609184265137, 1.437783122062683], step: 32400, lr: 9.953853910903285e-05
2023-03-16 04:16:45,038 44k INFO Train Epoch: 38 [82%]
2023-03-16 04:16:45,038 44k INFO Losses: [2.7948760986328125, 2.157989025115967, 7.51399040222168, 16.192846298217773, 1.0238171815872192], step: 32600, lr: 9.953853910903285e-05
2023-03-16 04:18:34,227 44k INFO ====> Epoch: 38, cost 629.10 s
2023-03-16 04:19:17,530 44k INFO Train Epoch: 39 [5%]
2023-03-16 04:19:17,531 44k INFO Losses: [2.576876163482666, 2.0756654739379883, 13.110472679138184, 19.17611312866211, 1.4470683336257935], step: 32800, lr: 9.952609679164422e-05
2023-03-16 04:19:21,548 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\G_32800.pth
2023-03-16 04:19:25,827 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\D_32800.pth
2023-03-16 04:21:49,401 44k INFO Train Epoch: 39 [28%]
2023-03-16 04:21:49,402 44k INFO Losses: [2.4315903186798096, 2.0649232864379883, 11.210330963134766, 21.011646270751953, 1.2529443502426147], step: 33000, lr: 9.952609679164422e-05
2023-03-16 04:24:10,069 44k INFO Train Epoch: 39 [52%]
2023-03-16 04:24:10,069 44k INFO Losses: [2.539980173110962, 2.3782131671905518, 8.732582092285156, 17.44808006286621, 1.356327772140503], step: 33200, lr: 9.952609679164422e-05
2023-03-16 04:26:30,903 44k INFO Train Epoch: 39 [75%]
2023-03-16 04:26:30,903 44k INFO Losses: [2.700183391571045, 2.1703073978424072, 11.774406433105469, 19.580537796020508, 1.3001630306243896], step: 33400, lr: 9.952609679164422e-05
2023-03-16 04:28:51,931 44k INFO Train Epoch: 39 [98%]
2023-03-16 04:28:51,931 44k INFO Losses: [2.3465139865875244, 2.5570602416992188, 10.467639923095703, 19.52950668334961, 1.2539504766464233], step: 33600, lr: 9.952609679164422e-05
2023-03-16 04:28:55,854 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\G_33600.pth
2023-03-16 04:29:00,920 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\D_33600.pth
2023-03-16 04:29:16,236 44k INFO ====> Epoch: 39, cost 642.01 s
2023-03-16 04:31:36,407 44k INFO Train Epoch: 40 [21%]
2023-03-16 04:31:36,407 44k INFO Losses: [2.746856451034546, 1.9400231838226318, 8.48538875579834, 19.317323684692383, 1.0229159593582153], step: 33800, lr: 9.951365602954526e-05
2023-03-16 04:33:56,767 44k INFO Train Epoch: 40 [44%]
2023-03-16 04:33:56,768 44k INFO Losses: [2.469702959060669, 2.0838565826416016, 8.263635635375977, 16.58014678955078, 1.2991070747375488], step: 34000, lr: 9.951365602954526e-05
2023-03-16 04:36:17,600 44k INFO Train Epoch: 40 [68%]
2023-03-16 04:36:17,600 44k INFO Losses: [2.233914375305176, 2.468085765838623, 12.018889427185059, 19.969524383544922, 1.1011557579040527], step: 34200, lr: 9.951365602954526e-05
2023-03-16 04:38:38,329 44k INFO Train Epoch: 40 [91%]
2023-03-16 04:38:38,330 44k INFO Losses: [2.343113899230957, 2.2695024013519287, 7.669289588928223, 14.323527336120605, 1.394254207611084], step: 34400, lr: 9.951365602954526e-05
2023-03-16 04:38:42,236 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\G_34400.pth
2023-03-16 04:38:47,191 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\D_34400.pth
2023-03-16 04:39:46,052 44k INFO ====> Epoch: 40, cost 629.82 s
2023-03-16 04:41:22,911 44k INFO Train Epoch: 41 [14%]
2023-03-16 04:41:22,911 44k INFO Losses: [2.641541004180908, 2.116180181503296, 8.272361755371094, 17.3891544342041, 1.3940420150756836], step: 34600, lr: 9.950121682254156e-05
2023-03-16 04:43:43,625 44k INFO Train Epoch: 41 [37%]
2023-03-16 04:43:43,625 44k INFO Losses: [2.6787145137786865, 2.077061176300049, 4.890179634094238, 15.238072395324707, 1.2279645204544067], step: 34800, lr: 9.950121682254156e-05
2023-03-16 04:46:04,361 44k INFO Train Epoch: 41 [60%]
2023-03-16 04:46:04,361 44k INFO Losses: [2.619894027709961, 2.083570957183838, 10.05743408203125, 16.610305786132812, 1.2705297470092773], step: 35000, lr: 9.950121682254156e-05
2023-03-16 04:48:25,222 44k INFO Train Epoch: 41 [84%]
2023-03-16 04:48:25,223 44k INFO Losses: [2.5610852241516113, 2.136617660522461, 13.243945121765137, 21.89145851135254, 1.2063173055648804], step: 35200, lr: 9.950121682254156e-05
2023-03-16 04:48:29,217 44k INFO Saving model and optimizer state at iteration 41 to ./logs\44k\G_35200.pth
2023-03-16 04:48:34,251 44k INFO Saving model and optimizer state at iteration 41 to ./logs\44k\D_35200.pth
2023-03-16 04:50:16,452 44k INFO ====> Epoch: 41, cost 630.40 s
2023-03-16 04:51:09,569 44k INFO Train Epoch: 42 [7%]
2023-03-16 04:51:09,570 44k INFO Losses: [2.455902576446533, 2.646679401397705, 10.238178253173828, 17.043800354003906, 0.9079168438911438], step: 35400, lr: 9.948877917043875e-05
2023-03-16 04:53:30,382 44k INFO Train Epoch: 42 [30%]
2023-03-16 04:53:30,382 44k INFO Losses: [2.7213470935821533, 2.049485683441162, 6.495115756988525, 19.139436721801758, 1.1537516117095947], step: 35600, lr: 9.948877917043875e-05
2023-03-16 04:55:51,199 44k INFO Train Epoch: 42 [53%]
2023-03-16 04:55:51,200 44k INFO Losses: [2.544001817703247, 2.129960060119629, 7.027856826782227, 13.133383750915527, 1.4411994218826294], step: 35800, lr: 9.948877917043875e-05
2023-03-16 04:58:12,092 44k INFO Train Epoch: 42 [76%]
2023-03-16 04:58:12,092 44k INFO Losses: [2.5904393196105957, 2.4093260765075684, 7.381767272949219, 17.211151123046875, 1.535307765007019], step: 36000, lr: 9.948877917043875e-05
2023-03-16 04:58:16,080 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\G_36000.pth
2023-03-16 04:58:19,970 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\D_36000.pth
2023-03-16 05:00:43,613 44k INFO Train Epoch: 42 [100%]
2023-03-16 05:00:43,613 44k INFO Losses: [2.3666634559631348, 2.3244967460632324, 12.037243843078613, 19.221891403198242, 0.9786768555641174], step: 36200, lr: 9.948877917043875e-05
2023-03-16 05:00:46,203 44k INFO ====> Epoch: 42, cost 629.75 s
2023-03-16 05:03:16,342 44k INFO Train Epoch: 43 [23%]
2023-03-16 05:03:16,342 44k INFO Losses: [2.6317670345306396, 2.094402551651001, 10.768045425415039, 20.303518295288086, 1.2009350061416626], step: 36400, lr: 9.947634307304244e-05
2023-03-16 05:05:37,020 44k INFO Train Epoch: 43 [46%]
2023-03-16 05:05:37,021 44k INFO Losses: [2.4301586151123047, 2.1994683742523193, 11.484366416931152, 21.424745559692383, 1.06465482711792], step: 36600, lr: 9.947634307304244e-05
2023-03-16 05:07:57,784 44k INFO Train Epoch: 43 [69%]
2023-03-16 05:07:57,784 44k INFO Losses: [2.693804979324341, 2.3116695880889893, 10.904422760009766, 18.62671661376953, 1.1741441488265991], step: 36800, lr: 9.947634307304244e-05
2023-03-16 05:08:01,772 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\G_36800.pth
2023-03-16 05:08:06,995 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\D_36800.pth
2023-03-16 05:10:30,674 44k INFO Train Epoch: 43 [92%]
2023-03-16 05:10:30,674 44k INFO Losses: [2.43847918510437, 2.625917673110962, 8.821423530578613, 15.966132164001465, 1.2110753059387207], step: 37000, lr: 9.947634307304244e-05
2023-03-16 05:11:16,828 44k INFO ====> Epoch: 43, cost 630.63 s
2023-03-16 05:13:03,462 44k INFO Train Epoch: 44 [16%]
2023-03-16 05:13:03,463 44k INFO Losses: [2.758525848388672, 2.33878231048584, 7.9185404777526855, 18.035717010498047, 1.1596654653549194], step: 37200, lr: 9.94639085301583e-05
2023-03-16 05:15:24,065 44k INFO Train Epoch: 44 [39%]
2023-03-16 05:15:24,065 44k INFO Losses: [2.730088949203491, 1.810584545135498, 5.902186870574951, 11.594051361083984, 1.4000799655914307], step: 37400, lr: 9.94639085301583e-05
2023-03-16 05:17:44,662 44k INFO Train Epoch: 44 [62%]
2023-03-16 05:17:44,662 44k INFO Losses: [2.550595998764038, 2.4691200256347656, 10.483214378356934, 19.379661560058594, 1.331870198249817], step: 37600, lr: 9.94639085301583e-05
2023-03-16 05:17:48,769 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\G_37600.pth
2023-03-16 05:17:53,541 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\D_37600.pth
./logs\44k\D_37600.pth 2023-03-16 05:20:17,195 44k INFO Train Epoch: 44 [85%] 2023-03-16 05:20:17,196 44k INFO Losses: [2.6503379344940186, 2.0720181465148926, 9.963491439819336, 19.095918655395508, 1.381271243095398], step: 37800, lr: 9.94639085301583e-05 2023-03-16 05:21:46,588 44k INFO ====> Epoch: 44, cost 629.76 s 2023-03-16 05:22:49,609 44k INFO Train Epoch: 45 [8%] 2023-03-16 05:22:49,609 44k INFO Losses: [2.6218836307525635, 2.060814619064331, 9.153338432312012, 18.51890754699707, 1.1506366729736328], step: 38000, lr: 9.945147554159202e-05 2023-03-16 05:25:11,098 44k INFO Train Epoch: 45 [32%] 2023-03-16 05:25:11,099 44k INFO Losses: [2.6401312351226807, 2.2561211585998535, 8.368671417236328, 16.620540618896484, 1.1838682889938354], step: 38200, lr: 9.945147554159202e-05 2023-03-16 05:27:31,800 44k INFO Train Epoch: 45 [55%] 2023-03-16 05:27:31,800 44k INFO Losses: [2.5380568504333496, 2.345262050628662, 8.294782638549805, 18.75756072998047, 1.3791409730911255], step: 38400, lr: 9.945147554159202e-05 2023-03-16 05:27:35,706 44k INFO Saving model and optimizer state at iteration 45 to ./logs\44k\G_38400.pth 2023-03-16 05:27:40,168 44k INFO Saving model and optimizer state at iteration 45 to ./logs\44k\D_38400.pth 2023-03-16 05:30:04,301 44k INFO Train Epoch: 45 [78%] 2023-03-16 05:30:04,301 44k INFO Losses: [2.1673951148986816, 2.299222707748413, 11.011890411376953, 20.177631378173828, 1.229280948638916], step: 38600, lr: 9.945147554159202e-05 2023-03-16 05:32:17,376 44k INFO ====> Epoch: 45, cost 630.79 s 2023-03-16 05:32:36,831 44k INFO Train Epoch: 46 [1%] 2023-03-16 05:32:36,832 44k INFO Losses: [2.81834077835083, 1.918961524963379, 10.190044403076172, 19.730724334716797, 1.2012158632278442], step: 38800, lr: 9.943904410714931e-05 2023-03-16 05:34:57,693 44k INFO Train Epoch: 46 [24%] 2023-03-16 05:34:57,694 44k INFO Losses: [2.469430446624756, 2.2374072074890137, 11.068530082702637, 20.865427017211914, 1.2137172222137451], step: 39000, lr: 9.943904410714931e-05 2023-03-16 05:37:18,362 44k INFO Train Epoch: 46 [48%] 2023-03-16 05:37:18,363 44k INFO Losses: [2.353072166442871, 2.1719956398010254, 6.979969024658203, 16.300952911376953, 1.3061497211456299], step: 39200, lr: 9.943904410714931e-05 2023-03-16 05:37:22,429 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\G_39200.pth 2023-03-16 05:37:27,171 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\D_39200.pth 2023-03-16 05:39:50,883 44k INFO Train Epoch: 46 [71%] 2023-03-16 05:39:50,884 44k INFO Losses: [2.526729106903076, 2.299793243408203, 9.516493797302246, 20.59066390991211, 1.0706140995025635], step: 39400, lr: 9.943904410714931e-05 2023-03-16 05:42:11,534 44k INFO Train Epoch: 46 [94%] 2023-03-16 05:42:11,534 44k INFO Losses: [2.4643659591674805, 1.9660428762435913, 9.549762725830078, 17.71923065185547, 1.2904680967330933], step: 39600, lr: 9.943904410714931e-05 2023-03-16 05:42:47,811 44k INFO ====> Epoch: 46, cost 630.44 s 2023-03-16 05:44:44,189 44k INFO Train Epoch: 47 [17%] 2023-03-16 05:44:44,190 44k INFO Losses: [2.5767502784729004, 2.194761037826538, 9.203609466552734, 18.87983512878418, 1.0819473266601562], step: 39800, lr: 9.942661422663591e-05 2023-03-16 05:47:04,863 44k INFO Train Epoch: 47 [40%] 2023-03-16 05:47:04,864 44k INFO Losses: [2.366131544113159, 2.5237412452697754, 12.204126358032227, 20.18090057373047, 1.2546101808547974], step: 40000, lr: 9.942661422663591e-05 2023-03-16 05:47:08,776 44k INFO Saving model and optimizer state at iteration 47 to 
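The lr column above decays by a fixed factor once per epoch: each epoch's value is the previous one times 0.999875 (e.g. 9.952609679164422e-05 x 0.999875 = 9.951365602954526e-05, exactly the values logged for epochs 39 and 40). A minimal sketch that reproduces the logged values, assuming a 1e-4 starting rate and a per-epoch exponential decay (both inferred from the log, since the scheduler code itself is not shown here):

    # Sketch: per-epoch exponential lr decay matching the values in this log.
    # Assumptions: base lr 1e-4, factor 0.999875 applied once per epoch, 1-indexed.
    def lr_at_epoch(epoch: int, base_lr: float = 1e-4, decay: float = 0.999875) -> float:
        return base_lr * decay ** (epoch - 1)

    assert abs(lr_at_epoch(39) - 9.952609679164422e-05) < 1e-12  # epoch 39 above
    assert abs(lr_at_epoch(40) - 9.951365602954526e-05) < 1e-12  # epoch 40 above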
2023-03-16 05:47:08,776 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\G_40000.pth
2023-03-16 05:47:13,499 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\D_40000.pth
2023-03-16 05:49:37,275 44k INFO Train Epoch: 47 [64%]
2023-03-16 05:49:37,275 44k INFO Losses: [2.291883945465088, 2.3660717010498047, 11.340193748474121, 21.607946395874023, 1.2803057432174683], step: 40200, lr: 9.942661422663591e-05
2023-03-16 05:51:58,379 44k INFO Train Epoch: 47 [87%]
2023-03-16 05:51:58,380 44k INFO Losses: [2.1512961387634277, 2.6143980026245117, 10.118165016174316, 21.296052932739258, 1.2664437294006348], step: 40400, lr: 9.942661422663591e-05
2023-03-16 05:53:18,034 44k INFO ====> Epoch: 47, cost 630.22 s
2023-03-16 05:54:30,988 44k INFO Train Epoch: 48 [10%]
2023-03-16 05:54:30,989 44k INFO Losses: [2.7611403465270996, 2.0555365085601807, 7.159406661987305, 15.026434898376465, 1.075055480003357], step: 40600, lr: 9.941418589985758e-05
2023-03-16 05:56:51,714 44k INFO Train Epoch: 48 [33%]
2023-03-16 05:56:51,714 44k INFO Losses: [2.471757173538208, 2.33428955078125, 8.075584411621094, 15.107114791870117, 1.1758148670196533], step: 40800, lr: 9.941418589985758e-05
2023-03-16 05:56:55,655 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\G_40800.pth
2023-03-16 05:56:59,965 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\D_40800.pth
2023-03-16 05:59:23,571 44k INFO Train Epoch: 48 [56%]
2023-03-16 05:59:23,572 44k INFO Losses: [2.8029210567474365, 2.094527006149292, 10.321728706359863, 17.696121215820312, 1.205525279045105], step: 41000, lr: 9.941418589985758e-05
2023-03-16 06:01:44,380 44k INFO Train Epoch: 48 [80%]
2023-03-16 06:01:44,380 44k INFO Losses: [2.2520086765289307, 2.2717437744140625, 14.170517921447754, 16.355480194091797, 1.3137779235839844], step: 41200, lr: 9.941418589985758e-05
2023-03-16 06:03:47,638 44k INFO ====> Epoch: 48, cost 629.60 s
2023-03-16 06:04:16,894 44k INFO Train Epoch: 49 [3%]
2023-03-16 06:04:16,895 44k INFO Losses: [2.7255561351776123, 1.9397213459014893, 9.425223350524902, 18.29865837097168, 1.2285233736038208], step: 41400, lr: 9.940175912662009e-05
2023-03-16 06:06:37,829 44k INFO Train Epoch: 49 [26%]
2023-03-16 06:06:37,830 44k INFO Losses: [2.5196421146392822, 2.360919237136841, 6.720976829528809, 18.560775756835938, 1.3308581113815308], step: 41600, lr: 9.940175912662009e-05
2023-03-16 06:06:41,828 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\G_41600.pth
2023-03-16 06:06:47,119 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\D_41600.pth
2023-03-16 06:09:10,471 44k INFO Train Epoch: 49 [49%]
2023-03-16 06:09:10,472 44k INFO Losses: [2.819308042526245, 1.940409779548645, 7.438920974731445, 16.002960205078125, 0.8700133562088013], step: 41800, lr: 9.940175912662009e-05
2023-03-16 06:11:31,404 44k INFO Train Epoch: 49 [72%]
2023-03-16 06:11:31,405 44k INFO Losses: [2.658114194869995, 2.116206645965576, 7.55081033706665, 18.015640258789062, 1.4928420782089233], step: 42000, lr: 9.940175912662009e-05
2023-03-16 06:13:52,003 44k INFO Train Epoch: 49 [96%]
2023-03-16 06:13:52,004 44k INFO Losses: [2.7171857357025146, 2.017658233642578, 7.810293197631836, 14.713618278503418, 0.8685610294342041], step: 42200, lr: 9.940175912662009e-05
2023-03-16 06:14:18,471 44k INFO ====> Epoch: 49, cost 630.83 s
2023-03-16 06:16:24,789 44k INFO Train Epoch: 50 [19%]
2023-03-16 06:16:24,789 44k INFO Losses: [2.82660174369812, 1.8905882835388184, 8.393243789672852, 14.401327133178711, 1.546816349029541], step: 42400, lr: 9.938933390672926e-05
2023-03-16 06:16:28,781 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\G_42400.pth
2023-03-16 06:16:32,511 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\D_42400.pth
2023-03-16 06:18:56,076 44k INFO Train Epoch: 50 [42%]
2023-03-16 06:18:56,076 44k INFO Losses: [2.578792095184326, 2.117264986038208, 10.949488639831543, 16.964635848999023, 1.4917035102844238], step: 42600, lr: 9.938933390672926e-05
2023-03-16 06:21:16,857 44k INFO Train Epoch: 50 [65%]
2023-03-16 06:21:16,857 44k INFO Losses: [2.4920520782470703, 2.2400906085968018, 12.003613471984863, 20.793302536010742, 1.1753475666046143], step: 42800, lr: 9.938933390672926e-05
2023-03-16 06:23:37,302 44k INFO Train Epoch: 50 [88%]
2023-03-16 06:23:37,303 44k INFO Losses: [3.0732288360595703, 1.9132713079452515, 6.702621936798096, 13.354013442993164, 1.321865200996399], step: 43000, lr: 9.938933390672926e-05
2023-03-16 06:24:47,096 44k INFO ====> Epoch: 50, cost 628.62 s
2023-03-16 06:26:09,663 44k INFO Train Epoch: 51 [12%]
2023-03-16 06:26:09,663 44k INFO Losses: [2.6696951389312744, 1.841801404953003, 5.094601154327393, 15.367595672607422, 1.162354826927185], step: 43200, lr: 9.937691023999092e-05
2023-03-16 06:26:13,693 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_43200.pth
2023-03-16 06:26:18,627 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_43200.pth
2023-03-16 06:28:42,270 44k INFO Train Epoch: 51 [35%]
2023-03-16 06:28:42,270 44k INFO Losses: [2.375000238418579, 2.607823610305786, 12.153164863586426, 19.123878479003906, 1.0792299509048462], step: 43400, lr: 9.937691023999092e-05
2023-03-16 06:31:02,649 44k INFO Train Epoch: 51 [58%]
2023-03-16 06:31:02,650 44k INFO Losses: [2.632014274597168, 2.4310402870178223, 6.05234432220459, 12.548331260681152, 1.0417481660842896], step: 43600, lr: 9.937691023999092e-05
2023-03-16 06:33:23,220 44k INFO Train Epoch: 51 [81%]
2023-03-16 06:33:23,220 44k INFO Losses: [2.773710250854492, 2.221810817718506, 5.888792037963867, 16.05984115600586, 1.1168749332427979], step: 43800, lr: 9.937691023999092e-05
2023-03-16 06:35:16,416 44k INFO ====> Epoch: 51, cost 629.32 s
2023-03-16 06:35:55,606 44k INFO Train Epoch: 52 [4%]
2023-03-16 06:35:55,606 44k INFO Losses: [2.397761344909668, 2.269282817840576, 9.763917922973633, 18.647907257080078, 0.8863999247550964], step: 44000, lr: 9.936448812621091e-05
2023-03-16 06:35:59,632 44k INFO Saving model and optimizer state at iteration 52 to ./logs\44k\G_44000.pth
2023-03-16 06:36:04,160 44k INFO Saving model and optimizer state at iteration 52 to ./logs\44k\D_44000.pth
2023-03-16 06:38:28,206 44k INFO Train Epoch: 52 [28%]
2023-03-16 06:38:28,206 44k INFO Losses: [2.682070732116699, 2.198436737060547, 7.9274396896362305, 20.39393424987793, 0.8276645541191101], step: 44200, lr: 9.936448812621091e-05
2023-03-16 06:40:49,110 44k INFO Train Epoch: 52 [51%]
2023-03-16 06:40:49,110 44k INFO Losses: [2.3466713428497314, 2.4330508708953857, 9.012943267822266, 16.17315101623535, 1.3476622104644775], step: 44400, lr: 9.936448812621091e-05
2023-03-16 06:43:10,227 44k INFO Train Epoch: 52 [74%]
2023-03-16 06:43:10,228 44k INFO Losses: [2.619353771209717, 1.9924864768981934, 8.462141990661621, 16.185848236083984, 1.1850770711898804], step: 44600, lr: 9.936448812621091e-05
2023-03-16 06:45:31,133 44k INFO Train Epoch: 52 [97%]
2023-03-16 06:45:31,134 44k INFO Losses: [2.439828634262085, 2.4035587310791016, 7.989241123199463, 17.326374053955078, 1.109092116355896], step: 44800, lr: 9.936448812621091e-05
2023-03-16 06:45:35,056 44k INFO Saving model and optimizer state at iteration 52 to ./logs\44k\G_44800.pth
2023-03-16 06:45:39,195 44k INFO Saving model and optimizer state at iteration 52 to ./logs\44k\D_44800.pth
2023-03-16 06:45:58,667 44k INFO ====> Epoch: 52, cost 642.25 s
2023-03-16 06:48:15,295 44k INFO Train Epoch: 53 [20%]
2023-03-16 06:48:15,296 44k INFO Losses: [2.696899890899658, 2.0240426063537598, 5.840864658355713, 16.013195037841797, 1.2153449058532715], step: 45000, lr: 9.935206756519513e-05
2023-03-16 06:50:36,064 44k INFO Train Epoch: 53 [44%]
2023-03-16 06:50:36,065 44k INFO Losses: [2.2250165939331055, 2.35406494140625, 12.060111045837402, 18.51128387451172, 1.2481907606124878], step: 45200, lr: 9.935206756519513e-05
2023-03-16 06:52:57,012 44k INFO Train Epoch: 53 [67%]
2023-03-16 06:52:57,013 44k INFO Losses: [2.699676275253296, 1.9176621437072754, 5.404199123382568, 15.337042808532715, 1.2105075120925903], step: 45400, lr: 9.935206756519513e-05
2023-03-16 06:55:17,719 44k INFO Train Epoch: 53 [90%]
2023-03-16 06:55:17,720 44k INFO Losses: [2.4584832191467285, 2.039030075073242, 9.008186340332031, 17.330533981323242, 1.0713635683059692], step: 45600, lr: 9.935206756519513e-05
2023-03-16 06:55:21,716 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_45600.pth
2023-03-16 06:55:25,690 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_45600.pth
2023-03-16 06:56:28,696 44k INFO ====> Epoch: 53, cost 630.03 s
2023-03-16 06:58:01,179 44k INFO Train Epoch: 54 [13%]
2023-03-16 06:58:01,179 44k INFO Losses: [2.2067925930023193, 2.4350855350494385, 11.188362121582031, 19.532346725463867, 1.3560559749603271], step: 45800, lr: 9.933964855674948e-05
2023-03-16 07:00:21,774 44k INFO Train Epoch: 54 [36%]
2023-03-16 07:00:21,775 44k INFO Losses: [2.3538448810577393, 2.5210201740264893, 9.532809257507324, 18.806915283203125, 1.1005650758743286], step: 46000, lr: 9.933964855674948e-05
2023-03-16 07:02:42,268 44k INFO Train Epoch: 54 [60%]
2023-03-16 07:02:42,268 44k INFO Losses: [2.5852019786834717, 2.212178945541382, 10.958706855773926, 20.915082931518555, 1.2333284616470337], step: 46200, lr: 9.933964855674948e-05
2023-03-16 07:05:02,839 44k INFO Train Epoch: 54 [83%]
2023-03-16 07:05:02,839 44k INFO Losses: [2.601923704147339, 2.2138912677764893, 8.14380168914795, 18.514530181884766, 1.304003357887268], step: 46400, lr: 9.933964855674948e-05
2023-03-16 07:05:06,766 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\G_46400.pth
2023-03-16 07:05:11,935 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\D_46400.pth
2023-03-16 07:06:58,304 44k INFO ====> Epoch: 54, cost 629.61 s
2023-03-16 07:07:47,224 44k INFO Train Epoch: 55 [6%]
2023-03-16 07:07:47,224 44k INFO Losses: [2.7612130641937256, 2.4328389167785645, 7.637242794036865, 17.32210350036621, 0.713375449180603], step: 46600, lr: 9.932723110067987e-05
2023-03-16 07:10:07,953 44k INFO Train Epoch: 55 [29%]
2023-03-16 07:10:07,953 44k INFO Losses: [2.473050355911255, 2.249452829360962, 9.372511863708496, 19.453962326049805, 0.9277483224868774], step: 46800, lr: 9.932723110067987e-05
2023-03-16 07:12:28,252 44k INFO Train Epoch: 55 [52%]
2023-03-16 07:12:28,253 44k INFO Losses: [2.7903544902801514, 2.000307559967041, 7.535732269287109, 18.231796264648438, 1.1922674179077148], step: 47000, lr: 9.932723110067987e-05
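Each "Losses:" record above carries five loss values plus the global step and current lr in a fixed layout, so the whole run can be recovered from the log with one regular expression. A rough parsing helper, assuming only the line shape visible above (the five loss terms are kept in logged order, whatever each one measures):

    import re

    # Sketch: extract (step, lr, losses) from lines shaped like this log's output.
    LOSS_RE = re.compile(r"Losses: \[([^\]]*)\], step: (\d+), lr: ([\d.e+-]+)")

    def parse_losses(lines):
        for line in lines:
            m = LOSS_RE.search(line)
            if m:
                losses = [float(x) for x in m.group(1).split(",")]
                yield int(m.group(2)), float(m.group(3)), losses

    # Usage, e.g.: steps, lrs, losses = zip(*parse_losses(open("train.log")))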
2023-03-16 07:14:49,110 44k INFO Train Epoch: 55 [76%]
2023-03-16 07:14:49,110 44k INFO Losses: [2.4949023723602295, 2.383155584335327, 11.36546802520752, 21.100799560546875, 1.2444294691085815], step: 47200, lr: 9.932723110067987e-05
2023-03-16 07:14:53,044 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_47200.pth
2023-03-16 07:14:57,840 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_47200.pth
2023-03-16 07:17:21,883 44k INFO Train Epoch: 55 [99%]
2023-03-16 07:17:21,884 44k INFO Losses: [2.5146989822387695, 2.12974214553833, 8.76436996459961, 16.414226531982422, 1.2755227088928223], step: 47400, lr: 9.932723110067987e-05
2023-03-16 07:17:28,536 44k INFO ====> Epoch: 55, cost 630.23 s
2023-03-16 07:19:54,247 44k INFO Train Epoch: 56 [22%]
2023-03-16 07:19:54,248 44k INFO Losses: [2.320920467376709, 2.228030204772949, 11.90869426727295, 17.102617263793945, 1.2600750923156738], step: 47600, lr: 9.931481519679228e-05
2023-03-16 07:22:27,328 44k INFO Train Epoch: 56 [45%]
2023-03-16 07:22:27,329 44k INFO Losses: [2.4383645057678223, 2.190150737762451, 9.609818458557129, 16.372297286987305, 1.013803243637085], step: 47800, lr: 9.931481519679228e-05
2023-03-16 07:25:15,981 44k INFO Train Epoch: 56 [68%]
2023-03-16 07:25:15,982 44k INFO Losses: [2.45510196685791, 2.1060922145843506, 9.734111785888672, 16.90951919555664, 1.3889333009719849], step: 48000, lr: 9.931481519679228e-05
2023-03-16 07:25:19,965 44k INFO Saving model and optimizer state at iteration 56 to ./logs\44k\G_48000.pth
2023-03-16 07:25:24,637 44k INFO Saving model and optimizer state at iteration 56 to ./logs\44k\D_48000.pth
2023-03-16 07:27:48,931 44k INFO Train Epoch: 56 [92%]
2023-03-16 07:27:48,932 44k INFO Losses: [2.561230182647705, 2.5961544513702393, 9.106178283691406, 18.254220962524414, 1.232792615890503], step: 48200, lr: 9.931481519679228e-05
2023-03-16 07:28:39,326 44k INFO ====> Epoch: 56, cost 670.79 s
2023-03-16 07:30:21,859 44k INFO Train Epoch: 57 [15%]
2023-03-16 07:30:21,860 44k INFO Losses: [2.3587746620178223, 2.2116260528564453, 10.81920337677002, 17.38863182067871, 1.4266881942749023], step: 48400, lr: 9.930240084489267e-05
2023-03-16 07:32:42,842 44k INFO Train Epoch: 57 [38%]
2023-03-16 07:32:42,842 44k INFO Losses: [2.6499264240264893, 1.8325670957565308, 8.235687255859375, 15.932356834411621, 1.5478261709213257], step: 48600, lr: 9.930240084489267e-05
2023-03-16 07:35:03,703 44k INFO Train Epoch: 57 [61%]
2023-03-16 07:35:03,703 44k INFO Losses: [2.5422556400299072, 2.2435131072998047, 7.538516044616699, 17.7718448638916, 1.151097059249878], step: 48800, lr: 9.930240084489267e-05
2023-03-16 07:35:07,505 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_48800.pth
2023-03-16 07:35:12,273 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_48800.pth
2023-03-16 07:37:35,948 44k INFO Train Epoch: 57 [84%]
2023-03-16 07:37:35,948 44k INFO Losses: [2.566084861755371, 2.3084657192230225, 9.407684326171875, 18.17554473876953, 0.9218719601631165], step: 49000, lr: 9.930240084489267e-05
2023-03-16 07:39:09,622 44k INFO ====> Epoch: 57, cost 630.30 s
2023-03-16 07:40:08,202 44k INFO Train Epoch: 58 [8%]
2023-03-16 07:40:08,203 44k INFO Losses: [2.6008541584014893, 2.3184099197387695, 9.406684875488281, 15.399543762207031, 0.9126559495925903], step: 49200, lr: 9.928998804478705e-05
2023-03-16 07:42:29,136 44k INFO Train Epoch: 58 [31%]
2023-03-16 07:42:29,136 44k INFO Losses: [2.6161224842071533, 2.4613590240478516, 9.866619110107422, 20.135009765625, 1.1020022630691528], step: 49400, lr: 9.928998804478705e-05
2023-03-16 07:44:50,358 44k INFO Train Epoch: 58 [54%]
2023-03-16 07:44:50,358 44k INFO Losses: [2.6284987926483154, 2.173779249191284, 8.21438217163086, 17.837556838989258, 1.416119933128357], step: 49600, lr: 9.928998804478705e-05
2023-03-16 07:44:54,246 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\G_49600.pth
2023-03-16 07:44:58,667 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\D_49600.pth
2023-03-16 07:47:22,508 44k INFO Train Epoch: 58 [77%]
2023-03-16 07:47:22,508 44k INFO Losses: [2.3962385654449463, 2.0347583293914795, 15.122808456420898, 20.136667251586914, 1.0054899454116821], step: 49800, lr: 9.928998804478705e-05
2023-03-16 07:49:39,794 44k INFO ====> Epoch: 58, cost 630.17 s
2023-03-16 07:49:54,915 44k INFO Train Epoch: 59 [0%]
2023-03-16 07:49:54,915 44k INFO Losses: [2.220061779022217, 2.204134941101074, 14.170027732849121, 21.735240936279297, 1.630082130432129], step: 50000, lr: 9.927757679628145e-05
2023-03-16 07:52:15,863 44k INFO Train Epoch: 59 [24%]
2023-03-16 07:52:15,864 44k INFO Losses: [2.769867181777954, 1.9544068574905396, 8.309430122375488, 15.919169425964355, 1.3742636442184448], step: 50200, lr: 9.927757679628145e-05
2023-03-16 07:54:36,510 44k INFO Train Epoch: 59 [47%]
2023-03-16 07:54:36,510 44k INFO Losses: [2.5146689414978027, 2.3604607582092285, 8.834993362426758, 16.37042999267578, 1.209707260131836], step: 50400, lr: 9.927757679628145e-05
2023-03-16 07:54:40,400 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\G_50400.pth
2023-03-16 07:54:45,036 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\D_50400.pth
2023-03-16 07:57:08,940 44k INFO Train Epoch: 59 [70%]
2023-03-16 07:57:08,941 44k INFO Losses: [2.498619318008423, 2.0596930980682373, 10.153973579406738, 19.86995506286621, 1.1384954452514648], step: 50600, lr: 9.927757679628145e-05
2023-03-16 07:59:29,684 44k INFO Train Epoch: 59 [93%]
2023-03-16 07:59:29,685 44k INFO Losses: [2.7901766300201416, 2.1262145042419434, 3.208369016647339, 15.18115520477295, 0.981759250164032], step: 50800, lr: 9.927757679628145e-05
2023-03-16 08:00:10,133 44k INFO ====> Epoch: 59, cost 630.34 s
2023-03-16 08:02:02,363 44k INFO Train Epoch: 60 [16%]
2023-03-16 08:02:02,364 44k INFO Losses: [2.4366295337677, 2.2576870918273926, 12.618935585021973, 17.408967971801758, 1.1560430526733398], step: 51000, lr: 9.926516709918191e-05
2023-03-16 08:04:23,045 44k INFO Train Epoch: 60 [40%]
2023-03-16 08:04:23,046 44k INFO Losses: [2.712378978729248, 1.9154318571090698, 5.44594669342041, 11.923492431640625, 1.1761837005615234], step: 51200, lr: 9.926516709918191e-05
2023-03-16 08:04:26,906 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\G_51200.pth
2023-03-16 08:04:31,635 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\D_51200.pth
2023-03-16 08:06:55,425 44k INFO Train Epoch: 60 [63%]
2023-03-16 08:06:55,425 44k INFO Losses: [2.2890450954437256, 2.342878818511963, 13.045965194702148, 18.969261169433594, 0.9563868045806885], step: 51400, lr: 9.926516709918191e-05
2023-03-16 08:09:16,269 44k INFO Train Epoch: 60 [86%]
2023-03-16 08:09:16,270 44k INFO Losses: [2.618273973464966, 2.4321084022521973, 7.109796524047852, 13.319160461425781, 0.9697628021240234], step: 51600, lr: 9.926516709918191e-05
2023-03-16 08:10:40,139 44k INFO ====> Epoch: 60, cost 630.01 s
2023-03-16 08:11:49,520 44k INFO Train Epoch: 61 [9%]
2023-03-16 08:11:49,520 44k INFO Losses: [2.43042254447937, 2.2529478073120117, 10.050570487976074, 16.610795974731445, 0.6957406997680664], step: 51800, lr: 9.92527589532945e-05
2023-03-16 08:14:10,354 44k INFO Train Epoch: 61 [32%]
2023-03-16 08:14:10,354 44k INFO Losses: [2.5791261196136475, 2.348515272140503, 11.054726600646973, 16.7816219329834, 0.9067707061767578], step: 52000, lr: 9.92527589532945e-05
2023-03-16 08:14:14,352 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_52000.pth
2023-03-16 08:14:19,382 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_52000.pth
2023-03-16 08:16:43,237 44k INFO Train Epoch: 61 [56%]
2023-03-16 08:16:43,238 44k INFO Losses: [2.8262038230895996, 2.1183481216430664, 5.732144832611084, 15.833592414855957, 1.247839093208313], step: 52200, lr: 9.92527589532945e-05
2023-03-16 08:19:04,180 44k INFO Train Epoch: 61 [79%]
2023-03-16 08:19:04,180 44k INFO Losses: [2.645087242126465, 2.0515694618225098, 9.299736976623535, 19.798463821411133, 1.1330314874649048], step: 52400, lr: 9.92527589532945e-05
2023-03-16 08:21:11,647 44k INFO ====> Epoch: 61, cost 631.51 s
2023-03-16 08:21:36,495 44k INFO Train Epoch: 62 [2%]
2023-03-16 08:21:36,496 44k INFO Losses: [2.8362936973571777, 1.7566347122192383, 6.922454357147217, 11.872345924377441, 1.1291018724441528], step: 52600, lr: 9.924035235842533e-05
2023-03-16 08:23:57,450 44k INFO Train Epoch: 62 [25%]
2023-03-16 08:23:57,451 44k INFO Losses: [2.4827873706817627, 2.4121973514556885, 7.417458534240723, 15.273887634277344, 1.1477025747299194], step: 52800, lr: 9.924035235842533e-05
2023-03-16 08:24:01,308 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\G_52800.pth
2023-03-16 08:24:05,537 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\D_52800.pth
2023-03-16 08:26:29,174 44k INFO Train Epoch: 62 [48%]
2023-03-16 08:26:29,174 44k INFO Losses: [2.497466564178467, 2.501901626586914, 6.402492523193359, 15.238126754760742, 1.2123016119003296], step: 53000, lr: 9.924035235842533e-05
2023-03-16 08:28:50,046 44k INFO Train Epoch: 62 [72%]
2023-03-16 08:28:50,046 44k INFO Losses: [2.2440381050109863, 3.073730945587158, 4.811328411102295, 9.904010772705078, 0.9763422012329102], step: 53200, lr: 9.924035235842533e-05
2023-03-16 08:31:10,726 44k INFO Train Epoch: 62 [95%]
2023-03-16 08:31:10,727 44k INFO Losses: [2.209709882736206, 2.573591470718384, 10.295619010925293, 17.65602684020996, 1.238034725189209], step: 53400, lr: 9.924035235842533e-05
2023-03-16 08:31:41,460 44k INFO ====> Epoch: 62, cost 629.81 s
2023-03-16 08:33:43,314 44k INFO Train Epoch: 63 [18%]
2023-03-16 08:33:43,314 44k INFO Losses: [2.4653706550598145, 2.064711093902588, 12.348708152770996, 20.169734954833984, 1.5925910472869873], step: 53600, lr: 9.922794731438052e-05
2023-03-16 08:33:47,128 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\G_53600.pth
2023-03-16 08:33:52,120 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\D_53600.pth
2023-03-16 08:36:16,115 44k INFO Train Epoch: 63 [41%]
2023-03-16 08:36:16,115 44k INFO Losses: [2.3788504600524902, 2.363140821456909, 13.894274711608887, 19.808799743652344, 1.4343682527542114], step: 53800, lr: 9.922794731438052e-05
2023-03-16 08:38:36,962 44k INFO Train Epoch: 63 [65%]
2023-03-16 08:38:36,962 44k INFO Losses: [2.6207938194274902, 2.3101930618286133, 12.603818893432617, 19.395017623901367, 0.9671733975410461], step: 54000, lr: 9.922794731438052e-05
2023-03-16 08:40:58,249 44k INFO Train Epoch: 63 [88%]
2023-03-16 08:40:58,249 44k INFO Losses: [2.4665403366088867, 2.1145427227020264, 8.660067558288574, 16.610074996948242, 0.7179177403450012], step: 54200, lr: 9.922794731438052e-05
2023-03-16 08:42:12,393 44k INFO ====> Epoch: 63, cost 630.93 s
2023-03-16 08:43:30,683 44k INFO Train Epoch: 64 [11%]
2023-03-16 08:43:30,683 44k INFO Losses: [2.3757472038269043, 2.168349266052246, 11.957794189453125, 18.474966049194336, 0.8594464063644409], step: 54400, lr: 9.921554382096622e-05
2023-03-16 08:43:34,506 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\G_54400.pth
2023-03-16 08:43:39,120 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\D_54400.pth
2023-03-16 08:46:03,001 44k INFO Train Epoch: 64 [34%]
2023-03-16 08:46:03,001 44k INFO Losses: [2.584357738494873, 2.094322919845581, 6.975753307342529, 14.474079132080078, 1.1133513450622559], step: 54600, lr: 9.921554382096622e-05
2023-03-16 08:48:23,794 44k INFO Train Epoch: 64 [57%]
2023-03-16 08:48:23,794 44k INFO Losses: [2.351510524749756, 2.2656517028808594, 8.886553764343262, 17.131067276000977, 1.0484391450881958], step: 54800, lr: 9.921554382096622e-05
2023-03-16 08:50:44,682 44k INFO Train Epoch: 64 [81%]
2023-03-16 08:50:44,682 44k INFO Losses: [2.7745914459228516, 1.9082585573196411, 8.244362831115723, 15.38274097442627, 0.6868692636489868], step: 55000, lr: 9.921554382096622e-05
2023-03-16 08:52:42,334 44k INFO ====> Epoch: 64, cost 629.94 s
2023-03-16 08:53:17,003 44k INFO Train Epoch: 65 [4%]
2023-03-16 08:53:17,003 44k INFO Losses: [2.8256711959838867, 1.8988685607910156, 7.414281368255615, 13.969552040100098, 1.3296607732772827], step: 55200, lr: 9.92031418779886e-05
2023-03-16 08:53:20,882 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_55200.pth
2023-03-16 08:53:24,655 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_55200.pth
2023-03-16 08:55:48,479 44k INFO Train Epoch: 65 [27%]
2023-03-16 08:55:48,480 44k INFO Losses: [2.3851163387298584, 2.505263566970825, 11.302007675170898, 18.481613159179688, 1.0186898708343506], step: 55400, lr: 9.92031418779886e-05
2023-03-16 08:58:09,175 44k INFO Train Epoch: 65 [50%]
2023-03-16 08:58:09,175 44k INFO Losses: [2.3398609161376953, 2.4234278202056885, 9.798212051391602, 18.275558471679688, 1.0918577909469604], step: 55600, lr: 9.92031418779886e-05
2023-03-16 09:00:30,072 44k INFO Train Epoch: 65 [73%]
2023-03-16 09:00:30,072 44k INFO Losses: [2.524658679962158, 2.2522621154785156, 11.39342975616455, 19.48005485534668, 0.9947400093078613], step: 55800, lr: 9.92031418779886e-05
2023-03-16 09:02:50,768 44k INFO Train Epoch: 65 [97%]
2023-03-16 09:02:50,768 44k INFO Losses: [2.468557834625244, 2.271503448486328, 8.731128692626953, 16.500598907470703, 1.3278340101242065], step: 56000, lr: 9.92031418779886e-05
2023-03-16 09:02:54,740 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_56000.pth
2023-03-16 09:02:58,739 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_56000.pth
2023-03-16 09:03:22,414 44k INFO ====> Epoch: 65, cost 640.08 s
2023-03-16 09:05:34,239 44k INFO Train Epoch: 66 [20%]
2023-03-16 09:05:34,239 44k INFO Losses: [2.6555261611938477, 2.1583638191223145, 7.791229724884033, 16.836580276489258, 1.379538655281067], step: 56200, lr: 9.919074148525384e-05
2023-03-16 09:07:55,527 44k INFO Train Epoch: 66 [43%]
2023-03-16 09:07:55,528 44k INFO Losses: [2.740678071975708, 2.1616125106811523, 7.732511520385742, 15.182045936584473, 1.0464972257614136], step: 56400, lr: 9.919074148525384e-05
2023-03-16 09:10:16,467 44k INFO Train Epoch: 66 [66%]
2023-03-16 09:10:16,468 44k INFO Losses: [2.4676690101623535, 2.3896846771240234, 9.781915664672852, 17.631288528442383, 1.1943113803863525], step: 56600, lr: 9.919074148525384e-05
2023-03-16 09:12:37,367 44k INFO Train Epoch: 66 [89%]
2023-03-16 09:12:37,367 44k INFO Losses: [2.834840774536133, 2.024867057800293, 9.138463020324707, 18.73454475402832, 1.3200788497924805], step: 56800, lr: 9.919074148525384e-05
2023-03-16 09:12:41,228 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\G_56800.pth
2023-03-16 09:12:46,038 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\D_56800.pth
2023-03-16 09:13:53,165 44k INFO ====> Epoch: 66, cost 630.75 s
2023-03-16 09:15:21,288 44k INFO Train Epoch: 67 [13%]
2023-03-16 09:15:21,288 44k INFO Losses: [2.640637159347534, 2.4239397048950195, 9.547590255737305, 16.106595993041992, 1.090582013130188], step: 57000, lr: 9.917834264256819e-05
2023-03-16 09:17:42,200 44k INFO Train Epoch: 67 [36%]
2023-03-16 09:17:42,200 44k INFO Losses: [2.765418529510498, 1.7469984292984009, 8.31545639038086, 13.20520305633545, 1.2439275979995728], step: 57200, lr: 9.917834264256819e-05
2023-03-16 09:20:02,920 44k INFO Train Epoch: 67 [59%]
2023-03-16 09:20:02,920 44k INFO Losses: [2.655794143676758, 2.1720283031463623, 9.762409210205078, 19.217693328857422, 1.0600781440734863], step: 57400, lr: 9.917834264256819e-05
2023-03-16 09:22:23,775 44k INFO Train Epoch: 67 [82%]
2023-03-16 09:22:23,775 44k INFO Losses: [2.813000202178955, 2.093132972717285, 9.89596939086914, 16.195690155029297, 1.2041079998016357], step: 57600, lr: 9.917834264256819e-05
2023-03-16 09:22:27,642 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\G_57600.pth
2023-03-16 09:22:32,377 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\D_57600.pth
2023-03-16 09:24:23,087 44k INFO ====> Epoch: 67, cost 629.92 s
2023-03-16 09:25:07,615 44k INFO Train Epoch: 68 [5%]
2023-03-16 09:25:07,615 44k INFO Losses: [2.406370162963867, 2.118962049484253, 11.490840911865234, 20.158477783203125, 1.0509792566299438], step: 57800, lr: 9.916594534973787e-05
2023-03-16 09:27:28,556 44k INFO Train Epoch: 68 [29%]
2023-03-16 09:27:28,557 44k INFO Losses: [2.6468117237091064, 2.185140371322632, 9.144197463989258, 16.037782669067383, 1.2005385160446167], step: 58000, lr: 9.916594534973787e-05
2023-03-16 09:29:49,219 44k INFO Train Epoch: 68 [52%]
2023-03-16 09:29:49,220 44k INFO Losses: [2.679431915283203, 2.2036385536193848, 9.967596054077148, 19.20917320251465, 1.3829982280731201], step: 58200, lr: 9.916594534973787e-05
2023-03-16 09:32:10,182 44k INFO Train Epoch: 68 [75%]
2023-03-16 09:32:10,183 44k INFO Losses: [2.389014482498169, 2.3690638542175293, 7.038926601409912, 12.892892837524414, 1.0902518033981323], step: 58400, lr: 9.916594534973787e-05
2023-03-16 09:32:14,024 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\G_58400.pth
2023-03-16 09:32:19,100 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\D_58400.pth
2023-03-16 09:34:42,748 44k INFO Train Epoch: 68 [98%]
2023-03-16 09:34:42,748 44k INFO Losses: [2.4814953804016113, 2.1594533920288086, 7.94528865814209, 19.334041595458984, 1.093526840209961], step: 58600, lr: 9.916594534973787e-05
2023-03-16 09:34:53,687 44k INFO ====> Epoch: 68, cost 630.60 s
2023-03-16 09:37:16,329 44k INFO Train Epoch: 69 [21%]
2023-03-16 09:37:16,329 44k INFO Losses: [2.6422762870788574, 2.323188304901123, 9.792487144470215, 19.215627670288086, 1.313987135887146], step: 58800, lr: 9.915354960656915e-05
2023-03-16 09:39:37,021 44k INFO Train Epoch: 69 [45%]
2023-03-16 09:39:37,021 44k INFO Losses: [2.730833053588867, 1.9816628694534302, 6.799505710601807, 14.421127319335938, 1.213290810585022], step: 59000, lr: 9.915354960656915e-05
2023-03-16 09:41:58,003 44k INFO Train Epoch: 69 [68%]
2023-03-16 09:41:58,003 44k INFO Losses: [2.8221323490142822, 2.0237669944763184, 7.835437774658203, 20.036151885986328, 0.9687215685844421], step: 59200, lr: 9.915354960656915e-05
2023-03-16 09:42:01,870 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_59200.pth
2023-03-16 09:42:06,115 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_59200.pth
2023-03-16 09:44:29,831 44k INFO Train Epoch: 69 [91%]
2023-03-16 09:44:29,831 44k INFO Losses: [2.6930346488952637, 2.602715253829956, 11.227035522460938, 19.467952728271484, 1.2266347408294678], step: 59400, lr: 9.915354960656915e-05
2023-03-16 09:45:24,372 44k INFO ====> Epoch: 69, cost 630.68 s
2023-03-16 09:47:02,499 44k INFO Train Epoch: 70 [14%]
2023-03-16 09:47:02,499 44k INFO Losses: [2.694779634475708, 2.3850975036621094, 9.738021850585938, 19.924657821655273, 1.367608904838562], step: 59600, lr: 9.914115541286833e-05
2023-03-16 09:49:23,415 44k INFO Train Epoch: 70 [37%]
2023-03-16 09:49:23,415 44k INFO Losses: [2.348580837249756, 2.417052745819092, 11.195652961730957, 18.998327255249023, 1.2226345539093018], step: 59800, lr: 9.914115541286833e-05
2023-03-16 09:51:44,304 44k INFO Train Epoch: 70 [61%]
2023-03-16 09:51:44,304 44k INFO Losses: [2.6893177032470703, 2.3425042629241943, 7.35524845123291, 18.334604263305664, 1.2404136657714844], step: 60000, lr: 9.914115541286833e-05
2023-03-16 09:51:48,139 44k INFO Saving model and optimizer state at iteration 70 to ./logs\44k\G_60000.pth
2023-03-16 09:51:53,064 44k INFO Saving model and optimizer state at iteration 70 to ./logs\44k\D_60000.pth
2023-03-16 09:54:17,061 44k INFO Train Epoch: 70 [84%]
2023-03-16 09:54:17,062 44k INFO Losses: [2.806945323944092, 1.805245280265808, 5.714762210845947, 10.534035682678223, 1.2388805150985718], step: 60200, lr: 9.914115541286833e-05
2023-03-16 09:55:54,944 44k INFO ====> Epoch: 70, cost 630.57 s
2023-03-16 09:56:49,392 44k INFO Train Epoch: 71 [7%]
2023-03-16 09:56:49,393 44k INFO Losses: [2.6596875190734863, 1.9748210906982422, 9.140596389770508, 14.65374755859375, 0.9848356246948242], step: 60400, lr: 9.912876276844171e-05
2023-03-16 09:59:10,346 44k INFO Train Epoch: 71 [30%]
2023-03-16 09:59:10,347 44k INFO Losses: [2.5789170265197754, 2.343886375427246, 8.427496910095215, 16.708181381225586, 0.9761437773704529], step: 60600, lr: 9.912876276844171e-05
2023-03-16 10:01:31,023 44k INFO Train Epoch: 71 [53%]
2023-03-16 10:01:31,023 44k INFO Losses: [2.600478172302246, 2.119933843612671, 7.853041648864746, 15.39247989654541, 1.2531832456588745], step: 60800, lr: 9.912876276844171e-05
2023-03-16 10:01:35,024 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_60800.pth
2023-03-16 10:01:39,005 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_60800.pth
2023-03-16 10:04:03,456 44k INFO Train Epoch: 71 [77%]
2023-03-16 10:04:03,456 44k INFO Losses: [2.4239768981933594, 2.195873975753784, 10.145777702331543, 19.20042610168457, 1.3092432022094727], step: 61000, lr: 9.912876276844171e-05
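Every checkpoint step above is a multiple of 800 (G_33600 ... G_60800), and each save writes a G_/D_ pair a few seconds apart, so the save cadence can be read straight off the filenames. A small helper under that assumption (the 800-step interval is inferred from the names in this log, not from any config shown here):

    # Sketch: predict where the next G_/D_ checkpoint pair will land, assuming
    # the 800-step save interval inferred from the filenames in this log.
    SAVE_INTERVAL = 800

    def next_checkpoint_step(step: int, interval: int = SAVE_INTERVAL) -> int:
        return -(-step // interval) * interval  # ceil to the next multiple

    assert next_checkpoint_step(60800) == 60800
    assert next_checkpoint_step(60801) == 61600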
2023-03-16 10:06:24,416 44k INFO Train Epoch: 71 [100%]
2023-03-16 10:06:24,417 44k INFO Losses: [2.2761623859405518, 2.572124719619751, 10.01583480834961, 19.543323516845703, 1.0851298570632935], step: 61200, lr: 9.912876276844171e-05
2023-03-16 10:06:25,598 44k INFO ====> Epoch: 71, cost 630.65 s
2023-03-16 10:08:57,196 44k INFO Train Epoch: 72 [23%]
2023-03-16 10:08:57,196 44k INFO Losses: [2.661186695098877, 2.2388620376586914, 9.288114547729492, 17.421724319458008, 1.059014916419983], step: 61400, lr: 9.911637167309565e-05
2023-03-16 10:11:17,935 44k INFO Train Epoch: 72 [46%]
2023-03-16 10:11:17,936 44k INFO Losses: [2.5225906372070312, 2.1047894954681396, 12.301124572753906, 17.078475952148438, 1.0504279136657715], step: 61600, lr: 9.911637167309565e-05
2023-03-16 10:11:21,780 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\G_61600.pth
2023-03-16 10:11:26,267 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\D_61600.pth
2023-03-16 10:13:50,188 44k INFO Train Epoch: 72 [69%]
2023-03-16 10:13:50,189 44k INFO Losses: [2.6382248401641846, 1.9710413217544556, 8.450164794921875, 17.128093719482422, 1.1446681022644043], step: 61800, lr: 9.911637167309565e-05
2023-03-16 10:16:11,076 44k INFO Train Epoch: 72 [93%]
2023-03-16 10:16:11,076 44k INFO Losses: [2.7095258235931396, 2.0971291065216064, 6.560122966766357, 13.856376647949219, 1.3054347038269043], step: 62000, lr: 9.911637167309565e-05
2023-03-16 10:16:55,832 44k INFO ====> Epoch: 72, cost 630.23 s
2023-03-16 10:18:43,815 44k INFO Train Epoch: 73 [16%]
2023-03-16 10:18:43,816 44k INFO Losses: [2.5409834384918213, 2.171119451522827, 11.51504135131836, 18.735864639282227, 1.018770694732666], step: 62200, lr: 9.910398212663652e-05
2023-03-16 10:21:04,636 44k INFO Train Epoch: 73 [39%]
2023-03-16 10:21:04,636 44k INFO Losses: [2.750413179397583, 1.759555697441101, 11.146120071411133, 18.67780303955078, 1.3131365776062012], step: 62400, lr: 9.910398212663652e-05
2023-03-16 10:21:08,486 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_62400.pth
2023-03-16 10:21:13,251 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_62400.pth
2023-03-16 10:23:37,272 44k INFO Train Epoch: 73 [62%]
2023-03-16 10:23:37,272 44k INFO Losses: [2.016291856765747, 2.640414237976074, 10.436443328857422, 17.37982940673828, 1.1400269269943237], step: 62600, lr: 9.910398212663652e-05
2023-03-16 10:25:58,329 44k INFO Train Epoch: 73 [85%]
2023-03-16 10:25:58,330 44k INFO Losses: [2.335686445236206, 2.360227108001709, 11.521604537963867, 18.983108520507812, 0.7645275592803955], step: 62800, lr: 9.910398212663652e-05
2023-03-16 10:27:26,629 44k INFO ====> Epoch: 73, cost 630.80 s
2023-03-16 10:28:36,305 44k INFO Train Epoch: 74 [9%]
2023-03-16 10:28:36,305 44k INFO Losses: [2.45324444770813, 2.174006938934326, 7.5263237953186035, 16.859113693237305, 0.9667986631393433], step: 63000, lr: 9.909159412887068e-05
2023-03-16 10:30:57,271 44k INFO Train Epoch: 74 [32%]
2023-03-16 10:30:57,272 44k INFO Losses: [2.390319347381592, 2.388112783432007, 11.604615211486816, 18.760295867919922, 1.3347492218017578], step: 63200, lr: 9.909159412887068e-05
2023-03-16 10:31:01,160 44k INFO Saving model and optimizer state at iteration 74 to ./logs\44k\G_63200.pth
2023-03-16 10:31:05,804 44k INFO Saving model and optimizer state at iteration 74 to ./logs\44k\D_63200.pth
2023-03-16 10:33:29,909 44k INFO Train Epoch: 74 [55%]
2023-03-16 10:33:29,909 44k INFO Losses: [2.6714751720428467, 1.9055752754211426, 5.818533897399902, 11.990514755249023, 0.916670024394989], step: 63400, lr: 9.909159412887068e-05
2023-03-16 10:35:50,945 44k INFO Train Epoch: 74 [78%]
2023-03-16 10:35:50,946 44k INFO Losses: [2.367539644241333, 2.277756929397583, 11.877970695495605, 17.312002182006836, 0.8547907471656799], step: 63600, lr: 9.909159412887068e-05
2023-03-16 10:38:02,811 44k INFO ====> Epoch: 74, cost 636.18 s
2023-03-16 10:38:23,486 44k INFO Train Epoch: 75 [1%]
2023-03-16 10:38:23,487 44k INFO Losses: [2.328869581222534, 2.351182460784912, 14.422402381896973, 19.440670013427734, 1.38053297996521], step: 63800, lr: 9.907920767960457e-05
2023-03-16 10:40:44,552 44k INFO Train Epoch: 75 [25%]
2023-03-16 10:40:44,552 44k INFO Losses: [2.3881642818450928, 2.206350803375244, 8.753963470458984, 20.304931640625, 1.5186760425567627], step: 64000, lr: 9.907920767960457e-05
2023-03-16 10:40:48,473 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\G_64000.pth
2023-03-16 10:40:53,433 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\D_64000.pth
2023-03-16 10:43:17,191 44k INFO Train Epoch: 75 [48%]
2023-03-16 10:43:17,191 44k INFO Losses: [2.463136672973633, 2.1963863372802734, 8.812617301940918, 19.227642059326172, 1.0107823610305786], step: 64200, lr: 9.907920767960457e-05
2023-03-16 10:45:38,253 44k INFO Train Epoch: 75 [71%]
2023-03-16 10:45:38,254 44k INFO Losses: [2.698432445526123, 2.0327117443084717, 10.723394393920898, 18.746061325073242, 1.0142163038253784], step: 64400, lr: 9.907920767960457e-05
2023-03-16 10:47:59,115 44k INFO Train Epoch: 75 [94%]
2023-03-16 10:47:59,115 44k INFO Losses: [2.480285882949829, 2.299367666244507, 9.459745407104492, 19.359920501708984, 1.2508845329284668], step: 64600, lr: 9.907920767960457e-05
2023-03-16 10:48:33,971 44k INFO ====> Epoch: 75, cost 631.16 s
2023-03-16 10:50:31,864 44k INFO Train Epoch: 76 [17%]
2023-03-16 10:50:31,864 44k INFO Losses: [2.43098521232605, 2.3118276596069336, 11.150856971740723, 17.995481491088867, 0.8780744671821594], step: 64800, lr: 9.906682277864462e-05
2023-03-16 10:50:35,744 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_64800.pth
2023-03-16 10:50:40,912 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_64800.pth
2023-03-16 10:53:04,856 44k INFO Train Epoch: 76 [41%]
2023-03-16 10:53:04,857 44k INFO Losses: [2.7696619033813477, 2.0564799308776855, 7.811900615692139, 15.814391136169434, 1.0703811645507812], step: 65000, lr: 9.906682277864462e-05
2023-03-16 10:55:25,795 44k INFO Train Epoch: 76 [64%]
2023-03-16 10:55:25,795 44k INFO Losses: [2.5875580310821533, 1.881138801574707, 9.24630355834961, 16.353607177734375, 1.0891456604003906], step: 65200, lr: 9.906682277864462e-05
2023-03-16 10:57:46,731 44k INFO Train Epoch: 76 [87%]
2023-03-16 10:57:46,731 44k INFO Losses: [2.561007499694824, 2.077425003051758, 7.042902946472168, 16.527910232543945, 0.9754928350448608], step: 65400, lr: 9.906682277864462e-05
2023-03-16 10:59:05,077 44k INFO ====> Epoch: 76, cost 631.11 s
2023-03-16 11:00:20,167 44k INFO Train Epoch: 77 [10%]
2023-03-16 11:00:20,168 44k INFO Losses: [2.528118848800659, 2.127331018447876, 8.690193176269531, 16.262258529663086, 1.4545307159423828], step: 65600, lr: 9.905443942579728e-05
2023-03-16 11:00:24,000 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\G_65600.pth
2023-03-16 11:00:28,814 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\D_65600.pth
2023-03-16 11:02:52,657 44k INFO Train Epoch: 77 [33%]
2023-03-16 11:02:52,657 44k INFO Losses: [2.098203182220459, 2.9470953941345215, 5.638371467590332, 14.445476531982422, 1.0409257411956787], step: 65800, lr: 9.905443942579728e-05
2023-03-16 11:05:13,373 44k INFO Train Epoch: 77 [57%]
2023-03-16 11:05:13,374 44k INFO Losses: [2.502119302749634, 2.2887442111968994, 7.832170486450195, 15.630765914916992, 1.168419361114502], step: 66000, lr: 9.905443942579728e-05
2023-03-16 11:07:34,494 44k INFO Train Epoch: 77 [80%]
2023-03-16 11:07:34,495 44k INFO Losses: [2.3963236808776855, 2.3246161937713623, 11.005682945251465, 20.51742172241211, 1.123476505279541], step: 66200, lr: 9.905443942579728e-05
2023-03-16 11:09:36,544 44k INFO ====> Epoch: 77, cost 631.47 s
2023-03-16 11:10:07,150 44k INFO Train Epoch: 78 [3%]
2023-03-16 11:10:07,151 44k INFO Losses: [2.7060089111328125, 1.907375454902649, 10.341023445129395, 18.67043113708496, 1.1860156059265137], step: 66400, lr: 9.904205762086905e-05
2023-03-16 11:10:10,976 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\G_66400.pth
2023-03-16 11:10:15,995 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\D_66400.pth
2023-03-16 11:12:40,402 44k INFO Train Epoch: 78 [26%]
2023-03-16 11:12:40,402 44k INFO Losses: [2.808610200881958, 1.7679601907730103, 7.863913536071777, 13.87572193145752, 1.0200798511505127], step: 66600, lr: 9.904205762086905e-05
2023-03-16 11:15:01,099 44k INFO Train Epoch: 78 [49%]
2023-03-16 11:15:01,100 44k INFO Losses: [2.3522632122039795, 2.712413787841797, 9.669404983520508, 16.108646392822266, 1.0459953546524048], step: 66800, lr: 9.904205762086905e-05
2023-03-16 11:17:22,338 44k INFO Train Epoch: 78 [73%]
2023-03-16 11:17:22,338 44k INFO Losses: [2.425893545150757, 2.1365456581115723, 5.555082321166992, 14.159151077270508, 1.3170448541641235], step: 67000, lr: 9.904205762086905e-05
2023-03-16 11:19:43,151 44k INFO Train Epoch: 78 [96%]
2023-03-16 11:19:43,151 44k INFO Losses: [2.5964128971099854, 1.9527451992034912, 9.409751892089844, 15.959331512451172, 1.0492357015609741], step: 67200, lr: 9.904205762086905e-05
2023-03-16 11:19:47,091 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\G_67200.pth
2023-03-16 11:19:51,818 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\D_67200.pth
2023-03-16 11:20:19,965 44k INFO ====> Epoch: 78, cost 643.42 s
2023-03-16 11:22:27,798 44k INFO Train Epoch: 79 [19%]
2023-03-16 11:22:27,799 44k INFO Losses: [2.3496241569519043, 2.1868739128112793, 9.337618827819824, 18.084306716918945, 0.9031025767326355], step: 67400, lr: 9.902967736366644e-05
2023-03-16 11:24:48,664 44k INFO Train Epoch: 79 [42%]
2023-03-16 11:24:48,664 44k INFO Losses: [2.5033674240112305, 2.3287694454193115, 10.2467622756958, 16.318584442138672, 1.2941663265228271], step: 67600, lr: 9.902967736366644e-05
2023-03-16 11:27:09,755 44k INFO Train Epoch: 79 [65%]
2023-03-16 11:27:09,755 44k INFO Losses: [2.646320104598999, 2.0067977905273438, 5.670422077178955, 15.566702842712402, 1.1926511526107788], step: 67800, lr: 9.902967736366644e-05
2023-03-16 11:29:31,292 44k INFO Train Epoch: 79 [89%]
2023-03-16 11:29:31,293 44k INFO Losses: [2.3877170085906982, 2.483243703842163, 9.996084213256836, 17.505613327026367, 1.0403801202774048], step: 68000, lr: 9.902967736366644e-05
2023-03-16 11:29:35,201 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\G_68000.pth
2023-03-16 11:29:39,797 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\D_68000.pth
2023-03-16 11:30:51,282 44k INFO ====> Epoch: 79, cost 631.32 s
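The epoch summaries also give a throughput estimate: step 50000 opens epoch 59 and step 61200 closes epoch 71, i.e. roughly 860 optimizer steps per epoch, and each epoch costs about 630 s, so a step takes about 0.73 s. A back-of-the-envelope ETA helper built only on those two observations (both read off this log, so the pace is specific to this run and hardware):

    # Sketch: wall-clock estimate from the pace observed in this log
    # (~860 steps/epoch, ~630 s/epoch => ~0.73 s/step; both inferred above).
    SECONDS_PER_STEP = 630.0 / 860.0

    def eta_seconds(current_step: int, target_step: int) -> float:
        return (target_step - current_step) * SECONDS_PER_STEP

    # Cross-check: the log reaches step 61200 at 10:06 and step 80000 at 13:56,
    # about 3 h 50 min; eta_seconds(61200, 80000) predicts ~13770 s, the same.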
2023-03-16 11:32:15,363 44k INFO Train Epoch: 80 [12%]
2023-03-16 11:32:15,364 44k INFO Losses: [2.5708940029144287, 2.122936487197876, 8.571866989135742, 15.578019142150879, 1.262223482131958], step: 68200, lr: 9.901729865399597e-05
2023-03-16 11:34:36,383 44k INFO Train Epoch: 80 [35%]
2023-03-16 11:34:36,383 44k INFO Losses: [2.513117790222168, 1.9965897798538208, 9.863302230834961, 19.381498336791992, 0.9744350910186768], step: 68400, lr: 9.901729865399597e-05
2023-03-16 11:36:57,359 44k INFO Train Epoch: 80 [58%]
2023-03-16 11:36:57,359 44k INFO Losses: [2.6129510402679443, 1.9019958972930908, 9.516365051269531, 15.819975852966309, 0.9575231671333313], step: 68600, lr: 9.901729865399597e-05
2023-03-16 11:39:18,507 44k INFO Train Epoch: 80 [81%]
2023-03-16 11:39:18,507 44k INFO Losses: [2.3998072147369385, 2.3087053298950195, 9.775774955749512, 17.199798583984375, 1.1019949913024902], step: 68800, lr: 9.901729865399597e-05
2023-03-16 11:39:22,395 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\G_68800.pth
2023-03-16 11:39:27,051 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\D_68800.pth
2023-03-16 11:41:22,222 44k INFO ====> Epoch: 80, cost 630.94 s
2023-03-16 11:42:02,647 44k INFO Train Epoch: 81 [5%]
2023-03-16 11:42:02,648 44k INFO Losses: [2.4106202125549316, 2.0806024074554443, 9.67097282409668, 20.09068489074707, 1.0563468933105469], step: 69000, lr: 9.900492149166423e-05
2023-03-16 11:44:23,714 44k INFO Train Epoch: 81 [28%]
2023-03-16 11:44:23,714 44k INFO Losses: [2.728109836578369, 1.8292467594146729, 1.714731216430664, 9.029252052307129, 0.9704844355583191], step: 69200, lr: 9.900492149166423e-05
2023-03-16 11:46:44,683 44k INFO Train Epoch: 81 [51%]
2023-03-16 11:46:44,683 44k INFO Losses: [2.6591358184814453, 2.046415328979492, 8.600796699523926, 16.02041244506836, 0.8038113713264465], step: 69400, lr: 9.900492149166423e-05
2023-03-16 11:49:05,821 44k INFO Train Epoch: 81 [74%]
2023-03-16 11:49:05,821 44k INFO Losses: [2.7891268730163574, 2.1025776863098145, 11.797796249389648, 18.87082290649414, 1.3287744522094727], step: 69600, lr: 9.900492149166423e-05
2023-03-16 11:49:09,706 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\G_69600.pth
2023-03-16 11:49:14,557 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\D_69600.pth
2023-03-16 11:51:38,615 44k INFO Train Epoch: 81 [97%]
2023-03-16 11:51:38,616 44k INFO Losses: [2.370436906814575, 2.4459147453308105, 9.739459991455078, 17.31490707397461, 0.9803723096847534], step: 69800, lr: 9.900492149166423e-05
2023-03-16 11:51:53,770 44k INFO ====> Epoch: 81, cost 631.55 s
2023-03-16 11:54:11,410 44k INFO Train Epoch: 82 [21%]
2023-03-16 11:54:11,410 44k INFO Losses: [2.6090104579925537, 2.1435546875, 4.926941871643066, 13.324342727661133, 1.2063342332839966], step: 70000, lr: 9.899254587647776e-05
2023-03-16 11:56:32,847 44k INFO Train Epoch: 82 [44%]
2023-03-16 11:56:32,848 44k INFO Losses: [2.5418076515197754, 2.5245461463928223, 6.935207843780518, 16.87236785888672, 1.247336983680725], step: 70200, lr: 9.899254587647776e-05
2023-03-16 11:58:54,062 44k INFO Train Epoch: 82 [67%]
2023-03-16 11:58:54,063 44k INFO Losses: [2.4289369583129883, 2.2466468811035156, 10.362469673156738, 18.48775291442871, 1.0447865724563599], step: 70400, lr: 9.899254587647776e-05
2023-03-16 11:58:57,910 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\G_70400.pth
2023-03-16 11:59:02,616 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\D_70400.pth
2023-03-16 12:01:26,552 44k INFO Train Epoch: 82 [90%]
2023-03-16 12:01:26,553 44k INFO Losses: [2.598551034927368, 2.215742588043213, 8.440386772155762, 18.72451400756836, 1.2146787643432617], step: 70600, lr: 9.899254587647776e-05
2023-03-16 12:02:25,384 44k INFO ====> Epoch: 82, cost 631.61 s
2023-03-16 12:03:59,473 44k INFO Train Epoch: 83 [13%]
2023-03-16 12:03:59,473 44k INFO Losses: [2.6791679859161377, 2.030775785446167, 9.56004810333252, 15.826242446899414, 1.1185932159423828], step: 70800, lr: 9.89801718082432e-05
2023-03-16 12:06:20,597 44k INFO Train Epoch: 83 [37%]
2023-03-16 12:06:20,597 44k INFO Losses: [2.618070125579834, 2.111739158630371, 9.036067962646484, 21.458377838134766, 1.2428091764450073], step: 71000, lr: 9.89801718082432e-05
2023-03-16 12:08:41,591 44k INFO Train Epoch: 83 [60%]
2023-03-16 12:08:41,591 44k INFO Losses: [2.6112546920776367, 2.324599027633667, 11.940216064453125, 17.900840759277344, 1.188509464263916], step: 71200, lr: 9.89801718082432e-05
2023-03-16 12:08:45,519 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\G_71200.pth
2023-03-16 12:08:50,417 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\D_71200.pth
2023-03-16 12:11:14,507 44k INFO Train Epoch: 83 [83%]
2023-03-16 12:11:14,507 44k INFO Losses: [2.468087673187256, 2.33237361907959, 11.66840934753418, 18.57940101623535, 1.0694384574890137], step: 71400, lr: 9.89801718082432e-05
2023-03-16 12:12:56,879 44k INFO ====> Epoch: 83, cost 631.49 s
2023-03-16 12:13:47,366 44k INFO Train Epoch: 84 [6%]
2023-03-16 12:13:47,366 44k INFO Losses: [2.8269283771514893, 2.71539568901062, 12.534187316894531, 16.75394058227539, 1.1995112895965576], step: 71600, lr: 9.896779928676716e-05
2023-03-16 12:16:08,522 44k INFO Train Epoch: 84 [29%]
2023-03-16 12:16:08,523 44k INFO Losses: [2.205845832824707, 2.6781418323516846, 8.345598220825195, 11.201181411743164, 1.0917948484420776], step: 71800, lr: 9.896779928676716e-05
2023-03-16 12:18:29,523 44k INFO Train Epoch: 84 [53%]
2023-03-16 12:18:29,523 44k INFO Losses: [2.3933863639831543, 2.198794364929199, 9.235222816467285, 17.552640914916992, 1.1761647462844849], step: 72000, lr: 9.896779928676716e-05
2023-03-16 12:18:33,392 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\G_72000.pth
2023-03-16 12:18:38,225 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\D_72000.pth
2023-03-16 12:21:02,401 44k INFO Train Epoch: 84 [76%]
2023-03-16 12:21:02,401 44k INFO Losses: [2.289839506149292, 2.4751429557800293, 9.444184303283691, 15.22669792175293, 1.2231769561767578], step: 72200, lr: 9.896779928676716e-05
2023-03-16 12:23:23,593 44k INFO Train Epoch: 84 [99%]
2023-03-16 12:23:23,593 44k INFO Losses: [2.6320250034332275, 2.0906355381011963, 8.974244117736816, 18.274322509765625, 0.9915524125099182], step: 72400, lr: 9.896779928676716e-05
2023-03-16 12:23:29,757 44k INFO ====> Epoch: 84, cost 632.88 s
2023-03-16 12:25:57,433 44k INFO Train Epoch: 85 [22%]
2023-03-16 12:25:57,434 44k INFO Losses: [2.6007401943206787, 1.9056458473205566, 13.223793983459473, 19.986730575561523, 0.8499215245246887], step: 72600, lr: 9.895542831185631e-05
2023-03-16 12:28:18,477 44k INFO Train Epoch: 85 [45%]
2023-03-16 12:28:18,478 44k INFO Losses: [2.620249032974243, 2.205742359161377, 9.210787773132324, 19.279251098632812, 0.8055150508880615], step: 72800, lr: 9.895542831185631e-05
2023-03-16 12:28:22,340 44k INFO Saving model and optimizer state at iteration 85 to ./logs\44k\G_72800.pth
2023-03-16 12:28:27,351 44k INFO Saving model and optimizer state at iteration 85 to ./logs\44k\D_72800.pth
2023-03-16 12:30:51,729 44k INFO Train Epoch: 85 [69%]
2023-03-16 12:30:51,730 44k INFO Losses: [2.397538185119629, 2.448817491531372, 8.125476837158203, 13.70448112487793, 1.6543577909469604], step: 73000, lr: 9.895542831185631e-05
2023-03-16 12:33:12,793 44k INFO Train Epoch: 85 [92%]
2023-03-16 12:33:12,794 44k INFO Losses: [2.5378577709198, 2.0049054622650146, 11.525632858276367, 18.241024017333984, 0.8985089063644409], step: 73200, lr: 9.895542831185631e-05
2023-03-16 12:34:01,924 44k INFO ====> Epoch: 85, cost 632.17 s
2023-03-16 12:35:46,073 44k INFO Train Epoch: 86 [15%]
2023-03-16 12:35:46,073 44k INFO Losses: [2.3988447189331055, 2.2321853637695312, 6.330034255981445, 15.563575744628906, 0.9577658772468567], step: 73400, lr: 9.894305888331732e-05
2023-03-16 12:38:07,199 44k INFO Train Epoch: 86 [38%]
2023-03-16 12:38:07,199 44k INFO Losses: [2.254021644592285, 2.467561721801758, 10.066779136657715, 16.333839416503906, 1.040700078010559], step: 73600, lr: 9.894305888331732e-05
2023-03-16 12:38:11,065 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_73600.pth
2023-03-16 12:38:15,915 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_73600.pth
2023-03-16 12:40:39,962 44k INFO Train Epoch: 86 [61%]
2023-03-16 12:40:39,962 44k INFO Losses: [2.5008432865142822, 2.4589431285858154, 8.376287460327148, 18.594253540039062, 1.0213313102722168], step: 73800, lr: 9.894305888331732e-05
2023-03-16 12:43:01,340 44k INFO Train Epoch: 86 [85%]
2023-03-16 12:43:01,340 44k INFO Losses: [2.5177624225616455, 2.3911359310150146, 8.605533599853516, 14.55888843536377, 1.241442322731018], step: 74000, lr: 9.894305888331732e-05
2023-03-16 12:44:33,873 44k INFO ====> Epoch: 86, cost 631.95 s
2023-03-16 12:45:34,193 44k INFO Train Epoch: 87 [8%]
2023-03-16 12:45:34,194 44k INFO Losses: [2.6323742866516113, 2.219949245452881, 10.419412612915039, 17.41073989868164, 0.9916703104972839], step: 74200, lr: 9.89306910009569e-05
2023-03-16 12:47:55,448 44k INFO Train Epoch: 87 [31%]
2023-03-16 12:47:55,448 44k INFO Losses: [2.5240468978881836, 2.4294562339782715, 7.663541793823242, 17.416179656982422, 1.230865478515625], step: 74400, lr: 9.89306910009569e-05
2023-03-16 12:47:59,313 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\G_74400.pth
2023-03-16 12:48:03,814 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\D_74400.pth
2023-03-16 12:50:27,786 44k INFO Train Epoch: 87 [54%]
2023-03-16 12:50:27,786 44k INFO Losses: [2.531316041946411, 2.0301570892333984, 8.255770683288574, 19.39872932434082, 1.6149516105651855], step: 74600, lr: 9.89306910009569e-05
2023-03-16 12:52:49,510 44k INFO Train Epoch: 87 [77%]
2023-03-16 12:52:49,511 44k INFO Losses: [2.6005825996398926, 2.141029119491577, 12.114543914794922, 18.922136306762695, 1.0905050039291382], step: 74800, lr: 9.89306910009569e-05
2023-03-16 12:55:05,788 44k INFO ====> Epoch: 87, cost 631.92 s
2023-03-16 12:55:22,386 44k INFO Train Epoch: 88 [1%]
2023-03-16 12:55:22,386 44k INFO Losses: [2.4883546829223633, 2.1823275089263916, 8.666866302490234, 19.02614974975586, 0.6792629957199097], step: 75000, lr: 9.891832466458178e-05
2023-03-16 12:57:43,853 44k INFO Train Epoch: 88 [24%]
17.672975540161133, 1.2991670370101929], step: 75200, lr: 9.891832466458178e-05 2023-03-16 12:57:47,725 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\G_75200.pth 2023-03-16 12:57:52,432 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\D_75200.pth 2023-03-16 13:00:16,507 44k INFO Train Epoch: 88 [47%] 2023-03-16 13:00:16,507 44k INFO Losses: [2.6520276069641113, 2.1629114151000977, 9.87975025177002, 17.40753173828125, 0.9158090949058533], step: 75400, lr: 9.891832466458178e-05 2023-03-16 13:02:37,760 44k INFO Train Epoch: 88 [70%] 2023-03-16 13:02:37,760 44k INFO Losses: [2.517360210418701, 2.639831304550171, 6.538320541381836, 14.653404235839844, 0.9006597995758057], step: 75600, lr: 9.891832466458178e-05 2023-03-16 13:04:58,901 44k INFO Train Epoch: 88 [94%] 2023-03-16 13:04:58,902 44k INFO Losses: [2.471611976623535, 2.406607151031494, 11.107675552368164, 20.721532821655273, 1.3857065439224243], step: 75800, lr: 9.891832466458178e-05 2023-03-16 13:05:38,018 44k INFO ====> Epoch: 88, cost 632.23 s 2023-03-16 13:07:32,091 44k INFO Train Epoch: 89 [17%] 2023-03-16 13:07:32,092 44k INFO Losses: [2.33335018157959, 2.365901231765747, 11.547170639038086, 19.640823364257812, 0.541162371635437], step: 76000, lr: 9.89059598739987e-05 2023-03-16 13:07:36,019 44k INFO Saving model and optimizer state at iteration 89 to ./logs\44k\G_76000.pth 2023-03-16 13:07:40,360 44k INFO Saving model and optimizer state at iteration 89 to ./logs\44k\D_76000.pth 2023-03-16 13:10:04,541 44k INFO Train Epoch: 89 [40%] 2023-03-16 13:10:04,541 44k INFO Losses: [2.5085320472717285, 2.2776989936828613, 7.562427997589111, 14.410797119140625, 1.2336584329605103], step: 76200, lr: 9.89059598739987e-05 2023-03-16 13:12:25,745 44k INFO Train Epoch: 89 [63%] 2023-03-16 13:12:25,745 44k INFO Losses: [2.4322195053100586, 2.281015396118164, 12.793940544128418, 20.83452606201172, 1.0057134628295898], step: 76400, lr: 9.89059598739987e-05 2023-03-16 13:14:47,040 44k INFO Train Epoch: 89 [86%] 2023-03-16 13:14:47,040 44k INFO Losses: [2.5770535469055176, 1.920440673828125, 10.106855392456055, 17.937307357788086, 1.3206064701080322], step: 76600, lr: 9.89059598739987e-05 2023-03-16 13:16:09,796 44k INFO ====> Epoch: 89, cost 631.78 s 2023-03-16 13:17:20,072 44k INFO Train Epoch: 90 [10%] 2023-03-16 13:17:20,072 44k INFO Losses: [2.470933198928833, 2.47243595123291, 10.682500839233398, 20.524044036865234, 0.9259001612663269], step: 76800, lr: 9.889359662901445e-05 2023-03-16 13:17:24,003 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\G_76800.pth 2023-03-16 13:17:28,853 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\D_76800.pth 2023-03-16 13:19:53,503 44k INFO Train Epoch: 90 [33%] 2023-03-16 13:19:53,503 44k INFO Losses: [2.6356923580169678, 2.2813329696655273, 11.493227005004883, 19.744157791137695, 0.9562310576438904], step: 77000, lr: 9.889359662901445e-05 2023-03-16 13:22:14,730 44k INFO Train Epoch: 90 [56%] 2023-03-16 13:22:14,731 44k INFO Losses: [2.6504271030426025, 2.0704667568206787, 9.411870956420898, 18.63494300842285, 1.1775931119918823], step: 77200, lr: 9.889359662901445e-05 2023-03-16 13:24:36,271 44k INFO Train Epoch: 90 [79%] 2023-03-16 13:24:36,271 44k INFO Losses: [2.531060218811035, 2.455465316772461, 7.445971488952637, 17.976587295532227, 1.2703571319580078], step: 77400, lr: 9.889359662901445e-05 2023-03-16 13:26:42,783 44k INFO ====> Epoch: 90, cost 632.99 s 2023-03-16 13:27:09,447 44k INFO Train Epoch: 91 [2%] 
2023-03-16 13:27:09,448 44k INFO Losses: [2.370657444000244, 2.370985984802246, 8.91823673248291, 16.825227737426758, 1.135148286819458], step: 77600, lr: 9.888123492943583e-05
2023-03-16 13:27:13,363 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_77600.pth
2023-03-16 13:27:18,095 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_77600.pth
2023-03-16 13:29:42,627 44k INFO Train Epoch: 91 [26%]
2023-03-16 13:29:42,628 44k INFO Losses: [2.560262441635132, 1.992663860321045, 9.205438613891602, 12.16047191619873, 1.4508944749832153], step: 77800, lr: 9.888123492943583e-05
2023-03-16 13:32:03,852 44k INFO Train Epoch: 91 [49%]
2023-03-16 13:32:03,853 44k INFO Losses: [2.1882448196411133, 2.537442684173584, 8.711201667785645, 14.238235473632812, 1.0832173824310303], step: 78000, lr: 9.888123492943583e-05
2023-03-16 13:34:25,322 44k INFO Train Epoch: 91 [72%]
2023-03-16 13:34:25,323 44k INFO Losses: [2.4782209396362305, 2.2556543350219727, 6.548068523406982, 14.961197853088379, 1.0331870317459106], step: 78200, lr: 9.888123492943583e-05
2023-03-16 13:36:46,544 44k INFO Train Epoch: 91 [95%]
2023-03-16 13:36:46,545 44k INFO Losses: [2.507966995239258, 1.8072775602340698, 9.940712928771973, 15.807931900024414, 0.9384745955467224], step: 78400, lr: 9.888123492943583e-05
2023-03-16 13:36:57,165 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_78400.pth
2023-03-16 13:37:02,480 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_78400.pth
2023-03-16 13:37:34,722 44k INFO ====> Epoch: 91, cost 651.94 s
2023-03-16 13:39:38,780 44k INFO Train Epoch: 92 [18%]
2023-03-16 13:39:38,780 44k INFO Losses: [2.56052565574646, 2.509126663208008, 8.63680648803711, 16.028390884399414, 1.2661978006362915], step: 78600, lr: 9.886887477506964e-05
2023-03-16 13:42:00,152 44k INFO Train Epoch: 92 [42%]
2023-03-16 13:42:00,153 44k INFO Losses: [2.5496327877044678, 2.2202260494232178, 9.398844718933105, 16.819528579711914, 1.082554817199707], step: 78800, lr: 9.886887477506964e-05
2023-03-16 13:44:21,428 44k INFO Train Epoch: 92 [65%]
2023-03-16 13:44:21,429 44k INFO Losses: [2.3991949558258057, 2.42226505279541, 10.298372268676758, 19.975242614746094, 1.4890828132629395], step: 79000, lr: 9.886887477506964e-05
2023-03-16 13:46:42,737 44k INFO Train Epoch: 92 [88%]
2023-03-16 13:46:42,738 44k INFO Losses: [2.381392240524292, 2.1223604679107666, 9.497610092163086, 18.502893447875977, 1.3059436082839966], step: 79200, lr: 9.886887477506964e-05
2023-03-16 13:46:46,614 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\G_79200.pth
2023-03-16 13:46:51,361 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\D_79200.pth
2023-03-16 13:48:08,428 44k INFO ====> Epoch: 92, cost 633.71 s
2023-03-16 13:49:28,439 44k INFO Train Epoch: 93 [11%]
2023-03-16 13:49:28,439 44k INFO Losses: [2.53861665725708, 2.2054591178894043, 10.039389610290527, 15.588826179504395, 1.1830437183380127], step: 79400, lr: 9.885651616572276e-05
2023-03-16 13:51:49,830 44k INFO Train Epoch: 93 [34%]
2023-03-16 13:51:49,830 44k INFO Losses: [2.4078855514526367, 2.3380227088928223, 5.588150978088379, 12.025394439697266, 1.2929489612579346], step: 79600, lr: 9.885651616572276e-05
2023-03-16 13:54:10,908 44k INFO Train Epoch: 93 [58%]
2023-03-16 13:54:10,909 44k INFO Losses: [2.410670280456543, 2.4752628803253174, 11.473843574523926, 18.006458282470703, 1.1560356616973877], step: 79800, lr: 9.885651616572276e-05
2023-03-16 13:56:32,291 44k INFO Train Epoch: 93 [81%]
2023-03-16 13:56:32,291 44k INFO Losses: [2.268266439437866, 2.1862688064575195, 11.312393188476562, 17.607711791992188, 1.140894889831543], step: 80000, lr: 9.885651616572276e-05
2023-03-16 13:56:36,177 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\G_80000.pth
2023-03-16 13:56:40,206 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\D_80000.pth
2023-03-16 13:58:39,936 44k INFO ====> Epoch: 93, cost 631.51 s
2023-03-16 13:59:16,251 44k INFO Train Epoch: 94 [4%]
2023-03-16 13:59:16,251 44k INFO Losses: [2.5984315872192383, 2.091257095336914, 9.800554275512695, 18.895736694335938, 1.1753515005111694], step: 80200, lr: 9.884415910120204e-05
2023-03-16 14:01:37,599 44k INFO Train Epoch: 94 [27%]
2023-03-16 14:01:37,599 44k INFO Losses: [2.506798267364502, 2.200167417526245, 7.769087791442871, 14.414732933044434, 0.9315809011459351], step: 80400, lr: 9.884415910120204e-05
2023-03-16 14:03:58,773 44k INFO Train Epoch: 94 [50%]
2023-03-16 14:03:58,773 44k INFO Losses: [2.6824634075164795, 1.9195497035980225, 7.439716815948486, 17.37777328491211, 1.1364415884017944], step: 80600, lr: 9.884415910120204e-05
2023-03-16 14:06:20,202 44k INFO Train Epoch: 94 [74%]
2023-03-16 14:06:20,203 44k INFO Losses: [2.2964589595794678, 2.3565351963043213, 10.244485855102539, 18.71091651916504, 1.2876509428024292], step: 80800, lr: 9.884415910120204e-05
2023-03-16 14:06:24,118 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\G_80800.pth
2023-03-16 14:06:28,698 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\D_80800.pth
2023-03-16 14:08:53,340 44k INFO Train Epoch: 94 [97%]
2023-03-16 14:08:53,341 44k INFO Losses: [2.332794189453125, 2.1997945308685303, 10.992225646972656, 20.061508178710938, 1.4539374113082886], step: 81000, lr: 9.884415910120204e-05
2023-03-16 14:09:12,850 44k INFO ====> Epoch: 94, cost 632.91 s
2023-03-16 14:11:26,953 44k INFO Train Epoch: 95 [20%]
2023-03-16 14:11:26,954 44k INFO Losses: [2.2868316173553467, 2.350821018218994, 13.393484115600586, 21.281200408935547, 1.177393913269043], step: 81200, lr: 9.883180358131438e-05
2023-03-16 14:13:48,571 44k INFO Train Epoch: 95 [43%]
2023-03-16 14:13:48,571 44k INFO Losses: [2.652494192123413, 2.0590221881866455, 9.344569206237793, 15.785776138305664, 1.1189639568328857], step: 81400, lr: 9.883180358131438e-05
2023-03-16 14:16:11,008 44k INFO Train Epoch: 95 [66%]
2023-03-16 14:16:11,008 44k INFO Losses: [2.518139600753784, 2.243513822555542, 9.34472370147705, 16.712980270385742, 1.1516889333724976], step: 81600, lr: 9.883180358131438e-05
2023-03-16 14:16:14,966 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\G_81600.pth
2023-03-16 14:16:19,846 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\D_81600.pth
2023-03-16 14:18:44,820 44k INFO Train Epoch: 95 [90%]
2023-03-16 14:18:44,820 44k INFO Losses: [2.553950309753418, 2.216747760772705, 10.91457462310791, 19.541292190551758, 1.345664620399475], step: 81800, lr: 9.883180358131438e-05
2023-03-16 14:19:47,992 44k INFO ====> Epoch: 95, cost 635.14 s
2023-03-16 14:21:18,097 44k INFO Train Epoch: 96 [13%]
2023-03-16 14:21:18,097 44k INFO Losses: [2.8282673358917236, 1.812172532081604, 4.776924133300781, 15.003165245056152, 1.3904212713241577], step: 82000, lr: 9.881944960586671e-05
2023-03-16 14:23:39,819 44k INFO Train Epoch: 96 [36%]
2023-03-16 14:23:39,820 44k INFO Losses: [2.334263563156128, 2.462691068649292, 11.100128173828125, 19.5662899017334, 0.8105576634407043], step: 82200, lr: 9.881944960586671e-05
2023-03-16 14:26:01,523 44k INFO Train Epoch: 96 [59%]
2023-03-16 14:26:01,523 44k INFO Losses: [2.595090866088867, 2.006326198577881, 11.860282897949219, 17.519306182861328, 1.1287246942520142], step: 82400, lr: 9.881944960586671e-05
2023-03-16 14:26:05,390 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\G_82400.pth
2023-03-16 14:26:10,226 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\D_82400.pth
2023-03-16 14:28:35,207 44k INFO Train Epoch: 96 [82%]
2023-03-16 14:28:35,207 44k INFO Losses: [2.443026542663574, 2.1865973472595215, 12.520249366760254, 20.138261795043945, 1.032894253730774], step: 82600, lr: 9.881944960586671e-05
2023-03-16 14:30:22,300 44k INFO ====> Epoch: 96, cost 634.31 s
2023-03-16 14:31:08,580 44k INFO Train Epoch: 97 [6%]
2023-03-16 14:31:08,580 44k INFO Losses: [2.5466251373291016, 2.0842771530151367, 11.774823188781738, 19.68462562561035, 1.2202320098876953], step: 82800, lr: 9.880709717466598e-05
2023-03-16 14:33:30,218 44k INFO Train Epoch: 97 [29%]
2023-03-16 14:33:30,218 44k INFO Losses: [2.644742488861084, 1.932471752166748, 6.441492557525635, 14.261273384094238, 0.9867590069770813], step: 83000, lr: 9.880709717466598e-05
2023-03-16 14:35:51,519 44k INFO Train Epoch: 97 [52%]
2023-03-16 14:35:51,520 44k INFO Losses: [2.300273895263672, 1.988600492477417, 13.510762214660645, 18.47292137145996, 1.233951449394226], step: 83200, lr: 9.880709717466598e-05
2023-03-16 14:35:55,463 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\G_83200.pth
2023-03-16 14:36:00,017 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\D_83200.pth
2023-03-16 14:38:25,101 44k INFO Train Epoch: 97 [75%]
2023-03-16 14:38:25,102 44k INFO Losses: [2.4350175857543945, 2.339785099029541, 7.884140968322754, 13.929859161376953, 0.7632540464401245], step: 83400, lr: 9.880709717466598e-05
2023-03-16 14:40:46,918 44k INFO Train Epoch: 97 [98%]
2023-03-16 14:40:46,919 44k INFO Losses: [2.355095386505127, 2.1651806831359863, 13.947182655334473, 19.81959342956543, 1.1457955837249756], step: 83600, lr: 9.880709717466598e-05
2023-03-16 14:40:56,475 44k INFO ====> Epoch: 97, cost 634.17 s
2023-03-16 14:43:20,411 44k INFO Train Epoch: 98 [22%]
2023-03-16 14:43:20,411 44k INFO Losses: [2.478445529937744, 2.0523433685302734, 8.264986991882324, 13.21651554107666, 0.9347891807556152], step: 83800, lr: 9.879474628751914e-05
2023-03-16 14:45:42,429 44k INFO Train Epoch: 98 [45%]
2023-03-16 14:45:42,430 44k INFO Losses: [2.449118137359619, 2.1999752521514893, 10.900297164916992, 19.97274398803711, 1.0928928852081299], step: 84000, lr: 9.879474628751914e-05
2023-03-16 14:45:46,365 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_84000.pth
2023-03-16 14:45:51,529 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_84000.pth
2023-03-16 14:48:16,262 44k INFO Train Epoch: 98 [68%]
2023-03-16 14:48:16,263 44k INFO Losses: [2.796704053878784, 1.9731804132461548, 5.245630741119385, 12.65261173248291, 1.323630690574646], step: 84200, lr: 9.879474628751914e-05
2023-03-16 14:50:37,809 44k INFO Train Epoch: 98 [91%]
2023-03-16 14:50:37,809 44k INFO Losses: [2.788247585296631, 2.06575345993042, 7.1351118087768555, 13.369119644165039, 1.2102423906326294], step: 84400, lr: 9.879474628751914e-05
2023-03-16 14:51:31,269 44k INFO ====> Epoch: 98, cost 634.79 s
2023-03-16 14:53:11,562 44k INFO Train Epoch: 99 [14%]
2023-03-16 14:53:11,562 44k INFO Losses: [2.3246405124664307, 2.543771743774414, 10.22067642211914, 17.23099136352539, 1.1793733835220337], step: 84600, lr: 9.87823969442332e-05
2023-03-16 14:55:33,310 44k INFO Train Epoch: 99 [38%]
2023-03-16 14:55:33,311 44k INFO Losses: [2.526538372039795, 2.6370365619659424, 9.150901794433594, 16.160770416259766, 1.4365507364273071], step: 84800, lr: 9.87823969442332e-05
2023-03-16 14:55:37,217 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_84800.pth
2023-03-16 14:55:41,477 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_84800.pth
2023-03-16 14:58:06,282 44k INFO Train Epoch: 99 [61%]
2023-03-16 14:58:06,282 44k INFO Losses: [2.5400073528289795, 1.9756797552108765, 8.992016792297363, 20.6063289642334, 1.1668174266815186], step: 85000, lr: 9.87823969442332e-05
2023-03-16 15:00:28,195 44k INFO Train Epoch: 99 [84%]
2023-03-16 15:00:28,195 44k INFO Losses: [2.6564455032348633, 2.431309223175049, 8.195633888244629, 16.311067581176758, 1.3473916053771973], step: 85200, lr: 9.87823969442332e-05
2023-03-16 15:02:05,364 44k INFO ====> Epoch: 99, cost 634.09 s
2023-03-16 15:03:01,631 44k INFO Train Epoch: 100 [7%]
2023-03-16 15:03:01,632 44k INFO Losses: [2.521860361099243, 2.1987226009368896, 6.303283214569092, 12.92664909362793, 1.049301266670227], step: 85400, lr: 9.877004914461517e-05
2023-03-16 15:05:23,590 44k INFO Train Epoch: 100 [30%]
2023-03-16 15:05:23,591 44k INFO Losses: [2.5421204566955566, 2.3213376998901367, 10.12356948852539, 16.969974517822266, 0.9479173421859741], step: 85600, lr: 9.877004914461517e-05
2023-03-16 15:05:27,459 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\G_85600.pth
2023-03-16 15:05:32,494 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\D_85600.pth
2023-03-16 15:07:56,958 44k INFO Train Epoch: 100 [54%]
2023-03-16 15:07:56,959 44k INFO Losses: [2.560506820678711, 2.2779123783111572, 7.053585529327393, 17.470096588134766, 0.7397072911262512], step: 85800, lr: 9.877004914461517e-05
2023-03-16 15:10:18,815 44k INFO Train Epoch: 100 [77%]
2023-03-16 15:10:18,816 44k INFO Losses: [2.572500705718994, 1.9923728704452515, 7.685388565063477, 10.386280059814453, 1.2073436975479126], step: 86000, lr: 9.877004914461517e-05
2023-03-16 15:12:40,963 44k INFO ====> Epoch: 100, cost 635.60 s
2023-03-16 15:12:53,460 44k INFO Train Epoch: 101 [0%]
2023-03-16 15:12:53,460 44k INFO Losses: [2.335265636444092, 2.414954900741577, 13.034676551818848, 18.4708309173584, 1.2445300817489624], step: 86200, lr: 9.875770288847208e-05
2023-03-16 15:15:15,260 44k INFO Train Epoch: 101 [23%]
2023-03-16 15:15:15,261 44k INFO Losses: [2.4407570362091064, 2.404839277267456, 8.781243324279785, 15.572870254516602, 0.9699307680130005], step: 86400, lr: 9.875770288847208e-05
2023-03-16 15:15:19,162 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_86400.pth
2023-03-16 15:15:23,347 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_86400.pth
2023-03-16 15:17:48,184 44k INFO Train Epoch: 101 [46%]
2023-03-16 15:17:48,184 44k INFO Losses: [2.5239901542663574, 2.063962697982788, 7.593474864959717, 16.75331687927246, 1.1847820281982422], step: 86600, lr: 9.875770288847208e-05
2023-03-16 15:20:10,154 44k INFO Train Epoch: 101 [70%]
2023-03-16 15:20:10,154 44k INFO Losses: [2.407119035720825, 2.3652429580688477, 12.447128295898438, 20.740982055664062, 1.1309082508087158], step: 86800, lr: 9.875770288847208e-05
2023-03-16 15:22:31,985 44k INFO Train Epoch: 101 [93%]
2023-03-16 15:22:31,985 44k INFO Losses: [2.575744152069092, 1.9453648328781128, 9.970337867736816, 21.4005069732666, 1.1480638980865479], step: 87000, lr: 9.875770288847208e-05
2023-03-16 15:23:15,559 44k INFO ====> Epoch: 101, cost 634.60 s
2023-03-16 15:25:05,879 44k INFO Train Epoch: 102 [16%]
2023-03-16 15:25:05,879 44k INFO Losses: [2.619947910308838, 2.2499701976776123, 11.538363456726074, 20.18581771850586, 1.1900607347488403], step: 87200, lr: 9.874535817561101e-05
2023-03-16 15:25:09,761 44k INFO Saving model and optimizer state at iteration 102 to ./logs\44k\G_87200.pth
2023-03-16 15:25:14,880 44k INFO Saving model and optimizer state at iteration 102 to ./logs\44k\D_87200.pth
2023-03-16 15:27:39,665 44k INFO Train Epoch: 102 [39%]
2023-03-16 15:27:39,665 44k INFO Losses: [2.45945405960083, 2.2897281646728516, 12.623381614685059, 19.001632690429688, 1.3109409809112549], step: 87400, lr: 9.874535817561101e-05
2023-03-16 15:30:01,368 44k INFO Train Epoch: 102 [62%]
2023-03-16 15:30:01,368 44k INFO Losses: [2.4822685718536377, 2.11114501953125, 9.224200248718262, 17.800710678100586, 1.345191240310669], step: 87600, lr: 9.874535817561101e-05
2023-03-16 15:32:23,284 44k INFO Train Epoch: 102 [86%]
2023-03-16 15:32:23,285 44k INFO Losses: [2.3915112018585205, 2.23075008392334, 11.346435546875, 20.125709533691406, 0.9983417987823486], step: 87800, lr: 9.874535817561101e-05
2023-03-16 15:33:50,637 44k INFO ====> Epoch: 102, cost 635.08 s
2023-03-16 15:34:56,812 44k INFO Train Epoch: 103 [9%]
2023-03-16 15:34:56,813 44k INFO Losses: [2.8579885959625244, 1.9904065132141113, 8.398605346679688, 15.745253562927246, 1.2231950759887695], step: 88000, lr: 9.873301500583906e-05
2023-03-16 15:35:00,695 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\G_88000.pth
2023-03-16 15:35:05,582 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\D_88000.pth
2023-03-16 15:37:30,590 44k INFO Train Epoch: 103 [32%]
2023-03-16 15:37:30,590 44k INFO Losses: [2.6528215408325195, 2.1534948348999023, 7.265420913696289, 14.643321990966797, 0.918796956539154], step: 88200, lr: 9.873301500583906e-05
2023-03-16 15:39:53,048 44k INFO Train Epoch: 103 [55%]
2023-03-16 15:39:53,049 44k INFO Losses: [2.882535457611084, 2.173745632171631, 6.334557056427002, 17.70293426513672, 1.1672972440719604], step: 88400, lr: 9.873301500583906e-05
2023-03-16 15:42:14,965 44k INFO Train Epoch: 103 [78%]
2023-03-16 15:42:14,966 44k INFO Losses: [2.6294498443603516, 2.3678059577941895, 10.121888160705566, 16.655372619628906, 1.5394238233566284], step: 88600, lr: 9.873301500583906e-05
2023-03-16 15:44:26,281 44k INFO ====> Epoch: 103, cost 635.64 s
2023-03-16 15:44:48,623 44k INFO Train Epoch: 104 [2%]
2023-03-16 15:44:48,623 44k INFO Losses: [2.6877872943878174, 1.775192379951477, 7.146652698516846, 12.57369327545166, 1.0752873420715332], step: 88800, lr: 9.872067337896332e-05
2023-03-16 15:44:52,530 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\G_88800.pth
2023-03-16 15:44:57,365 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\D_88800.pth
2023-03-16 15:47:21,969 44k INFO Train Epoch: 104 [25%]
2023-03-16 15:47:21,970 44k INFO Losses: [2.321521043777466, 2.234433650970459, 9.50384521484375, 20.257320404052734, 1.0122231245040894], step: 89000, lr: 9.872067337896332e-05
2023-03-16 15:49:43,079 44k INFO Train Epoch: 104 [48%]
2023-03-16 15:49:43,079 44k INFO Losses: [2.4528799057006836, 2.212467908859253, 8.237741470336914, 16.853500366210938, 1.275702714920044], step: 89200, lr: 9.872067337896332e-05
2023-03-16 15:52:04,608 44k INFO Train Epoch: 104 [71%]
2023-03-16 15:52:04,609 44k INFO Losses: [2.8073737621307373, 1.9939384460449219, 9.708775520324707, 17.973865509033203, 1.2557201385498047], step: 89400, lr: 9.872067337896332e-05
2023-03-16 15:54:25,899 44k INFO Train Epoch: 104 [94%]
2023-03-16 15:54:25,900 44k INFO Losses: [2.550107955932617, 2.247429132461548, 8.698875427246094, 20.433908462524414, 1.2232794761657715], step: 89600, lr: 9.872067337896332e-05
2023-03-16 15:54:29,866 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\G_89600.pth
2023-03-16 15:54:33,969 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\D_89600.pth
2023-03-16 15:55:10,620 44k INFO ====> Epoch: 104, cost 644.34 s
2023-03-16 15:57:10,746 44k INFO Train Epoch: 105 [18%]
2023-03-16 15:57:10,746 44k INFO Losses: [2.554687023162842, 1.985001564025879, 9.41700267791748, 16.409160614013672, 1.1810959577560425], step: 89800, lr: 9.870833329479095e-05
2023-03-16 15:59:32,579 44k INFO Train Epoch: 105 [41%]
2023-03-16 15:59:32,580 44k INFO Losses: [2.5936546325683594, 2.0767674446105957, 12.469273567199707, 19.846824645996094, 1.2267522811889648], step: 90000, lr: 9.870833329479095e-05
2023-03-16 16:01:54,362 44k INFO Train Epoch: 105 [64%]
2023-03-16 16:01:54,362 44k INFO Losses: [2.5386738777160645, 2.139638662338257, 9.052919387817383, 16.323230743408203, 1.062764286994934], step: 90200, lr: 9.870833329479095e-05
2023-03-16 16:04:16,160 44k INFO Train Epoch: 105 [87%]
2023-03-16 16:04:16,160 44k INFO Losses: [2.6863675117492676, 2.0840494632720947, 5.956666469573975, 13.610661506652832, 1.1028852462768555], step: 90400, lr: 9.870833329479095e-05
2023-03-16 16:04:20,120 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_90400.pth
2023-03-16 16:04:24,569 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_90400.pth
2023-03-16 16:05:45,019 44k INFO ====> Epoch: 105, cost 634.40 s
2023-03-16 16:07:01,210 44k INFO Train Epoch: 106 [10%]
2023-03-16 16:07:01,211 44k INFO Losses: [2.576587438583374, 2.166093349456787, 8.56717586517334, 16.485855102539062, 1.1029696464538574], step: 90600, lr: 9.86959947531291e-05
2023-03-16 16:09:23,829 44k INFO Train Epoch: 106 [34%]
2023-03-16 16:09:23,829 44k INFO Losses: [2.3202390670776367, 2.317450761795044, 10.594006538391113, 16.679000854492188, 1.2510251998901367], step: 90800, lr: 9.86959947531291e-05
2023-03-16 16:11:45,577 44k INFO Train Epoch: 106 [57%]
2023-03-16 16:11:45,577 44k INFO Losses: [2.6875596046447754, 1.8649218082427979, 5.010645866394043, 15.417072296142578, 0.9789808988571167], step: 91000, lr: 9.86959947531291e-05
2023-03-16 16:14:07,462 44k INFO Train Epoch: 106 [80%]
2023-03-16 16:14:07,463 44k INFO Losses: [2.3452610969543457, 2.3236351013183594, 13.754258155822754, 20.54546356201172, 1.1996439695358276], step: 91200, lr: 9.86959947531291e-05
2023-03-16 16:14:11,411 44k INFO Saving model and optimizer state at iteration 106 to ./logs\44k\G_91200.pth
2023-03-16 16:14:16,148 44k INFO Saving model and optimizer state at iteration 106 to ./logs\44k\D_91200.pth
2023-03-16 16:16:20,629 44k INFO ====> Epoch: 106, cost 635.61 s
2023-03-16 16:16:52,893 44k INFO Train Epoch: 107 [3%]
2023-03-16 16:16:52,894 44k INFO Losses: [2.668893814086914, 2.348721742630005, 7.894355773925781, 16.03837776184082, 1.250355839729309], step: 91400, lr: 9.868365775378495e-05
2023-03-16 16:19:14,714 44k INFO Train Epoch: 107 [26%]
2023-03-16 16:19:14,715 44k INFO Losses: [2.421255588531494, 2.2812747955322266, 9.601876258850098, 19.224123001098633, 0.8247466087341309], step: 91600, lr: 9.868365775378495e-05
2023-03-16 16:21:36,047 44k INFO Train Epoch: 107 [50%]
2023-03-16 16:21:36,047 44k INFO Losses: [2.551400899887085, 2.327267646789551, 10.042521476745605, 18.718244552612305, 1.4448013305664062], step: 91800, lr: 9.868365775378495e-05
2023-03-16 16:23:57,957 44k INFO Train Epoch: 107 [73%]
2023-03-16 16:23:57,957 44k INFO Losses: [3.0137693881988525, 1.9405112266540527, 8.973455429077148, 16.043292999267578, 0.8912844061851501], step: 92000, lr: 9.868365775378495e-05
2023-03-16 16:24:02,042 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\G_92000.pth
2023-03-16 16:24:07,192 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\D_92000.pth
2023-03-16 16:26:31,685 44k INFO Train Epoch: 107 [96%]
2023-03-16 16:26:31,685 44k INFO Losses: [2.4455933570861816, 2.476611852645874, 9.181824684143066, 17.148929595947266, 1.2493867874145508], step: 92200, lr: 9.868365775378495e-05
2023-03-16 16:26:55,451 44k INFO ====> Epoch: 107, cost 634.82 s
2023-03-16 16:29:05,537 44k INFO Train Epoch: 108 [19%]
2023-03-16 16:29:05,538 44k INFO Losses: [2.5348761081695557, 2.1264963150024414, 7.814300060272217, 12.893929481506348, 1.520424723625183], step: 92400, lr: 9.867132229656573e-05
2023-03-16 16:31:27,169 44k INFO Train Epoch: 108 [42%]
2023-03-16 16:31:27,169 44k INFO Losses: [2.573436737060547, 2.0489981174468994, 12.610435485839844, 19.986909866333008, 1.1201940774917603], step: 92600, lr: 9.867132229656573e-05
2023-03-16 16:33:49,046 44k INFO Train Epoch: 108 [66%]
2023-03-16 16:33:49,046 44k INFO Losses: [2.5761663913726807, 1.9804880619049072, 10.290548324584961, 17.974288940429688, 0.9498992562294006], step: 92800, lr: 9.867132229656573e-05
2023-03-16 16:33:53,111 44k INFO Saving model and optimizer state at iteration 108 to ./logs\44k\G_92800.pth
2023-03-16 16:33:57,918 44k INFO Saving model and optimizer state at iteration 108 to ./logs\44k\D_92800.pth
2023-03-16 16:36:23,764 44k INFO Train Epoch: 108 [89%]
2023-03-16 16:36:23,765 44k INFO Losses: [2.6448559761047363, 1.9009677171707153, 6.390448093414307, 11.25497055053711, 0.8599870204925537], step: 93000, lr: 9.867132229656573e-05
2023-03-16 16:37:31,258 44k INFO ====> Epoch: 108, cost 635.81 s
2023-03-16 16:38:57,347 44k INFO Train Epoch: 109 [12%]
2023-03-16 16:38:57,348 44k INFO Losses: [2.309954881668091, 2.2428078651428223, 9.233926773071289, 16.28044891357422, 1.2958418130874634], step: 93200, lr: 9.865898838127865e-05
2023-03-16 16:41:19,232 44k INFO Train Epoch: 109 [35%]
2023-03-16 16:41:19,233 44k INFO Losses: [2.594108819961548, 1.8657935857772827, 7.019168853759766, 11.850963592529297, 1.3378523588180542], step: 93400, lr: 9.865898838127865e-05
2023-03-16 16:43:41,015 44k INFO Train Epoch: 109 [58%]
2023-03-16 16:43:41,015 44k INFO Losses: [2.484604835510254, 2.312359094619751, 10.544111251831055, 18.384994506835938, 1.0672850608825684], step: 93600, lr: 9.865898838127865e-05
2023-03-16 16:43:45,020 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_93600.pth
2023-03-16 16:43:49,636 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_93600.pth
2023-03-16 16:46:14,512 44k INFO Train Epoch: 109 [82%]
2023-03-16 16:46:14,513 44k INFO Losses: [2.5172414779663086, 2.1137266159057617, 9.120918273925781, 18.406944274902344, 1.1802884340286255], step: 93800, lr: 9.865898838127865e-05
2023-03-16 16:48:05,843 44k INFO ====> Epoch: 109, cost 634.59 s
2023-03-16 16:48:48,049 44k INFO Train Epoch: 110 [5%]
2023-03-16 16:48:48,049 44k INFO Losses: [2.70188307762146, 2.12601375579834, 9.618642807006836, 17.709218978881836, 1.368382215499878], step: 94000, lr: 9.864665600773098e-05
2023-03-16 16:51:09,841 44k INFO Train Epoch: 110 [28%]
2023-03-16 16:51:09,842 44k INFO Losses: [2.618835687637329, 2.338515520095825, 10.604942321777344, 17.520809173583984, 1.1767034530639648], step: 94200, lr: 9.864665600773098e-05
2023-03-16 16:53:31,468 44k INFO Train Epoch: 110 [51%]
2023-03-16 16:53:31,468 44k INFO Losses: [2.4151172637939453, 2.0325262546539307, 14.834542274475098, 19.12325096130371, 0.952680230140686], step: 94400, lr: 9.864665600773098e-05
2023-03-16 16:53:35,429 44k INFO Saving model and optimizer state at iteration 110 to ./logs\44k\G_94400.pth
2023-03-16 16:53:39,839 44k INFO Saving model and optimizer state at iteration 110 to ./logs\44k\D_94400.pth
2023-03-16 16:56:04,799 44k INFO Train Epoch: 110 [74%]
2023-03-16 16:56:04,800 44k INFO Losses: [2.4472875595092773, 2.31091046333313, 12.077939987182617, 19.569372177124023, 1.1318812370300293], step: 94600, lr: 9.864665600773098e-05
2023-03-16 16:58:26,603 44k INFO Train Epoch: 110 [98%]
2023-03-16 16:58:26,603 44k INFO Losses: [2.520528554916382, 2.2191479206085205, 6.517521858215332, 10.79832935333252, 1.0564473867416382], step: 94800, lr: 9.864665600773098e-05
2023-03-16 16:58:40,426 44k INFO ====> Epoch: 110, cost 634.58 s
2023-03-16 17:01:00,334 44k INFO Train Epoch: 111 [21%]
2023-03-16 17:01:00,334 44k INFO Losses: [2.427290916442871, 2.4269020557403564, 7.926499843597412, 17.072181701660156, 1.1601970195770264], step: 95000, lr: 9.863432517573002e-05
2023-03-16 17:03:21,862 44k INFO Train Epoch: 111 [44%]
2023-03-16 17:03:21,862 44k INFO Losses: [2.4779489040374756, 2.301846504211426, 9.572372436523438, 18.628503799438477, 1.0712780952453613], step: 95200, lr: 9.863432517573002e-05
2023-03-16 17:03:25,956 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\G_95200.pth
2023-03-16 17:03:30,373 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\D_95200.pth
2023-03-16 17:05:55,226 44k INFO Train Epoch: 111 [67%]
2023-03-16 17:05:55,227 44k INFO Losses: [2.5630228519439697, 2.2663702964782715, 8.585168838500977, 16.72791862487793, 1.1423683166503906], step: 95400, lr: 9.863432517573002e-05
2023-03-16 17:08:16,987 44k INFO Train Epoch: 111 [90%]
2023-03-16 17:08:16,987 44k INFO Losses: [2.324873924255371, 2.118913173675537, 10.85246753692627, 17.77533531188965, 1.2236676216125488], step: 95600, lr: 9.863432517573002e-05
2023-03-16 17:09:14,755 44k INFO ====> Epoch: 111, cost 634.33 s
2023-03-16 17:10:50,729 44k INFO Train Epoch: 112 [14%]
2023-03-16 17:10:50,729 44k INFO Losses: [2.7309038639068604, 1.9041736125946045, 8.468201637268066, 14.49746036529541, 0.9971700310707092], step: 95800, lr: 9.862199588508305e-05
2023-03-16 17:13:12,526 44k INFO Train Epoch: 112 [37%]
2023-03-16 17:13:12,526 44k INFO Losses: [2.4089832305908203, 2.157503128051758, 12.637571334838867, 19.437103271484375, 1.086800456047058], step: 96000, lr: 9.862199588508305e-05
2023-03-16 17:13:16,505 44k INFO Saving model and optimizer state at iteration 112 to ./logs\44k\G_96000.pth
2023-03-16 17:13:21,245 44k INFO Saving model and optimizer state at iteration 112 to ./logs\44k\D_96000.pth
2023-03-16 17:15:45,944 44k INFO Train Epoch: 112 [60%]
2023-03-16 17:15:45,945 44k INFO Losses: [2.7133352756500244, 2.525421380996704, 6.336013317108154, 15.466818809509277, 1.1751772165298462], step: 96200, lr: 9.862199588508305e-05
2023-03-16 17:18:07,839 44k INFO Train Epoch: 112 [83%]
2023-03-16 17:18:07,840 44k INFO Losses: [1.9588565826416016, 2.9071733951568604, 11.862907409667969, 17.62187385559082, 0.9174224734306335], step: 96400, lr: 9.862199588508305e-05
2023-03-16 17:19:49,286 44k INFO ====> Epoch: 112, cost 634.53 s
2023-03-16 17:20:41,276 44k INFO Train Epoch: 113 [6%]
2023-03-16 17:20:41,277 44k INFO Losses: [2.7472920417785645, 2.0884828567504883, 6.207939624786377, 14.516794204711914, 0.9002323150634766], step: 96600, lr: 9.86096681355974e-05
2023-03-16 17:23:03,122 44k INFO Train Epoch: 113 [30%]
2023-03-16 17:23:03,122 44k INFO Losses: [2.4704384803771973, 2.1136035919189453, 8.825756072998047, 18.28575897216797, 1.208356261253357], step: 96800, lr: 9.86096681355974e-05
2023-03-16 17:23:07,022 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_96800.pth
2023-03-16 17:23:12,045 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_96800.pth
2023-03-16 17:25:36,710 44k INFO Train Epoch: 113 [53%]
2023-03-16 17:25:36,711 44k INFO Losses: [2.4257984161376953, 2.3985185623168945, 10.029997825622559, 17.619152069091797, 1.1128617525100708], step: 97000, lr: 9.86096681355974e-05
2023-03-16 17:27:58,636 44k INFO Train Epoch: 113 [76%]
2023-03-16 17:27:58,636 44k INFO Losses: [2.612734794616699, 2.050194263458252, 9.569403648376465, 16.862882614135742, 1.011615514755249], step: 97200, lr: 9.86096681355974e-05
2023-03-16 17:30:20,474 44k INFO Train Epoch: 113 [99%]
2023-03-16 17:30:20,475 44k INFO Losses: [2.3254780769348145, 2.453434944152832, 10.555414199829102, 19.729419708251953, 0.9522934556007385], step: 97400, lr: 9.86096681355974e-05
2023-03-16 17:30:24,475 44k INFO ====> Epoch: 113, cost 635.19 s
2023-03-16 17:32:54,848 44k INFO Train Epoch: 114 [23%]
2023-03-16 17:32:54,848 44k INFO Losses: [2.5409018993377686, 2.410325288772583, 8.970532417297363, 16.263521194458008, 1.1107163429260254], step: 97600, lr: 9.859734192708044e-05
2023-03-16 17:32:58,816 44k INFO Saving model and optimizer state at iteration 114 to ./logs\44k\G_97600.pth
2023-03-16 17:33:03,622 44k INFO Saving model and optimizer state at iteration 114 to ./logs\44k\D_97600.pth
2023-03-16 17:35:28,251 44k INFO Train Epoch: 114 [46%]
2023-03-16 17:35:28,252 44k INFO Losses: [2.8311381340026855, 1.8400804996490479, 6.033525466918945, 12.222457885742188, 0.9844613671302795], step: 97800, lr: 9.859734192708044e-05
2023-03-16 17:37:50,079 44k INFO Train Epoch: 114 [69%]
2023-03-16 17:37:50,079 44k INFO Losses: [2.661393880844116, 1.9524363279342651, 9.48725700378418, 18.323009490966797, 0.9458364844322205], step: 98000, lr: 9.859734192708044e-05
2023-03-16 17:40:11,963 44k INFO Train Epoch: 114 [92%]
2023-03-16 17:40:11,963 44k INFO Losses: [2.427412986755371, 2.058542490005493, 9.708783149719238, 15.78085994720459, 1.0527818202972412], step: 98200, lr: 9.859734192708044e-05
2023-03-16 17:40:59,828 44k INFO ====> Epoch: 114, cost 635.35 s
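The learning rate above falls by a constant factor once per epoch. A minimal Python sketch (not part of the log) that checks this, using only values already logged in this section; the decay factor is inferred from the epoch-85 and epoch-86 lines, not read from any config, and small float drift against later epochs is expected.

# Sketch: check the per-epoch exponential lr decay implied by the logged values.
lr_epoch_85 = 9.895542831185631e-05  # from the "Train Epoch: 85" loss lines
lr_epoch_86 = 9.894305888331732e-05  # from the "Train Epoch: 86" loss lines

gamma = lr_epoch_86 / lr_epoch_85    # per-epoch decay factor, ~0.999875

def predicted_lr(epoch: int) -> float:
    # Extrapolate from epoch 85 assuming lr(n) = lr(85) * gamma**(n - 85).
    return lr_epoch_85 * gamma ** (epoch - 85)

# Epoch 113 is logged as lr: 9.86096681355974e-05; the prediction agrees
# to within float rounding (the real schedule multiplies step by step).
print(gamma)              # ~0.999875
print(predicted_lr(113))  # ~9.86097e-05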
2023-03-16 17:42:45,764 44k INFO Train Epoch: 115 [15%]
2023-03-16 17:42:45,764 44k INFO Losses: [2.780426025390625, 1.868059515953064, 6.365862846374512, 12.679797172546387, 1.2570339441299438], step: 98400, lr: 9.858501725933955e-05
2023-03-16 17:42:49,652 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_98400.pth
2023-03-16 17:42:53,937 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_98400.pth
2023-03-16 17:45:18,595 44k INFO Train Epoch: 115 [39%]
2023-03-16 17:45:18,595 44k INFO Losses: [2.5636813640594482, 2.1780853271484375, 9.600458145141602, 19.13355827331543, 0.9318674802780151], step: 98600, lr: 9.858501725933955e-05
2023-03-16 17:47:40,285 44k INFO Train Epoch: 115 [62%]
2023-03-16 17:47:40,286 44k INFO Losses: [2.6135337352752686, 2.385803461074829, 6.380125999450684, 15.289521217346191, 0.9802082777023315], step: 98800, lr: 9.858501725933955e-05
2023-03-16 17:50:02,216 44k INFO Train Epoch: 115 [85%]
2023-03-16 17:50:02,216 44k INFO Losses: [2.7096385955810547, 2.156022787094116, 6.716906547546387, 15.959321975708008, 0.8909186720848083], step: 99000, lr: 9.858501725933955e-05
2023-03-16 17:51:33,745 44k INFO ====> Epoch: 115, cost 633.92 s
2023-03-16 17:52:35,743 44k INFO Train Epoch: 116 [8%]
2023-03-16 17:52:35,744 44k INFO Losses: [2.8491263389587402, 1.9437086582183838, 8.310803413391113, 17.96224021911621, 0.9863431453704834], step: 99200, lr: 9.857269413218213e-05
2023-03-16 17:52:39,648 44k INFO Saving model and optimizer state at iteration 116 to ./logs\44k\G_99200.pth
2023-03-16 17:52:44,324 44k INFO Saving model and optimizer state at iteration 116 to ./logs\44k\D_99200.pth
2023-03-16 17:55:08,915 44k INFO Train Epoch: 116 [31%]
2023-03-16 17:55:08,916 44k INFO Losses: [2.359935998916626, 2.4886927604675293, 8.858589172363281, 14.912266731262207, 0.8938621878623962], step: 99400, lr: 9.857269413218213e-05
2023-03-16 17:57:30,661 44k INFO Train Epoch: 116 [55%]
2023-03-16 17:57:30,661 44k INFO Losses: [2.682145833969116, 2.0522990226745605, 5.891592502593994, 15.652936935424805, 1.0319194793701172], step: 99600, lr: 9.857269413218213e-05
2023-03-16 17:59:53,457 44k INFO Train Epoch: 116 [78%]
2023-03-16 17:59:53,457 44k INFO Losses: [2.6451926231384277, 2.0077600479125977, 9.742025375366211, 17.58249282836914, 1.2402338981628418], step: 99800, lr: 9.857269413218213e-05
2023-03-16 18:02:08,907 44k INFO ====> Epoch: 116, cost 635.16 s
2023-03-16 18:02:26,951 44k INFO Train Epoch: 117 [1%]
2023-03-16 18:02:26,951 44k INFO Losses: [2.360685348510742, 2.24063777923584, 10.895784378051758, 17.108287811279297, 0.9646963477134705], step: 100000, lr: 9.85603725454156e-05
2023-03-16 18:02:31,000 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_100000.pth
2023-03-16 18:02:35,692 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_100000.pth
2023-03-16 18:05:00,507 44k INFO Train Epoch: 117 [24%]
2023-03-16 18:05:00,508 44k INFO Losses: [2.5547401905059814, 2.1682896614074707, 9.401448249816895, 17.330257415771484, 1.1059308052062988], step: 100200, lr: 9.85603725454156e-05
2023-03-16 18:07:22,163 44k INFO Train Epoch: 117 [47%]
2023-03-16 18:07:22,164 44k INFO Losses: [2.5967113971710205, 1.9163446426391602, 7.931536674499512, 18.01601791381836, 1.5041505098342896], step: 100400, lr: 9.85603725454156e-05
2023-03-16 18:09:44,039 44k INFO Train Epoch: 117 [71%]
2023-03-16 18:09:44,040 44k INFO Losses: [2.6121950149536133, 2.169825792312622, 7.331231117248535, 15.554452896118164, 1.0386245250701904], step: 100600, lr: 9.85603725454156e-05
2023-03-16 18:12:05,665 44k INFO Train Epoch: 117 [94%]
2023-03-16 18:12:05,666 44k INFO Losses: [2.583763360977173, 1.93252694606781, 8.662615776062012, 16.715497970581055, 1.0967397689819336], step: 100800, lr: 9.85603725454156e-05
2023-03-16 18:12:09,566 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_100800.pth
2023-03-16 18:12:13,867 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_100800.pth
2023-03-16 18:12:54,800 44k INFO ====> Epoch: 117, cost 645.89 s
2023-03-16 18:14:50,599 44k INFO Train Epoch: 118 [17%]
2023-03-16 18:14:50,599 44k INFO Losses: [2.7455804347991943, 2.332205295562744, 5.422713279724121, 10.746132850646973, 1.1056748628616333], step: 101000, lr: 9.854805249884741e-05
2023-03-16 18:17:12,276 44k INFO Train Epoch: 118 [40%]
2023-03-16 18:17:12,277 44k INFO Losses: [2.5289249420166016, 2.0650782585144043, 9.49332046508789, 17.38090705871582, 0.8996552228927612], step: 101200, lr: 9.854805249884741e-05
2023-03-16 18:19:34,015 44k INFO Train Epoch: 118 [63%]
2023-03-16 18:19:34,015 44k INFO Losses: [2.651522397994995, 2.169668674468994, 9.026548385620117, 16.60739517211914, 1.3681915998458862], step: 101400, lr: 9.854805249884741e-05
2023-03-16 18:21:55,897 44k INFO Train Epoch: 118 [87%]
2023-03-16 18:21:55,897 44k INFO Losses: [2.5981578826904297, 2.3635058403015137, 11.070054054260254, 19.25267791748047, 1.1724743843078613], step: 101600, lr: 9.854805249884741e-05
2023-03-16 18:21:59,912 44k INFO Saving model and optimizer state at iteration 118 to ./logs\44k\G_101600.pth
2023-03-16 18:22:04,730 44k INFO Saving model and optimizer state at iteration 118 to ./logs\44k\D_101600.pth
2023-03-16 18:23:29,469 44k INFO ====> Epoch: 118, cost 634.67 s
2023-03-16 18:24:41,337 44k INFO Train Epoch: 119 [10%]
2023-03-16 18:24:41,337 44k INFO Losses: [2.7960572242736816, 1.8781988620758057, 4.997320652008057, 13.849658966064453, 0.9870519042015076], step: 101800, lr: 9.853573399228505e-05
2023-03-16 18:27:03,063 44k INFO Train Epoch: 119 [33%]
2023-03-16 18:27:03,063 44k INFO Losses: [2.5102124214172363, 2.480954170227051, 10.894518852233887, 18.692394256591797, 0.8500508069992065], step: 102000, lr: 9.853573399228505e-05
2023-03-16 18:29:25,565 44k INFO Train Epoch: 119 [56%]
2023-03-16 18:29:25,565 44k INFO Losses: [2.6556692123413086, 2.1003050804138184, 8.317096710205078, 16.479509353637695, 0.7713571786880493], step: 102200, lr: 9.853573399228505e-05
2023-03-16 18:31:47,416 44k INFO Train Epoch: 119 [79%]
2023-03-16 18:31:47,416 44k INFO Losses: [2.7703285217285156, 1.873124361038208, 7.991750240325928, 18.221967697143555, 1.2415435314178467], step: 102400, lr: 9.853573399228505e-05
2023-03-16 18:31:51,339 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_102400.pth
2023-03-16 18:31:55,647 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_102400.pth
2023-03-16 18:34:04,171 44k INFO ====> Epoch: 119, cost 634.70 s
2023-03-16 18:34:32,102 44k INFO Train Epoch: 120 [3%]
2023-03-16 18:34:32,102 44k INFO Losses: [2.508291006088257, 2.2929389476776123, 9.129660606384277, 18.93326187133789, 1.2210114002227783], step: 102600, lr: 9.8523417025536e-05
2023-03-16 18:36:54,227 44k INFO Train Epoch: 120 [26%]
2023-03-16 18:36:54,227 44k INFO Losses: [2.6489341259002686, 2.3268842697143555, 9.570460319519043, 15.631248474121094, 1.1944055557250977], step: 102800, lr: 9.8523417025536e-05
2023-03-16 18:39:15,751 44k INFO Train Epoch: 120 [49%]
2023-03-16 18:39:15,751 44k INFO Losses: [2.645707130432129, 2.1584625244140625, 4.444352626800537, 14.013240814208984, 1.0601931810379028], step: 103000, lr: 9.8523417025536e-05
2023-03-16 18:41:37,716 44k INFO Train Epoch: 120 [72%]
2023-03-16 18:41:37,716 44k INFO Losses: [2.821847438812256, 2.1271841526031494, 10.024724006652832, 17.236413955688477, 1.090048909187317], step: 103200, lr: 9.8523417025536e-05
2023-03-16 18:41:41,793 44k INFO Saving model and optimizer state at iteration 120 to ./logs\44k\G_103200.pth
2023-03-16 18:41:46,413 44k INFO Saving model and optimizer state at iteration 120 to ./logs\44k\D_103200.pth
2023-03-16 18:44:11,174 44k INFO Train Epoch: 120 [95%]
2023-03-16 18:44:11,175 44k INFO Losses: [2.4742591381073, 2.039515256881714, 9.060317039489746, 12.495813369750977, 1.1846063137054443], step: 103400, lr: 9.8523417025536e-05
2023-03-16 18:44:39,178 44k INFO ====> Epoch: 120, cost 635.01 s
2023-03-16 18:46:44,744 44k INFO Train Epoch: 121 [19%]
2023-03-16 18:46:44,744 44k INFO Losses: [2.5182604789733887, 2.283984899520874, 5.111583232879639, 10.822510719299316, 1.092755913734436], step: 103600, lr: 9.851110159840781e-05
2023-03-16 18:49:06,410 44k INFO Train Epoch: 121 [42%]
2023-03-16 18:49:06,411 44k INFO Losses: [2.4642066955566406, 2.4138736724853516, 9.587533950805664, 15.375748634338379, 1.1438040733337402], step: 103800, lr: 9.851110159840781e-05
2023-03-16 18:51:27,932 44k INFO Train Epoch: 121 [65%]
2023-03-16 18:51:27,933 44k INFO Losses: [2.3811967372894287, 2.16040301322937, 11.418601036071777, 17.944236755371094, 1.1710041761398315], step: 104000, lr: 9.851110159840781e-05
2023-03-16 18:51:31,903 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_104000.pth
2023-03-16 18:51:36,336 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_104000.pth
2023-03-16 18:54:00,932 44k INFO Train Epoch: 121 [88%]
2023-03-16 18:54:00,933 44k INFO Losses: [2.491713762283325, 2.084017276763916, 9.538949012756348, 15.590120315551758, 1.0256253480911255], step: 104200, lr: 9.851110159840781e-05
2023-03-16 18:55:12,701 44k INFO ====> Epoch: 121, cost 633.52 s
2023-03-16 18:56:34,427 44k INFO Train Epoch: 122 [11%]
2023-03-16 18:56:34,428 44k INFO Losses: [2.364685297012329, 2.3603594303131104, 10.06946086883545, 16.652935028076172, 1.239411473274231], step: 104400, lr: 9.8498787710708e-05
2023-03-16 18:58:56,165 44k INFO Train Epoch: 122 [35%]
2023-03-16 18:58:56,165 44k INFO Losses: [2.673912286758423, 2.045118808746338, 9.187744140625, 17.807641983032227, 1.0514941215515137], step: 104600, lr: 9.8498787710708e-05
2023-03-16 19:01:17,668 44k INFO Train Epoch: 122 [58%]
2023-03-16 19:01:17,669 44k INFO Losses: [2.850111722946167, 1.819439172744751, 6.855723857879639, 15.999702453613281, 0.9908527135848999], step: 104800, lr: 9.8498787710708e-05
2023-03-16 19:01:21,726 44k INFO Saving model and optimizer state at iteration 122 to ./logs\44k\G_104800.pth
2023-03-16 19:01:25,700 44k INFO Saving model and optimizer state at iteration 122 to ./logs\44k\D_104800.pth
2023-03-16 19:03:50,518 44k INFO Train Epoch: 122 [81%]
2023-03-16 19:03:50,519 44k INFO Losses: [2.570866584777832, 2.212061882019043, 8.445244789123535, 18.540626525878906, 1.0686192512512207], step: 105000, lr: 9.8498787710708e-05
2023-03-16 19:05:46,060 44k INFO ====> Epoch: 122, cost 633.36 s
2023-03-16 19:06:23,815 44k INFO Train Epoch: 123 [4%]
2023-03-16 19:06:23,815 44k INFO Losses: [2.658984661102295, 2.03776478767395, 8.905786514282227, 18.05913734436035, 1.2215849161148071], step: 105200, lr: 9.848647536224416e-05
2023-03-16 19:08:45,652 44k INFO Train Epoch: 123 [27%]
2023-03-16 19:08:45,652 44k INFO Losses: [2.766885280609131, 1.8412401676177979, 5.807347774505615, 12.534477233886719, 0.8368902206420898], step: 105400, lr: 9.848647536224416e-05
2023-03-16 19:11:07,336 44k INFO Train Epoch: 123 [51%]
2023-03-16 19:11:07,337 44k INFO Losses: [2.481703281402588, 2.613020658493042, 9.735782623291016, 17.96216583251953, 0.9566605091094971], step: 105600, lr: 9.848647536224416e-05
2023-03-16 19:11:11,234 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_105600.pth
2023-03-16 19:11:15,976 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_105600.pth
2023-03-16 19:13:40,668 44k INFO Train Epoch: 123 [74%]
2023-03-16 19:13:40,668 44k INFO Losses: [2.65828013420105, 2.29553484916687, 7.8118133544921875, 15.861117362976074, 0.8149623274803162], step: 105800, lr: 9.848647536224416e-05
2023-03-16 19:16:02,358 44k INFO Train Epoch: 123 [97%]
2023-03-16 19:16:02,359 44k INFO Losses: [2.543610095977783, 2.587454319000244, 12.06008243560791, 21.141298294067383, 1.0037273168563843], step: 106000, lr: 9.848647536224416e-05
2023-03-16 19:16:20,409 44k INFO ====> Epoch: 123, cost 634.35 s
2023-03-16 19:18:35,899 44k INFO Train Epoch: 124 [20%]
2023-03-16 19:18:35,900 44k INFO Losses: [2.7677390575408936, 2.0734002590179443, 6.6729512214660645, 14.842774391174316, 1.0866068601608276], step: 106200, lr: 9.847416455282387e-05
2023-03-16 19:20:59,136 44k INFO Train Epoch: 124 [43%]
2023-03-16 19:20:59,137 44k INFO Losses: [2.623889446258545, 2.2267982959747314, 7.421322822570801, 16.811634063720703, 1.1816513538360596], step: 106400, lr: 9.847416455282387e-05
2023-03-16 19:21:03,127 44k INFO Saving model and optimizer state at iteration 124 to ./logs\44k\G_106400.pth
2023-03-16 19:21:07,216 44k INFO Saving model and optimizer state at iteration 124 to ./logs\44k\D_106400.pth
2023-03-16 19:23:32,865 44k INFO Train Epoch: 124 [67%]
2023-03-16 19:23:32,865 44k INFO Losses: [2.609989643096924, 2.335691452026367, 7.639537334442139, 15.264836311340332, 0.9177182912826538], step: 106600, lr: 9.847416455282387e-05
2023-03-16 19:25:54,657 44k INFO Train Epoch: 124 [90%]
2023-03-16 19:25:54,658 44k INFO Losses: [2.378737449645996, 2.3845090866088867, 8.962225914001465, 16.01801872253418, 1.125346302986145], step: 106800, lr: 9.847416455282387e-05
2023-03-16 19:26:56,522 44k INFO ====> Epoch: 124, cost 636.11 s
2023-03-16 19:28:28,182 44k INFO Train Epoch: 125 [13%]
2023-03-16 19:28:28,183 44k INFO Losses: [2.7037665843963623, 1.9866989850997925, 9.491182327270508, 18.683019638061523, 0.8104425072669983], step: 107000, lr: 9.846185528225477e-05
2023-03-16 19:30:49,963 44k INFO Train Epoch: 125 [36%]
2023-03-16 19:30:49,964 44k INFO Losses: [2.843228816986084, 1.9253036975860596, 5.80888557434082, 11.980276107788086, 0.50477135181427], step: 107200, lr: 9.846185528225477e-05
2023-03-16 19:30:53,997 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\G_107200.pth
2023-03-16 19:30:58,549 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\D_107200.pth
2023-03-16 19:33:23,297 44k INFO Train Epoch: 125 [59%]
2023-03-16 19:33:23,297 44k INFO Losses: [2.6721930503845215, 1.9610826969146729, 6.891824722290039, 12.61270809173584, 1.3686786890029907], step: 107400, lr: 9.846185528225477e-05
2023-03-16 19:35:45,174 44k INFO Train Epoch: 125 [83%]
2023-03-16 19:35:45,174 44k INFO Losses: [2.832155227661133, 2.0197594165802, 8.903701782226562, 15.358120918273926, 0.6657664179801941], step: 107600, lr: 9.846185528225477e-05
2023-03-16 19:37:30,825 44k INFO ====> Epoch: 125, cost 634.30 s
2023-03-16 19:38:18,660 44k INFO Train Epoch: 126 [6%]
2023-03-16 19:38:18,660 44k INFO Losses: [2.6487207412719727, 2.1701016426086426, 8.01432991027832, 15.6986083984375, 1.0500904321670532], step: 107800, lr: 9.84495475503445e-05
2023-03-16 19:40:40,477 44k INFO Train Epoch: 126 [29%]
2023-03-16 19:40:40,477 44k INFO Losses: [2.182927131652832, 2.3132851123809814, 9.98603343963623, 18.881925582885742, 1.1369465589523315], step: 108000, lr: 9.84495475503445e-05
2023-03-16 19:40:44,384 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\G_108000.pth
2023-03-16 19:40:48,596 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\D_108000.pth
2023-03-16 19:43:13,045 44k INFO Train Epoch: 126 [52%]
2023-03-16 19:43:13,046 44k INFO Losses: [2.431124448776245, 2.566067934036255, 10.191755294799805, 20.783836364746094, 1.0094314813613892], step: 108200, lr: 9.84495475503445e-05
2023-03-16 19:45:35,014 44k INFO Train Epoch: 126 [75%]
2023-03-16 19:45:35,014 44k INFO Losses: [2.567314863204956, 2.226743698120117, 8.772584915161133, 19.738725662231445, 1.3281372785568237], step: 108400, lr: 9.84495475503445e-05
2023-03-16 19:47:56,769 44k INFO Train Epoch: 126 [99%]
2023-03-16 19:47:56,770 44k INFO Losses: [2.681608200073242, 1.8631749153137207, 7.680782794952393, 15.749849319458008, 1.0144494771957397], step: 108600, lr: 9.84495475503445e-05
2023-03-16 19:48:04,895 44k INFO ====> Epoch: 126, cost 634.07 s
2023-03-16 19:50:30,191 44k INFO Train Epoch: 127 [22%]
2023-03-16 19:50:30,191 44k INFO Losses: [2.5527467727661133, 2.0820696353912354, 7.685116767883301, 12.298871994018555, 1.279731273651123], step: 108800, lr: 9.84372413569007e-05
2023-03-16 19:50:34,064 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\G_108800.pth
2023-03-16 19:50:38,837 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\D_108800.pth
2023-03-16 19:53:03,720 44k INFO Train Epoch: 127 [45%]
2023-03-16 19:53:03,720 44k INFO Losses: [2.409576892852783, 2.19966721534729, 10.886187553405762, 20.731252670288086, 1.0587965250015259], step: 109000, lr: 9.84372413569007e-05
2023-03-16 19:55:24,966 44k INFO Train Epoch: 127 [68%]
2023-03-16 19:55:24,966 44k INFO Losses: [2.615319013595581, 2.1089367866516113, 6.700294494628906, 17.904451370239258, 1.2033452987670898], step: 109200, lr: 9.84372413569007e-05
2023-03-16 19:57:46,241 44k INFO Train Epoch: 127 [91%]
2023-03-16 19:57:46,242 44k INFO Losses: [2.4396095275878906, 2.223801612854004, 10.729804039001465, 17.240888595581055, 1.2980788946151733], step: 109400, lr: 9.84372413569007e-05
2023-03-16 19:58:38,154 44k INFO ====> Epoch: 127, cost 633.26 s
2023-03-16 20:00:19,471 44k INFO Train Epoch: 128 [15%]
2023-03-16 20:00:19,471 44k INFO Losses: [2.493750810623169, 2.2511415481567383, 8.670130729675293, 14.583816528320312, 1.1578848361968994], step: 109600, lr: 9.842493670173108e-05
2023-03-16 20:00:23,397 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\G_109600.pth
2023-03-16 20:00:28,500 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\D_109600.pth
2023-03-16 20:02:52,687 44k INFO Train Epoch: 128 [38%]
2023-03-16 20:02:52,688 44k INFO Losses: [2.5978240966796875, 2.1907474994659424, 8.015667915344238, 17.0461483001709, 1.1653794050216675], step: 109800, lr: 9.842493670173108e-05
2023-03-16 20:05:13,917 44k INFO Train Epoch: 128 [61%]
2023-03-16 20:05:13,917 44k INFO Losses: [2.4101357460021973, 2.2859673500061035, 8.304093360900879, 16.690439224243164, 0.8480856418609619], step: 110000, lr: 9.842493670173108e-05
2023-03-16 20:07:35,240 44k INFO Train Epoch: 128 [84%]
2023-03-16 20:07:35,240 44k INFO Losses: [2.3773083686828613, 2.539529800415039, 12.609331130981445, 19.644685745239258, 0.6859080791473389], step: 110200, lr: 9.842493670173108e-05
2023-03-16 20:09:10,634 44k INFO ====> Epoch: 128, cost 632.48 s
2023-03-16 20:10:08,210 44k INFO Train Epoch: 129 [7%]
2023-03-16 20:10:08,211 44k INFO Losses: [2.5666143894195557, 2.002363920211792, 7.7113518714904785, 15.808853149414062, 1.0160375833511353], step: 110400, lr: 9.841263358464336e-05
2023-03-16 20:10:12,271 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\G_110400.pth
2023-03-16 20:10:16,969 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\D_110400.pth
2023-03-16 20:12:41,299 44k INFO Train Epoch: 129 [31%]
2023-03-16 20:12:41,300 44k INFO Losses: [2.3025834560394287, 2.3525550365448, 12.295973777770996, 16.461994171142578, 1.3132600784301758], step: 110600, lr: 9.841263358464336e-05
2023-03-16 20:15:02,352 44k INFO Train Epoch: 129 [54%]
2023-03-16 20:15:02,352 44k INFO Losses: [2.663271903991699, 2.1506154537200928, 7.188974380493164, 15.313072204589844, 0.8278406262397766], step: 110800, lr: 9.841263358464336e-05
2023-03-16 20:17:23,823 44k INFO Train Epoch: 129 [77%]
2023-03-16 20:17:23,823 44k INFO Losses: [2.555851459503174, 1.9232029914855957, 10.366886138916016, 15.347078323364258, 1.1601710319519043], step: 111000, lr: 9.841263358464336e-05
2023-03-16 20:19:43,729 44k INFO ====> Epoch: 129, cost 633.09 s
2023-03-16 20:19:57,631 44k INFO Train Epoch: 130 [0%]
2023-03-16 20:19:57,632 44k INFO Losses: [2.5679917335510254, 2.2478535175323486, 9.649542808532715, 20.21727180480957, 1.1623823642730713], step: 111200, lr: 9.840033200544528e-05
2023-03-16 20:20:01,657 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_111200.pth
2023-03-16 20:20:05,995 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_111200.pth
2023-03-16 20:22:30,770 44k INFO Train Epoch: 130 [23%]
2023-03-16 20:22:30,771 44k INFO Losses: [2.720573902130127, 1.9112548828125, 9.5593843460083, 16.673011779785156, 1.0138107538223267], step: 111400, lr: 9.840033200544528e-05
2023-03-16 20:24:52,457 44k INFO Train Epoch: 130 [47%]
2023-03-16 20:24:52,458 44k INFO Losses: [2.4857239723205566, 2.1457386016845703, 4.618976593017578, 10.184432983398438, 1.015313982963562], step: 111600, lr: 9.840033200544528e-05
2023-03-16 20:27:14,283 44k INFO Train Epoch: 130 [70%]
2023-03-16 20:27:14,284 44k INFO Losses: [2.4709649085998535, 2.3266143798828125, 10.410274505615234, 19.23822021484375, 0.6309609413146973], step: 111800, lr: 9.840033200544528e-05
2023-03-16 20:29:36,010 44k INFO Train Epoch: 130 [93%]
2023-03-16 20:29:36,011 44k INFO Losses: [2.377204418182373, 2.4483327865600586, 10.38333797454834, 17.20754623413086, 1.1394609212875366], step: 112000, lr: 9.840033200544528e-05
2023-03-16 20:29:39,883 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_112000.pth
2023-03-16 20:29:44,560 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_112000.pth
2023-03-16 20:30:29,617 44k INFO ====> Epoch: 130, cost 645.89 s
2023-03-16 20:32:20,911 44k INFO Train Epoch: 131 [16%]
2023-03-16 20:32:20,911 44k INFO Losses: [2.6248302459716797, 2.1353821754455566, 11.614191055297852, 19.0932559967041, 1.1477761268615723], step: 112200, lr: 9.838803196394459e-05
2023-03-16 20:34:42,121 44k INFO Train Epoch: 131 [39%]
2023-03-16 20:34:42,122 44k INFO Losses: [2.37603497505188, 2.4356017112731934, 11.378386497497559, 19.17807960510254, 0.9521220922470093], step: 112400, lr: 9.838803196394459e-05
2023-03-16 20:37:03,373 44k INFO Train Epoch: 131 [63%]
2023-03-16 20:37:03,373 44k INFO Losses: [2.69816255569458, 2.0870635509490967, 7.337793350219727, 13.267099380493164, 1.1462477445602417], step: 112600, lr: 9.838803196394459e-05
2023-03-16 20:39:24,735 44k INFO Train Epoch: 131 [86%]
2023-03-16 20:39:24,736 44k INFO Losses: [2.5169425010681152, 2.2868587970733643, 9.159473419189453, 18.246192932128906, 1.5389779806137085], step: 112800, lr: 9.838803196394459e-05
2023-03-16 20:39:28,789 44k INFO Saving model and optimizer state at iteration 131 to ./logs\44k\G_112800.pth
2023-03-16 20:39:33,627 44k INFO Saving model and optimizer state at iteration 131 to ./logs\44k\D_112800.pth
2023-03-16 20:41:02,191 44k INFO ====> Epoch: 131, cost 632.57 s
2023-03-16 20:42:09,578 44k INFO Train Epoch: 132 [9%]
2023-03-16 20:42:09,579 44k INFO Losses: [2.600416898727417, 2.380168914794922, 9.768733978271484, 15.913911819458008, 1.3374770879745483], step: 113000, lr: 9.837573345994909e-05
2023-03-16 20:44:50,532 44k INFO Train Epoch: 132 [32%]
2023-03-16 20:44:50,533 44k INFO Losses: [2.61757755279541, 2.2627205848693848, 8.742877006530762, 17.649904251098633, 1.293078064918518], step: 113200, lr: 9.837573345994909e-05
2023-03-16 20:47:26,943 44k INFO Train Epoch: 132 [55%]
2023-03-16 20:47:26,943 44k INFO Losses: [2.422358512878418, 2.317683458328247, 11.037595748901367, 18.877872467041016, 1.289183259010315], step: 113400, lr: 9.837573345994909e-05
2023-03-16 20:49:48,486 44k INFO Train Epoch: 132 [79%]
2023-03-16 20:49:48,486 44k INFO Losses: [2.6026101112365723, 1.9980577230453491, 8.421849250793457, 17.658477783203125, 1.1867685317993164], step: 113600, lr: 9.837573345994909e-05
2023-03-16 20:49:52,453 44k INFO Saving model and optimizer state at iteration 132 to ./logs\44k\G_113600.pth
2023-03-16 20:49:56,955 44k INFO Saving model and optimizer state at iteration 132 to ./logs\44k\D_113600.pth
2023-03-16 20:52:09,502 44k INFO ====> Epoch: 132, cost 667.31 s
2023-03-16 20:52:33,124 44k INFO Train Epoch: 133 [2%]
2023-03-16 20:52:33,125 44k INFO Losses: [2.4451916217803955, 2.0726420879364014, 11.253622055053711, 21.747684478759766, 1.3681118488311768], step: 113800, lr: 9.836343649326659e-05
2023-03-16 20:54:55,028 44k INFO Train Epoch: 133 [25%]
2023-03-16 20:54:55,028 44k INFO Losses: [2.773305654525757, 1.7611480951309204, 7.551708221435547, 12.684425354003906, 1.371412992477417], step: 114000, lr: 9.836343649326659e-05
2023-03-16 20:57:16,564 44k INFO Train Epoch: 133 [48%]
2023-03-16 20:57:16,564 44k INFO Losses: [2.8423643112182617, 1.7600538730621338, 4.159178256988525, 8.71160888671875, 0.9202947616577148], step: 114200, lr: 9.836343649326659e-05
2023-03-16 20:59:38,332 44k INFO Train Epoch: 133 [71%]
2023-03-16 20:59:38,332 44k INFO Losses: [2.5056374073028564, 2.261467695236206, 9.996739387512207, 19.861116409301758, 1.1900807619094849], step: 114400, lr: 9.836343649326659e-05
2023-03-16 20:59:42,229 44k INFO Saving model and optimizer state at iteration 133 to ./logs\44k\G_114400.pth
2023-03-16 20:59:46,611 44k INFO Saving model and optimizer state at iteration 133 to ./logs\44k\D_114400.pth
2023-03-16 21:02:10,742 44k INFO Train Epoch: 133 [95%]
2023-03-16 21:02:10,742 44k INFO Losses: [2.5132265090942383, 2.012876033782959, 11.208643913269043, 20.729951858520508, 0.9037682414054871], step: 114600, lr: 9.836343649326659e-05
2023-03-16 21:02:42,926 44k INFO ====> Epoch: 133, cost 633.42 s
2023-03-16 21:04:43,889 44k INFO Train Epoch: 134 [18%]
2023-03-16 21:04:43,889 44k INFO Losses: [2.449303150177002, 2.162792921066284, 11.729571342468262, 19.72464942932129, 1.1677029132843018], step: 114800, lr: 9.835114106370493e-05
2023-03-16 21:07:05,140 44k INFO Train Epoch: 134 [41%]
2023-03-16 21:07:05,141 44k INFO Losses: [2.3951432704925537, 2.078047275543213, 10.17193603515625, 15.875231742858887, 1.460920810699463], step: 115000, lr: 9.835114106370493e-05
2023-03-16 21:09:26,433 44k INFO Train Epoch: 134 [64%]
2023-03-16 21:09:26,433 44k INFO Losses: [2.626671314239502, 2.0337295532226562, 9.061286926269531, 15.578713417053223, 1.2408673763275146], step: 115200, lr: 9.835114106370493e-05
2023-03-16 21:09:30,492 44k INFO Saving model and optimizer state at iteration 134 to ./logs\44k\G_115200.pth
2023-03-16 21:09:34,993 44k INFO Saving model and optimizer state at iteration 134 to ./logs\44k\D_115200.pth
2023-03-16 21:11:59,283 44k INFO Train Epoch: 134 [87%]
2023-03-16 21:11:59,284 44k INFO Losses: [2.6535017490386963, 2.0613343715667725, 6.730578422546387, 16.930768966674805, 0.8290624022483826], step: 115400, lr: 9.835114106370493e-05
2023-03-16 21:14:08,286 44k INFO ====> Epoch: 134, cost 685.36 s
2023-03-16 21:15:41,298 44k INFO Train Epoch: 135 [11%]
2023-03-16 21:15:41,298 44k INFO Losses: [2.7069106101989746, 2.0505430698394775, 5.064854145050049, 15.812265396118164, 1.322622299194336], step: 115600, lr: 9.833884717107196e-05
2023-03-16 21:18:25,921 44k INFO Train Epoch: 135 [34%]
2023-03-16 21:18:25,922 44k INFO Losses: [2.5693359375, 1.9666976928710938, 12.35958480834961, 17.730270385742188, 1.2497332096099854], step: 115800, lr: 9.833884717107196e-05
2023-03-16 21:21:34,631 44k INFO Train Epoch: 135 [57%]
2023-03-16 21:21:34,631 44k INFO Losses: [2.670649290084839, 1.8388310670852661, 9.371175765991211, 15.206411361694336, 1.1855354309082031], step: 116000, lr: 9.833884717107196e-05
2023-03-16 21:21:39,475 44k INFO Saving model and optimizer state at iteration 135 to ./logs\44k\G_116000.pth
2023-03-16 21:21:44,187 44k INFO Saving model and optimizer state at iteration 135 to ./logs\44k\D_116000.pth
2023-03-16 21:24:21,045 44k INFO Train Epoch: 135 [80%]
2023-03-16 21:24:21,045 44k INFO Losses: [2.4109795093536377, 2.2689414024353027, 8.995115280151367, 17.2922306060791, 1.2698101997375488], step: 116200, lr: 9.833884717107196e-05
2023-03-16 21:26:31,693 44k INFO ====> Epoch: 135, cost 743.41 s
2023-03-16 21:27:10,741 44k INFO Train Epoch: 136 [3%]
2023-03-16 21:27:10,742 44k INFO Losses: [2.3979849815368652, 2.3024377822875977, 11.538329124450684, 20.29210662841797, 0.8903095126152039], step: 116400, lr: 9.832655481517557e-05
2023-03-16 21:29:58,167 44k INFO Train Epoch: 136 [27%]
2023-03-16 21:29:58,167 44k INFO Losses: [2.718248128890991, 2.5003182888031006, 8.116427421569824, 15.684185028076172, 1.157799482345581], step: 116600, lr: 9.832655481517557e-05
2023-03-16 21:32:48,132 44k INFO Train Epoch: 136 [50%]
2023-03-16 21:32:48,133 44k INFO Losses: [2.3706018924713135, 2.414168357849121, 12.063079833984375, 17.07933235168457, 1.0406662225723267], step: 116800, lr: 9.832655481517557e-05
2023-03-16 21:32:52,189 44k INFO Saving model and optimizer state at iteration 136 to ./logs\44k\G_116800.pth
2023-03-16 21:32:56,396 44k INFO Saving model and optimizer state at iteration 136 to ./logs\44k\D_116800.pth
2023-03-16 21:36:05,226 44k INFO Train Epoch: 136 [73%]
2023-03-16 21:36:05,226 44k INFO Losses: [2.4945931434631348, 2.52345871925354, 9.368271827697754, 19.82061767578125, 0.8507871031761169], step: 117000, lr: 9.832655481517557e-05
2023-03-16 21:38:38,624 44k INFO Train Epoch: 136 [96%]
2023-03-16 21:38:38,624 44k INFO Losses: [2.4999053478240967, 2.1363024711608887, 12.018359184265137, 20.207496643066406, 1.066769003868103], step: 117200, lr: 9.832655481517557e-05
2023-03-16 21:39:01,770 44k INFO ====> Epoch: 136, cost 750.08 s
2023-03-16 21:41:17,401 44k INFO Train Epoch: 137 [19%]
2023-03-16 21:41:17,402 44k INFO Losses: [2.7969393730163574, 2.2330756187438965, 8.101114273071289, 16.88085174560547, 1.3562439680099487], step: 117400, lr: 9.831426399582366e-05
2023-03-16 21:43:43,952 44k INFO Train Epoch: 137 [43%]
2023-03-16 21:43:43,952 44k INFO Losses: [2.4791440963745117, 2.1699817180633545, 6.417663097381592, 18.22201919555664, 1.0202897787094116], step: 117600, lr: 9.831426399582366e-05
2023-03-16 21:43:47,900 44k INFO Saving model and optimizer state at iteration 137 to ./logs\44k\G_117600.pth
2023-03-16 21:43:52,438 44k INFO Saving model and optimizer state at iteration 137 to ./logs\44k\D_117600.pth
2023-03-16 21:46:22,574 44k INFO Train Epoch: 137 [66%]
2023-03-16 21:46:22,574 44k INFO Losses: [2.4053943157196045, 2.1500415802001953, 7.654122352600098, 11.787588119506836, 0.8890261650085449], step: 117800, lr: 9.831426399582366e-05
2023-03-16 21:48:49,365 44k INFO Train Epoch: 137 [89%]
2023-03-16 21:48:49,366 44k INFO Losses: [2.7532899379730225, 2.1072521209716797, 10.815797805786133, 16.139217376708984, 1.1834875345230103], step: 118000, lr: 9.831426399582366e-05
2023-03-16 21:49:57,764 44k INFO ====> Epoch: 137, cost 655.99 s
2023-03-16 21:51:27,742 44k INFO Train Epoch: 138 [12%]
2023-03-16 21:51:27,742 44k INFO Losses: [2.338207960128784, 2.92521333694458, 12.210193634033203, 23.45649528503418, 1.0388580560684204], step: 118200, lr: 9.830197471282419e-05
2023-03-16 21:53:54,622 44k INFO Train Epoch: 138 [35%]
2023-03-16 21:53:54,622 44k INFO Losses: [2.5494065284729004, 1.93572199344635, 8.401667594909668, 18.295738220214844, 1.054984211921692], step: 118400, lr: 9.830197471282419e-05
2023-03-16 21:53:58,495 44k INFO Saving model and optimizer state at iteration 138 to ./logs\44k\G_118400.pth
2023-03-16 21:54:03,086 44k INFO Saving model and optimizer state at iteration 138 to ./logs\44k\D_118400.pth
2023-03-16 21:56:32,853 44k INFO Train Epoch: 138 [59%]
2023-03-16 21:56:32,854 44k INFO Losses: [2.438476324081421, 2.084506034851074, 8.874791145324707, 15.839492797851562, 0.9948272705078125], step: 118600, lr: 9.830197471282419e-05
2023-03-16 21:58:59,710 44k INFO Train Epoch: 138 [82%]
2023-03-16 21:58:59,710 44k INFO Losses: [2.8350605964660645, 1.8919711112976074, 5.331559658050537, 16.32180404663086, 1.1552438735961914], step: 118800, lr: 9.830197471282419e-05
2023-03-16 22:00:53,606 44k INFO ====> Epoch: 138, cost 655.84 s
2023-03-16 22:01:38,135 44k INFO Train Epoch: 139 [5%]
2023-03-16 22:01:38,136 44k INFO Losses: [2.37713623046875, 2.4471938610076904, 6.440496444702148, 12.292020797729492, 1.0370068550109863], step: 119000, lr: 9.828968696598508e-05
2023-03-16 22:04:05,048 44k INFO Train Epoch: 139 [28%]
2023-03-16 22:04:05,049 44k INFO Losses: [2.6040005683898926, 2.1829333305358887, 8.543479919433594, 15.96783447265625, 1.160687804222107], step: 119200, lr: 9.828968696598508e-05
2023-03-16 22:04:08,950 44k INFO Saving model and optimizer state at iteration 139 to ./logs\44k\G_119200.pth
2023-03-16 22:04:13,703 44k INFO Saving model and optimizer state at iteration 139 to ./logs\44k\D_119200.pth
2023-03-16 22:06:54,441 44k INFO Train Epoch: 139 [52%]
2023-03-16 22:06:54,442 44k INFO Losses: [2.5802597999572754, 2.0953307151794434, 5.222555160522461, 16.309436798095703, 1.3970965147018433], step: 119400, lr: 9.828968696598508e-05
2023-03-16 22:09:23,862 44k INFO Train Epoch: 139 [75%]
2023-03-16 22:09:23,863 44k INFO Losses: [2.349489212036133, 2.4815945625305176, 11.19788932800293, 17.439172744750977, 1.3460718393325806], step: 119600, lr: 9.828968696598508e-05
2023-03-16 22:12:01,033 44k INFO Train Epoch: 139 [98%]
2023-03-16 22:12:01,033 44k INFO Losses: [2.6351993083953857, 2.3718814849853516, 7.736194133758545, 16.910945892333984, 1.0389046669006348], step: 119800, lr: 9.828968696598508e-05
2023-03-16 22:12:14,486 44k INFO ====> Epoch: 139, cost 680.88 s
2023-03-16 22:15:31,362 44k INFO Train Epoch: 140 [21%]
2023-03-16 22:15:31,363 44k INFO Losses: [2.60390043258667, 2.2557363510131836, 9.618741035461426, 16.128873825073242, 1.222044587135315], step: 120000, lr: 9.827740075511432e-05
2023-03-16 22:15:35,451 44k INFO Saving model and optimizer state at iteration 140 to ./logs\44k\G_120000.pth
2023-03-16 22:15:40,144 44k INFO Saving model and optimizer state at iteration 140 to ./logs\44k\D_120000.pth
2023-03-16 22:18:29,464 44k INFO Train Epoch: 140 [44%]
2023-03-16 22:18:29,464 44k INFO Losses: [2.452655553817749, 2.4233131408691406, 11.284075736999512, 21.19755744934082, 1.2812215089797974], step: 120200, lr: 9.827740075511432e-05
2023-03-16 22:21:50,085 44k INFO Train Epoch: 140 [68%]
2023-03-16 22:21:50,085 44k INFO Losses: [2.8661935329437256, 2.1015310287475586, 6.510406017303467, 14.287141799926758, 1.0608881711959839], step: 120400, lr: 9.827740075511432e-05
2023-03-16 22:24:57,264 44k INFO Train Epoch: 140 [91%]
2023-03-16 22:24:57,264 44k INFO Losses: [2.4830498695373535, 2.1070289611816406, 13.073107719421387, 19.676254272460938, 0.9451220035552979], step: 120600, lr: 9.827740075511432e-05
2023-03-16 22:26:03,912 44k INFO ====> Epoch: 140, cost 829.43 s
2023-03-16 22:27:59,097 44k INFO Train Epoch: 141 [14%]
2023-03-16 22:27:59,097 44k INFO Losses: [2.6181640625, 2.039006233215332, 7.657082557678223, 15.365702629089355, 1.0301382541656494], step: 120800, lr: 9.826511608001993e-05
2023-03-16 22:28:03,066 44k INFO Saving model and optimizer state at iteration 141 to ./logs\44k\G_120800.pth
2023-03-16 22:28:07,891 44k INFO Saving model and optimizer state at iteration 141 to ./logs\44k\D_120800.pth
2023-03-16 22:30:48,600 44k INFO Train Epoch: 141 [37%]
2023-03-16 22:30:48,600 44k INFO Losses: [2.5587451457977295, 1.9840689897537231, 7.809965133666992, 17.580013275146484, 1.1884479522705078], step: 121000, lr: 9.826511608001993e-05
2023-03-16 22:33:50,899 44k INFO Train Epoch: 141 [60%]
2023-03-16 22:33:50,899 44k INFO Losses: [2.608827829360962, 2.1126480102539062, 8.765237808227539, 17.291404724121094, 1.3661949634552002], step: 121200, lr: 9.826511608001993e-05
2023-03-16 22:36:23,540 44k INFO Train Epoch: 141 [84%]
2023-03-16 22:36:23,541 44k INFO Losses: [2.4790761470794678, 2.2974586486816406, 14.068603515625, 19.655866622924805, 1.4581853151321411], step: 121400, lr: 9.826511608001993e-05
2023-03-16 22:38:13,204 44k INFO ====> Epoch: 141, cost 729.29 s
2023-03-16 22:39:11,276 44k INFO Train Epoch: 142 [7%]
2023-03-16 22:39:11,277 44k INFO Losses: [2.5454375743865967, 2.2510147094726562, 6.980429649353027, 17.490814208984375, 0.6913356184959412], step: 121600, lr: 9.825283294050992e-05
2023-03-16 22:39:15,219 44k INFO Saving model and
optimizer state at iteration 142 to ./logs\44k\G_121600.pth 2023-03-16 22:39:20,094 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\D_121600.pth 2023-03-16 22:42:05,300 44k INFO Train Epoch: 142 [30%] 2023-03-16 22:42:05,301 44k INFO Losses: [2.504025936126709, 2.151691436767578, 5.913041591644287, 14.993463516235352, 1.2519930601119995], step: 121800, lr: 9.825283294050992e-05 2023-03-16 22:44:40,938 44k INFO Train Epoch: 142 [53%] 2023-03-16 22:44:40,945 44k INFO Losses: [2.7238986492156982, 1.797090768814087, 5.831445217132568, 15.547506332397461, 0.9965900778770447], step: 122000, lr: 9.825283294050992e-05 2023-03-16 22:47:16,773 44k INFO Train Epoch: 142 [76%] 2023-03-16 22:47:16,774 44k INFO Losses: [2.7373805046081543, 2.163633108139038, 8.925752639770508, 18.23073387145996, 0.9029061198234558], step: 122200, lr: 9.825283294050992e-05 2023-03-16 22:49:50,215 44k INFO Train Epoch: 142 [100%] 2023-03-16 22:49:50,215 44k INFO Losses: [2.2551932334899902, 2.522709846496582, 12.586668968200684, 19.819482803344727, 0.580974280834198], step: 122400, lr: 9.825283294050992e-05 2023-03-16 22:49:54,338 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\G_122400.pth 2023-03-16 22:49:58,677 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\D_122400.pth 2023-03-16 22:50:04,284 44k INFO ====> Epoch: 142, cost 711.08 s 2023-03-16 22:52:58,101 44k INFO Train Epoch: 143 [23%] 2023-03-16 22:52:58,102 44k INFO Losses: [2.595994472503662, 2.1094799041748047, 10.695647239685059, 18.222400665283203, 1.1259431838989258], step: 122600, lr: 9.824055133639235e-05 2023-03-16 22:55:36,646 44k INFO Train Epoch: 143 [46%] 2023-03-16 22:55:36,646 44k INFO Losses: [2.823068857192993, 1.8365389108657837, 6.985522270202637, 12.749483108520508, 1.2464447021484375], step: 122800, lr: 9.824055133639235e-05 2023-03-16 22:58:14,885 44k INFO Train Epoch: 143 [69%] 2023-03-16 22:58:14,885 44k INFO Losses: [2.576725959777832, 2.0482053756713867, 8.435483932495117, 15.820399284362793, 0.8962357640266418], step: 123000, lr: 9.824055133639235e-05 2023-03-16 23:00:50,632 44k INFO Train Epoch: 143 [92%] 2023-03-16 23:00:50,632 44k INFO Losses: [2.315415382385254, 2.428802251815796, 9.45921802520752, 19.94500732421875, 1.3810192346572876], step: 123200, lr: 9.824055133639235e-05 2023-03-16 23:00:54,586 44k INFO Saving model and optimizer state at iteration 143 to ./logs\44k\G_123200.pth 2023-03-16 23:00:58,813 44k INFO Saving model and optimizer state at iteration 143 to ./logs\44k\D_123200.pth 2023-03-16 23:01:51,482 44k INFO ====> Epoch: 143, cost 707.20 s 2023-03-16 23:03:48,132 44k INFO Train Epoch: 144 [16%] 2023-03-16 23:03:48,133 44k INFO Losses: [2.6273438930511475, 1.9956663846969604, 4.523970603942871, 11.16288948059082, 1.0220561027526855], step: 123400, lr: 9.822827126747529e-05 2023-03-16 23:06:22,372 44k INFO Train Epoch: 144 [39%] 2023-03-16 23:06:22,372 44k INFO Losses: [2.4827325344085693, 2.2570762634277344, 12.361068725585938, 17.617965698242188, 1.2966997623443604], step: 123600, lr: 9.822827126747529e-05 2023-03-16 23:09:10,747 44k INFO Train Epoch: 144 [62%] 2023-03-16 23:09:10,748 44k INFO Losses: [2.8704357147216797, 2.022487163543701, 7.9073333740234375, 14.66025447845459, 0.9383806586265564], step: 123800, lr: 9.822827126747529e-05 2023-03-16 23:12:03,876 44k INFO Train Epoch: 144 [85%] 2023-03-16 23:12:03,876 44k INFO Losses: [2.484707832336426, 2.0236706733703613, 8.094359397888184, 16.48890495300293, 0.8600778579711914], step: 
124000, lr: 9.822827126747529e-05 2023-03-16 23:12:07,814 44k INFO Saving model and optimizer state at iteration 144 to ./logs\44k\G_124000.pth 2023-03-16 23:12:12,842 44k INFO Saving model and optimizer state at iteration 144 to ./logs\44k\D_124000.pth 2023-03-16 23:14:04,829 44k INFO ====> Epoch: 144, cost 733.35 s 2023-03-16 23:15:13,036 44k INFO Train Epoch: 145 [8%] 2023-03-16 23:15:13,037 44k INFO Losses: [2.7076497077941895, 2.054171085357666, 8.532033920288086, 17.94244384765625, 1.2180298566818237], step: 124200, lr: 9.821599273356685e-05 2023-03-16 23:17:48,695 44k INFO Train Epoch: 145 [32%] 2023-03-16 23:17:48,695 44k INFO Losses: [2.621206045150757, 2.2107436656951904, 10.023968696594238, 15.911124229431152, 1.1811844110488892], step: 124400, lr: 9.821599273356685e-05 2023-03-16 23:20:25,801 44k INFO Train Epoch: 145 [55%] 2023-03-16 23:20:25,802 44k INFO Losses: [2.649275541305542, 2.1416778564453125, 11.14062786102295, 18.206287384033203, 1.0119317770004272], step: 124600, lr: 9.821599273356685e-05 2023-03-16 23:23:31,227 44k INFO Train Epoch: 145 [78%] 2023-03-16 23:23:31,227 44k INFO Losses: [2.331132650375366, 2.140429973602295, 9.650829315185547, 19.255184173583984, 1.0140457153320312], step: 124800, lr: 9.821599273356685e-05 2023-03-16 23:23:35,200 44k INFO Saving model and optimizer state at iteration 145 to ./logs\44k\G_124800.pth 2023-03-16 23:23:39,545 44k INFO Saving model and optimizer state at iteration 145 to ./logs\44k\D_124800.pth 2023-03-16 23:26:25,453 44k INFO ====> Epoch: 145, cost 740.62 s 2023-03-16 23:26:46,463 44k INFO Train Epoch: 146 [1%] 2023-03-16 23:26:46,464 44k INFO Losses: [2.358099937438965, 2.353086233139038, 11.549097061157227, 19.048086166381836, 0.7859032154083252], step: 125000, lr: 9.820371573447515e-05 2023-03-16 23:29:28,869 44k INFO Train Epoch: 146 [24%] 2023-03-16 23:29:28,869 44k INFO Losses: [2.405975341796875, 2.0758254528045654, 11.106308937072754, 18.427959442138672, 1.1071792840957642], step: 125200, lr: 9.820371573447515e-05 2023-03-16 23:32:13,831 44k INFO Train Epoch: 146 [48%] 2023-03-16 23:32:13,831 44k INFO Losses: [2.375122547149658, 2.2398929595947266, 10.212577819824219, 19.27030372619629, 0.9906389713287354], step: 125400, lr: 9.820371573447515e-05 2023-03-16 23:34:57,699 44k INFO Train Epoch: 146 [71%] 2023-03-16 23:34:57,700 44k INFO Losses: [2.6488022804260254, 2.088793992996216, 8.362471580505371, 19.329336166381836, 1.0661683082580566], step: 125600, lr: 9.820371573447515e-05 2023-03-16 23:35:01,736 44k INFO Saving model and optimizer state at iteration 146 to ./logs\44k\G_125600.pth 2023-03-16 23:35:06,387 44k INFO Saving model and optimizer state at iteration 146 to ./logs\44k\D_125600.pth 2023-03-16 23:37:48,355 44k INFO Train Epoch: 146 [94%] 2023-03-16 23:37:48,356 44k INFO Losses: [2.359959602355957, 2.166764974594116, 10.199384689331055, 18.601924896240234, 1.0775203704833984], step: 125800, lr: 9.820371573447515e-05 2023-03-16 23:38:27,387 44k INFO ====> Epoch: 146, cost 721.93 s 2023-03-16 23:41:08,992 44k INFO Train Epoch: 147 [17%] 2023-03-16 23:41:08,993 44k INFO Losses: [2.4349617958068848, 2.4245314598083496, 10.553848266601562, 17.04421615600586, 1.2597944736480713], step: 126000, lr: 9.819144027000834e-05 2023-03-16 23:44:02,700 44k INFO Train Epoch: 147 [40%] 2023-03-16 23:44:02,701 44k INFO Losses: [2.577012062072754, 2.090944290161133, 9.388341903686523, 16.80068588256836, 0.9351382851600647], step: 126200, lr: 9.819144027000834e-05 2023-03-16 23:46:38,870 44k INFO Train Epoch: 147 [64%] 
2023-03-16 23:46:38,870 44k INFO Losses: [2.566521167755127, 2.2643845081329346, 6.40802526473999, 15.741965293884277, 1.0539950132369995], step: 126400, lr: 9.819144027000834e-05 2023-03-16 23:46:42,890 44k INFO Saving model and optimizer state at iteration 147 to ./logs\44k\G_126400.pth 2023-03-16 23:46:47,087 44k INFO Saving model and optimizer state at iteration 147 to ./logs\44k\D_126400.pth 2023-03-16 23:49:24,033 44k INFO Train Epoch: 147 [87%] 2023-03-16 23:49:24,033 44k INFO Losses: [2.5308022499084473, 2.215419292449951, 12.526217460632324, 20.01734161376953, 0.766711950302124], step: 126600, lr: 9.819144027000834e-05 2023-03-16 23:50:49,524 44k INFO ====> Epoch: 147, cost 742.14 s 2023-03-16 23:52:10,480 44k INFO Train Epoch: 148 [10%] 2023-03-16 23:52:10,481 44k INFO Losses: [2.488266944885254, 2.0867743492126465, 7.993011474609375, 17.101844787597656, 1.0308082103729248], step: 126800, lr: 9.817916633997459e-05 2023-03-16 23:55:01,081 44k INFO Train Epoch: 148 [33%] 2023-03-16 23:55:01,081 44k INFO Losses: [2.655285358428955, 2.059166193008423, 5.7750468254089355, 17.350406646728516, 1.3344104290008545], step: 127000, lr: 9.817916633997459e-05 2023-03-16 23:57:43,433 44k INFO Train Epoch: 148 [56%] 2023-03-16 23:57:43,433 44k INFO Losses: [2.7036352157592773, 1.8317204713821411, 8.408808708190918, 17.24069595336914, 1.14274001121521], step: 127200, lr: 9.817916633997459e-05 2023-03-16 23:57:47,587 44k INFO Saving model and optimizer state at iteration 148 to ./logs\44k\G_127200.pth 2023-03-16 23:57:52,501 44k INFO Saving model and optimizer state at iteration 148 to ./logs\44k\D_127200.pth 2023-03-17 00:00:46,364 44k INFO Train Epoch: 148 [80%] 2023-03-17 00:00:46,364 44k INFO Losses: [2.447233200073242, 2.4372596740722656, 8.824172019958496, 17.66782569885254, 1.0986580848693848], step: 127400, lr: 9.817916633997459e-05 2023-03-17 00:03:10,573 44k INFO ====> Epoch: 148, cost 741.05 s 2023-03-17 00:03:41,264 44k INFO Train Epoch: 149 [3%] 2023-03-17 00:03:41,264 44k INFO Losses: [2.751725673675537, 1.7117197513580322, 5.903391361236572, 17.66267204284668, 0.8748002052307129], step: 127600, lr: 9.816689394418209e-05 2023-03-17 00:06:13,500 44k INFO Train Epoch: 149 [26%] 2023-03-17 00:06:13,500 44k INFO Losses: [2.3793113231658936, 2.2046382427215576, 9.87548542022705, 17.781047821044922, 1.060261845588684], step: 127800, lr: 9.816689394418209e-05 2023-03-17 00:08:41,543 44k INFO Train Epoch: 149 [49%] 2023-03-17 00:08:41,544 44k INFO Losses: [2.5828328132629395, 2.0902047157287598, 9.818443298339844, 19.784555435180664, 1.2083024978637695], step: 128000, lr: 9.816689394418209e-05 2023-03-17 00:08:45,566 44k INFO Saving model and optimizer state at iteration 149 to ./logs\44k\G_128000.pth 2023-03-17 00:08:50,238 44k INFO Saving model and optimizer state at iteration 149 to ./logs\44k\D_128000.pth 2023-03-17 00:11:21,674 44k INFO Train Epoch: 149 [72%] 2023-03-17 00:11:21,674 44k INFO Losses: [2.48557186126709, 2.3923754692077637, 7.523228168487549, 14.440921783447266, 1.0974245071411133], step: 128200, lr: 9.816689394418209e-05 2023-03-17 00:13:49,975 44k INFO Train Epoch: 149 [96%] 2023-03-17 00:13:49,975 44k INFO Losses: [2.5967092514038086, 2.052150011062622, 8.133780479431152, 15.948707580566406, 0.6137081384658813], step: 128400, lr: 9.816689394418209e-05 2023-03-17 00:14:17,930 44k INFO ====> Epoch: 149, cost 667.36 s 2023-03-17 00:16:30,736 44k INFO Train Epoch: 150 [19%] 2023-03-17 00:16:30,737 44k INFO Losses: [2.415712594985962, 2.360647678375244, 6.447394371032715, 
12.59783935546875, 1.1369941234588623], step: 128600, lr: 9.815462308243906e-05 2023-03-17 00:18:59,220 44k INFO Train Epoch: 150 [42%] 2023-03-17 00:18:59,220 44k INFO Losses: [2.699209451675415, 2.1566219329833984, 10.336148262023926, 16.72136116027832, 1.168589472770691], step: 128800, lr: 9.815462308243906e-05 2023-03-17 00:19:03,272 44k INFO Saving model and optimizer state at iteration 150 to ./logs\44k\G_128800.pth 2023-03-17 00:19:08,195 44k INFO Saving model and optimizer state at iteration 150 to ./logs\44k\D_128800.pth 2023-03-17 00:21:39,434 44k INFO Train Epoch: 150 [65%] 2023-03-17 00:21:39,435 44k INFO Losses: [2.4148716926574707, 2.1401431560516357, 11.968809127807617, 19.13926887512207, 0.8973556756973267], step: 129000, lr: 9.815462308243906e-05 2023-03-17 00:24:08,168 44k INFO Train Epoch: 150 [88%] 2023-03-17 00:24:08,168 44k INFO Losses: [2.1991350650787354, 2.901834487915039, 9.527708053588867, 14.045792579650879, 1.1244757175445557], step: 129200, lr: 9.815462308243906e-05 2023-03-17 00:25:21,864 44k INFO ====> Epoch: 150, cost 663.93 s 2023-03-17 00:26:48,573 44k INFO Train Epoch: 151 [12%] 2023-03-17 00:26:48,574 44k INFO Losses: [2.5896153450012207, 2.055222511291504, 7.413106918334961, 17.227890014648438, 1.1841509342193604], step: 129400, lr: 9.814235375455375e-05 2023-03-17 00:29:17,233 44k INFO Train Epoch: 151 [35%] 2023-03-17 00:29:17,233 44k INFO Losses: [2.256833791732788, 2.686868906021118, 11.881604194641113, 18.35038185119629, 1.171618938446045], step: 129600, lr: 9.814235375455375e-05 2023-03-17 00:29:21,251 44k INFO Saving model and optimizer state at iteration 151 to ./logs\44k\G_129600.pth 2023-03-17 00:29:26,020 44k INFO Saving model and optimizer state at iteration 151 to ./logs\44k\D_129600.pth 2023-03-17 00:31:57,422 44k INFO Train Epoch: 151 [58%] 2023-03-17 00:31:57,422 44k INFO Losses: [2.539158582687378, 2.4129598140716553, 7.630036354064941, 16.307376861572266, 1.0400872230529785], step: 129800, lr: 9.814235375455375e-05 2023-03-17 00:34:26,158 44k INFO Train Epoch: 151 [81%] 2023-03-17 00:34:26,159 44k INFO Losses: [2.617077350616455, 2.3197813034057617, 9.131001472473145, 18.423721313476562, 1.1265307664871216], step: 130000, lr: 9.814235375455375e-05 2023-03-17 00:36:26,066 44k INFO ====> Epoch: 151, cost 664.20 s 2023-03-17 00:37:06,802 44k INFO Train Epoch: 152 [4%] 2023-03-17 00:37:06,802 44k INFO Losses: [2.6361520290374756, 2.234288454055786, 13.311426162719727, 20.51342010498047, 1.1426868438720703], step: 130200, lr: 9.813008596033443e-05 2023-03-17 00:39:35,759 44k INFO Train Epoch: 152 [28%] 2023-03-17 00:39:35,759 44k INFO Losses: [2.291067600250244, 2.374169111251831, 11.14600944519043, 19.074310302734375, 1.3418415784835815], step: 130400, lr: 9.813008596033443e-05 2023-03-17 00:39:39,692 44k INFO Saving model and optimizer state at iteration 152 to ./logs\44k\G_130400.pth 2023-03-17 00:39:44,171 44k INFO Saving model and optimizer state at iteration 152 to ./logs\44k\D_130400.pth 2023-03-17 00:42:15,212 44k INFO Train Epoch: 152 [51%] 2023-03-17 00:42:15,213 44k INFO Losses: [2.564774990081787, 2.0989067554473877, 7.439937591552734, 17.21500015258789, 1.1871618032455444], step: 130600, lr: 9.813008596033443e-05 2023-03-17 00:44:43,605 44k INFO Train Epoch: 152 [74%] 2023-03-17 00:44:43,605 44k INFO Losses: [2.399200201034546, 2.3625001907348633, 8.681337356567383, 19.383737564086914, 0.9885560274124146], step: 130800, lr: 9.813008596033443e-05 2023-03-17 00:47:12,029 44k INFO Train Epoch: 152 [97%] 2023-03-17 00:47:12,030 
44k INFO Losses: [2.525820732116699, 2.124408483505249, 9.25457763671875, 21.540849685668945, 1.2362661361694336], step: 131000, lr: 9.813008596033443e-05 2023-03-17 00:47:29,491 44k INFO ====> Epoch: 152, cost 663.43 s 2023-03-17 00:49:52,829 44k INFO Train Epoch: 153 [20%] 2023-03-17 00:49:52,829 44k INFO Losses: [2.6160972118377686, 1.9533493518829346, 7.806790828704834, 13.775176048278809, 0.9596372842788696], step: 131200, lr: 9.811781969958938e-05 2023-03-17 00:49:56,800 44k INFO Saving model and optimizer state at iteration 153 to ./logs\44k\G_131200.pth 2023-03-17 00:50:00,874 44k INFO Saving model and optimizer state at iteration 153 to ./logs\44k\D_131200.pth 2023-03-17 00:52:32,174 44k INFO Train Epoch: 153 [44%] 2023-03-17 00:52:32,174 44k INFO Losses: [2.3606553077697754, 2.300306558609009, 13.016871452331543, 20.110557556152344, 1.3908571004867554], step: 131400, lr: 9.811781969958938e-05 2023-03-17 00:55:00,853 44k INFO Train Epoch: 153 [67%] 2023-03-17 00:55:00,854 44k INFO Losses: [2.5813591480255127, 2.2664458751678467, 8.490554809570312, 17.80695152282715, 1.205512523651123], step: 131600, lr: 9.811781969958938e-05 2023-03-17 00:57:29,469 44k INFO Train Epoch: 153 [90%] 2023-03-17 00:57:29,470 44k INFO Losses: [2.3462088108062744, 2.336672067642212, 12.323161125183105, 16.424224853515625, 0.749019980430603], step: 131800, lr: 9.811781969958938e-05 2023-03-17 00:58:32,769 44k INFO ====> Epoch: 153, cost 663.28 s 2023-03-17 01:00:10,057 44k INFO Train Epoch: 154 [13%] 2023-03-17 01:00:10,058 44k INFO Losses: [2.5691049098968506, 2.1054344177246094, 9.768426895141602, 19.627782821655273, 1.0007680654525757], step: 132000, lr: 9.810555497212693e-05 2023-03-17 01:00:14,009 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\G_132000.pth 2023-03-17 01:00:18,360 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\D_132000.pth 2023-03-17 01:02:49,489 44k INFO Train Epoch: 154 [36%] 2023-03-17 01:02:49,489 44k INFO Losses: [2.7464959621429443, 2.425760507583618, 11.475157737731934, 19.428955078125, 1.4478747844696045], step: 132200, lr: 9.810555497212693e-05 2023-03-17 01:05:18,110 44k INFO Train Epoch: 154 [60%] 2023-03-17 01:05:18,110 44k INFO Losses: [2.5280842781066895, 2.5086557865142822, 7.379947185516357, 16.56066131591797, 1.0479612350463867], step: 132400, lr: 9.810555497212693e-05 2023-03-17 01:07:46,859 44k INFO Train Epoch: 154 [83%] 2023-03-17 01:07:46,860 44k INFO Losses: [2.411043882369995, 2.3865270614624023, 10.344655990600586, 20.237417221069336, 1.1374088525772095], step: 132600, lr: 9.810555497212693e-05 2023-03-17 01:09:36,084 44k INFO ====> Epoch: 154, cost 663.31 s 2023-03-17 01:10:27,244 44k INFO Train Epoch: 155 [6%] 2023-03-17 01:10:27,244 44k INFO Losses: [2.4393982887268066, 2.163987874984741, 10.541488647460938, 18.911582946777344, 1.0273256301879883], step: 132800, lr: 9.809329177775541e-05 2023-03-17 01:10:31,237 44k INFO Saving model and optimizer state at iteration 155 to ./logs\44k\G_132800.pth 2023-03-17 01:10:35,661 44k INFO Saving model and optimizer state at iteration 155 to ./logs\44k\D_132800.pth 2023-03-17 01:13:07,116 44k INFO Train Epoch: 155 [29%] 2023-03-17 01:13:07,116 44k INFO Losses: [2.6554982662200928, 2.0798585414886475, 5.977485656738281, 12.61422348022461, 1.0354533195495605], step: 133000, lr: 9.809329177775541e-05 2023-03-17 01:15:35,447 44k INFO Train Epoch: 155 [52%] 2023-03-17 01:15:35,448 44k INFO Losses: [2.564168930053711, 2.184077739715576, 9.992856979370117, 16.83275604248047, 
1.1839832067489624], step: 133200, lr: 9.809329177775541e-05 2023-03-17 01:18:04,289 44k INFO Train Epoch: 155 [76%] 2023-03-17 01:18:04,290 44k INFO Losses: [2.6933979988098145, 1.9334681034088135, 9.285740852355957, 14.454839706420898, 1.0433650016784668], step: 133400, lr: 9.809329177775541e-05 2023-03-17 01:20:32,764 44k INFO Train Epoch: 155 [99%] 2023-03-17 01:20:32,765 44k INFO Losses: [2.276045799255371, 2.7131402492523193, 7.461902141571045, 11.761798858642578, 0.852959156036377], step: 133600, lr: 9.809329177775541e-05 2023-03-17 01:20:36,789 44k INFO Saving model and optimizer state at iteration 155 to ./logs\44k\G_133600.pth 2023-03-17 01:20:41,595 44k INFO Saving model and optimizer state at iteration 155 to ./logs\44k\D_133600.pth 2023-03-17 01:20:51,577 44k INFO ====> Epoch: 155, cost 675.49 s 2023-03-17 01:23:24,915 44k INFO Train Epoch: 156 [22%] 2023-03-17 01:23:24,915 44k INFO Losses: [2.517193555831909, 2.196741819381714, 8.215263366699219, 16.506484985351562, 1.0288277864456177], step: 133800, lr: 9.808103011628319e-05 2023-03-17 01:25:53,509 44k INFO Train Epoch: 156 [45%] 2023-03-17 01:25:53,509 44k INFO Losses: [2.7307496070861816, 1.8829314708709717, 5.085661888122559, 12.908968925476074, 0.9342838525772095], step: 134000, lr: 9.808103011628319e-05 2023-03-17 01:28:22,067 44k INFO Train Epoch: 156 [68%] 2023-03-17 01:28:22,067 44k INFO Losses: [2.5497350692749023, 2.1451120376586914, 7.920618057250977, 14.070574760437012, 1.183791995048523], step: 134200, lr: 9.808103011628319e-05 2023-03-17 01:30:50,608 44k INFO Train Epoch: 156 [92%] 2023-03-17 01:30:50,608 44k INFO Losses: [2.6691949367523193, 2.116057872772217, 6.455686569213867, 14.821425437927246, 0.9019187688827515], step: 134400, lr: 9.808103011628319e-05 2023-03-17 01:30:54,567 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\G_134400.pth 2023-03-17 01:30:59,036 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\D_134400.pth 2023-03-17 01:31:55,049 44k INFO ====> Epoch: 156, cost 663.47 s 2023-03-17 01:33:42,548 44k INFO Train Epoch: 157 [15%] 2023-03-17 01:33:42,549 44k INFO Losses: [2.3345847129821777, 2.6085715293884277, 7.78112268447876, 14.715474128723145, 1.159313678741455], step: 134600, lr: 9.806876998751865e-05 2023-03-17 01:36:11,267 44k INFO Train Epoch: 157 [38%] 2023-03-17 01:36:11,267 44k INFO Losses: [2.393420696258545, 1.974041223526001, 12.565536499023438, 16.287708282470703, 1.0047355890274048], step: 134800, lr: 9.806876998751865e-05 2023-03-17 01:38:40,021 44k INFO Train Epoch: 157 [61%] 2023-03-17 01:38:40,021 44k INFO Losses: [2.7354419231414795, 2.0887598991394043, 8.264328002929688, 15.367631912231445, 1.069992184638977], step: 135000, lr: 9.806876998751865e-05 2023-03-17 01:41:08,650 44k INFO Train Epoch: 157 [84%] 2023-03-17 01:41:08,650 44k INFO Losses: [2.2983317375183105, 2.3591806888580322, 10.994393348693848, 19.937833786010742, 0.9649831056594849], step: 135200, lr: 9.806876998751865e-05 2023-03-17 01:41:12,605 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\G_135200.pth 2023-03-17 01:41:16,956 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\D_135200.pth 2023-03-17 01:42:58,485 44k INFO ====> Epoch: 157, cost 663.44 s 2023-03-17 01:43:59,929 44k INFO Train Epoch: 158 [8%] 2023-03-17 01:43:59,930 44k INFO Losses: [2.523040771484375, 2.399348020553589, 5.6751813888549805, 10.457598686218262, 1.078735589981079], step: 135400, lr: 9.80565113912702e-05 2023-03-17 01:46:28,574 44k INFO 
Train Epoch: 158 [31%] 2023-03-17 01:46:28,575 44k INFO Losses: [2.6300747394561768, 2.187523126602173, 6.425880432128906, 7.18980598449707, 0.9425265789031982], step: 135600, lr: 9.80565113912702e-05 2023-03-17 01:48:57,032 44k INFO Train Epoch: 158 [54%] 2023-03-17 01:48:57,032 44k INFO Losses: [2.6664721965789795, 2.2597880363464355, 7.8700408935546875, 18.359954833984375, 0.8327602744102478], step: 135800, lr: 9.80565113912702e-05 2023-03-17 01:51:25,683 44k INFO Train Epoch: 158 [77%] 2023-03-17 01:51:25,684 44k INFO Losses: [2.6327970027923584, 2.2266626358032227, 11.46153450012207, 15.719919204711914, 0.5102459192276001], step: 136000, lr: 9.80565113912702e-05 2023-03-17 01:51:29,743 44k INFO Saving model and optimizer state at iteration 158 to ./logs\44k\G_136000.pth 2023-03-17 01:51:34,880 44k INFO Saving model and optimizer state at iteration 158 to ./logs\44k\D_136000.pth 2023-03-17 01:54:02,618 44k INFO ====> Epoch: 158, cost 664.13 s 2023-03-17 01:54:18,379 44k INFO Train Epoch: 159 [0%] 2023-03-17 01:54:18,379 44k INFO Losses: [2.3002219200134277, 2.364471197128296, 10.488377571105957, 20.454450607299805, 1.110022783279419], step: 136200, lr: 9.804425432734629e-05 2023-03-17 01:56:46,943 44k INFO Train Epoch: 159 [24%] 2023-03-17 01:56:46,944 44k INFO Losses: [2.6246657371520996, 2.130096197128296, 11.83732795715332, 16.56211280822754, 0.7863137125968933], step: 136400, lr: 9.804425432734629e-05 2023-03-17 01:59:15,332 44k INFO Train Epoch: 159 [47%] 2023-03-17 01:59:15,332 44k INFO Losses: [2.4440879821777344, 2.3475584983825684, 9.112939834594727, 16.772945404052734, 0.8992201089859009], step: 136600, lr: 9.804425432734629e-05 2023-03-17 02:01:43,912 44k INFO Train Epoch: 159 [70%] 2023-03-17 02:01:43,912 44k INFO Losses: [2.3906445503234863, 2.2145683765411377, 11.601531982421875, 18.37070083618164, 1.1057242155075073], step: 136800, lr: 9.804425432734629e-05 2023-03-17 02:01:48,010 44k INFO Saving model and optimizer state at iteration 159 to ./logs\44k\G_136800.pth 2023-03-17 02:01:52,510 44k INFO Saving model and optimizer state at iteration 159 to ./logs\44k\D_136800.pth 2023-03-17 02:04:24,083 44k INFO Train Epoch: 159 [93%] 2023-03-17 02:04:24,084 44k INFO Losses: [2.7148520946502686, 2.2018675804138184, 6.108780860900879, 17.00740623474121, 1.3248183727264404], step: 137000, lr: 9.804425432734629e-05 2023-03-17 02:05:06,704 44k INFO ====> Epoch: 159, cost 664.09 s 2023-03-17 02:07:04,568 44k INFO Train Epoch: 160 [16%] 2023-03-17 02:07:04,569 44k INFO Losses: [2.5513932704925537, 2.311124324798584, 10.374914169311523, 19.60601806640625, 1.1364154815673828], step: 137200, lr: 9.803199879555537e-05 2023-03-17 02:09:32,708 44k INFO Train Epoch: 160 [40%] 2023-03-17 02:09:32,708 44k INFO Losses: [2.847250461578369, 1.8093777894973755, 6.009575843811035, 14.95215129852295, 1.2205055952072144], step: 137400, lr: 9.803199879555537e-05 2023-03-17 02:12:00,968 44k INFO Train Epoch: 160 [63%] 2023-03-17 02:12:00,968 44k INFO Losses: [2.0765199661254883, 2.5105602741241455, 12.959762573242188, 18.94706153869629, 0.9385449886322021], step: 137600, lr: 9.803199879555537e-05 2023-03-17 02:12:04,968 44k INFO Saving model and optimizer state at iteration 160 to ./logs\44k\G_137600.pth 2023-03-17 02:12:09,575 44k INFO Saving model and optimizer state at iteration 160 to ./logs\44k\D_137600.pth 2023-03-17 02:14:40,798 44k INFO Train Epoch: 160 [86%] 2023-03-17 02:14:40,798 44k INFO Losses: [2.6948587894439697, 2.418565511703491, 10.139283180236816, 18.690683364868164, 
0.8992422819137573], step: 137800, lr: 9.803199879555537e-05 2023-03-17 02:16:09,273 44k INFO ====> Epoch: 160, cost 662.57 s 2023-03-17 02:17:21,196 44k INFO Train Epoch: 161 [9%] 2023-03-17 02:17:21,197 44k INFO Losses: [2.586038589477539, 2.2690281867980957, 9.3168306350708, 17.365747451782227, 0.9121077060699463], step: 138000, lr: 9.801974479570593e-05 2023-03-17 02:19:49,749 44k INFO Train Epoch: 161 [32%] 2023-03-17 02:19:49,749 44k INFO Losses: [2.469874858856201, 2.096616506576538, 11.499251365661621, 18.25429344177246, 1.141331672668457], step: 138200, lr: 9.801974479570593e-05 2023-03-17 02:22:18,083 44k INFO Train Epoch: 161 [56%] 2023-03-17 02:22:18,083 44k INFO Losses: [2.5623390674591064, 2.1181957721710205, 7.102635860443115, 16.999736785888672, 0.8728217482566833], step: 138400, lr: 9.801974479570593e-05 2023-03-17 02:22:22,073 44k INFO Saving model and optimizer state at iteration 161 to ./logs\44k\G_138400.pth 2023-03-17 02:22:26,625 44k INFO Saving model and optimizer state at iteration 161 to ./logs\44k\D_138400.pth 2023-03-17 02:24:58,206 44k INFO Train Epoch: 161 [79%] 2023-03-17 02:24:58,207 44k INFO Losses: [2.594107151031494, 2.0858473777770996, 9.875823974609375, 19.862407684326172, 1.103354811668396], step: 138600, lr: 9.801974479570593e-05 2023-03-17 02:27:12,656 44k INFO ====> Epoch: 161, cost 663.38 s 2023-03-17 02:27:38,687 44k INFO Train Epoch: 162 [2%] 2023-03-17 02:27:38,687 44k INFO Losses: [2.477698802947998, 2.258739471435547, 5.865163326263428, 13.129929542541504, 1.0137971639633179], step: 138800, lr: 9.800749232760646e-05 2023-03-17 02:30:07,102 44k INFO Train Epoch: 162 [25%] 2023-03-17 02:30:07,102 44k INFO Losses: [2.529237747192383, 2.427645444869995, 8.749346733093262, 17.2939510345459, 0.915623128414154], step: 139000, lr: 9.800749232760646e-05 2023-03-17 02:32:35,116 44k INFO Train Epoch: 162 [48%] 2023-03-17 02:32:35,116 44k INFO Losses: [2.6592180728912354, 2.359607458114624, 9.900457382202148, 18.25187873840332, 1.077658772468567], step: 139200, lr: 9.800749232760646e-05 2023-03-17 02:32:39,097 44k INFO Saving model and optimizer state at iteration 162 to ./logs\44k\G_139200.pth 2023-03-17 02:32:43,554 44k INFO Saving model and optimizer state at iteration 162 to ./logs\44k\D_139200.pth 2023-03-17 02:35:15,162 44k INFO Train Epoch: 162 [72%] 2023-03-17 02:35:15,162 44k INFO Losses: [2.872166156768799, 2.401613235473633, 8.554033279418945, 16.712404251098633, 1.0579465627670288], step: 139400, lr: 9.800749232760646e-05 2023-03-17 02:37:43,457 44k INFO Train Epoch: 162 [95%] 2023-03-17 02:37:43,458 44k INFO Losses: [2.4037578105926514, 2.1888580322265625, 13.578193664550781, 19.663280487060547, 0.9334489703178406], step: 139600, lr: 9.800749232760646e-05 2023-03-17 02:38:15,759 44k INFO ====> Epoch: 162, cost 663.10 s 2023-03-17 02:40:24,023 44k INFO Train Epoch: 163 [18%] 2023-03-17 02:40:24,024 44k INFO Losses: [2.3230702877044678, 2.2706871032714844, 12.91253662109375, 16.13296890258789, 1.1754270792007446], step: 139800, lr: 9.79952413910655e-05 2023-03-17 02:42:52,267 44k INFO Train Epoch: 163 [41%] 2023-03-17 02:42:52,267 44k INFO Losses: [2.4422245025634766, 1.9766712188720703, 13.0701265335083, 19.008121490478516, 1.0905224084854126], step: 140000, lr: 9.79952413910655e-05 2023-03-17 02:42:56,273 44k INFO Saving model and optimizer state at iteration 163 to ./logs\44k\G_140000.pth 2023-03-17 02:43:00,793 44k INFO Saving model and optimizer state at iteration 163 to ./logs\44k\D_140000.pth 2023-03-17 02:45:32,080 44k INFO Train Epoch: 
163 [65%] 2023-03-17 02:45:32,081 44k INFO Losses: [2.350684642791748, 2.5365664958953857, 11.221023559570312, 17.91377067565918, 1.3627434968948364], step: 140200, lr: 9.79952413910655e-05 2023-03-17 02:48:00,342 44k INFO Train Epoch: 163 [88%] 2023-03-17 02:48:00,343 44k INFO Losses: [2.862650156021118, 1.8294017314910889, 5.306360721588135, 12.428266525268555, 0.9324517846107483], step: 140400, lr: 9.79952413910655e-05 2023-03-17 02:49:18,379 44k INFO ====> Epoch: 163, cost 662.62 s 2023-03-17 02:50:40,542 44k INFO Train Epoch: 164 [11%] 2023-03-17 02:50:40,542 44k INFO Losses: [2.446183681488037, 2.6024868488311768, 7.289701461791992, 14.351048469543457, 0.8997002243995667], step: 140600, lr: 9.798299198589162e-05 2023-03-17 02:53:08,707 44k INFO Train Epoch: 164 [34%] 2023-03-17 02:53:08,708 44k INFO Losses: [2.5445973873138428, 2.0433363914489746, 9.31977653503418, 15.808696746826172, 1.0555921792984009], step: 140800, lr: 9.798299198589162e-05 2023-03-17 02:53:12,735 44k INFO Saving model and optimizer state at iteration 164 to ./logs\44k\G_140800.pth 2023-03-17 02:53:17,536 44k INFO Saving model and optimizer state at iteration 164 to ./logs\44k\D_140800.pth 2023-03-17 02:55:48,920 44k INFO Train Epoch: 164 [57%] 2023-03-17 02:55:48,920 44k INFO Losses: [2.5227034091949463, 2.127213954925537, 10.770003318786621, 19.890451431274414, 1.0163207054138184], step: 141000, lr: 9.798299198589162e-05 2023-03-17 02:58:17,573 44k INFO Train Epoch: 164 [81%] 2023-03-17 02:58:17,573 44k INFO Losses: [2.763941526412964, 1.9198627471923828, 7.280249118804932, 15.13233757019043, 1.3059158325195312], step: 141200, lr: 9.798299198589162e-05 2023-03-17 03:00:21,572 44k INFO ====> Epoch: 164, cost 663.19 s 2023-03-17 03:00:57,851 44k INFO Train Epoch: 165 [4%] 2023-03-17 03:00:57,852 44k INFO Losses: [2.447089672088623, 2.7299928665161133, 7.3454976081848145, 11.7441987991333, 1.3206690549850464], step: 141400, lr: 9.797074411189339e-05 2023-03-17 03:03:26,388 44k INFO Train Epoch: 165 [27%] 2023-03-17 03:03:26,389 44k INFO Losses: [2.516489028930664, 2.2890121936798096, 9.439591407775879, 17.224992752075195, 0.8915944695472717], step: 141600, lr: 9.797074411189339e-05 2023-03-17 03:03:30,515 44k INFO Saving model and optimizer state at iteration 165 to ./logs\44k\G_141600.pth 2023-03-17 03:03:35,459 44k INFO Saving model and optimizer state at iteration 165 to ./logs\44k\D_141600.pth 2023-03-17 03:06:06,969 44k INFO Train Epoch: 165 [50%] 2023-03-17 03:06:06,969 44k INFO Losses: [2.7116169929504395, 1.9286301136016846, 5.885444164276123, 10.189154624938965, 1.3165570497512817], step: 141800, lr: 9.797074411189339e-05 2023-03-17 03:08:36,014 44k INFO Train Epoch: 165 [73%] 2023-03-17 03:08:36,015 44k INFO Losses: [2.6926136016845703, 1.7219265699386597, 6.048027515411377, 10.557703971862793, 1.0029960870742798], step: 142000, lr: 9.797074411189339e-05 2023-03-17 03:11:04,317 44k INFO Train Epoch: 165 [97%] 2023-03-17 03:11:04,317 44k INFO Losses: [2.3012771606445312, 2.398860216140747, 12.689188957214355, 19.434764862060547, 1.4195674657821655], step: 142200, lr: 9.797074411189339e-05 2023-03-17 03:11:26,267 44k INFO ====> Epoch: 165, cost 664.70 s 2023-03-17 03:13:44,905 44k INFO Train Epoch: 166 [20%] 2023-03-17 03:13:44,905 44k INFO Losses: [2.640139579772949, 1.974373698234558, 6.0584845542907715, 13.671327590942383, 1.248789668083191], step: 142400, lr: 9.795849776887939e-05 2023-03-17 03:13:48,817 44k INFO Saving model and optimizer state at iteration 166 to ./logs\44k\G_142400.pth 2023-03-17 
03:13:53,387 44k INFO Saving model and optimizer state at iteration 166 to ./logs\44k\D_142400.pth 2023-03-17 03:16:24,484 44k INFO Train Epoch: 166 [43%] 2023-03-17 03:16:24,485 44k INFO Losses: [2.706765651702881, 2.155057907104492, 4.879976749420166, 11.718202590942383, 0.8339847922325134], step: 142600, lr: 9.795849776887939e-05 2023-03-17 03:18:53,181 44k INFO Train Epoch: 166 [66%] 2023-03-17 03:18:53,182 44k INFO Losses: [2.3635663986206055, 2.335904121398926, 10.191503524780273, 14.623701095581055, 0.9413886666297913], step: 142800, lr: 9.795849776887939e-05 2023-03-17 03:21:21,772 44k INFO Train Epoch: 166 [89%] 2023-03-17 03:21:21,772 44k INFO Losses: [2.2081024646759033, 2.324535369873047, 7.493485450744629, 16.667739868164062, 1.2379875183105469], step: 143000, lr: 9.795849776887939e-05 2023-03-17 03:22:29,504 44k INFO ====> Epoch: 166, cost 663.24 s 2023-03-17 03:24:02,076 44k INFO Train Epoch: 167 [13%] 2023-03-17 03:24:02,076 44k INFO Losses: [2.424859046936035, 2.338594675064087, 9.637482643127441, 15.816794395446777, 0.9032336473464966], step: 143200, lr: 9.794625295665828e-05 2023-03-17 03:24:06,009 44k INFO Saving model and optimizer state at iteration 167 to ./logs\44k\G_143200.pth 2023-03-17 03:24:10,736 44k INFO Saving model and optimizer state at iteration 167 to ./logs\44k\D_143200.pth 2023-03-17 03:26:42,240 44k INFO Train Epoch: 167 [36%] 2023-03-17 03:26:42,241 44k INFO Losses: [2.451507568359375, 2.2082338333129883, 9.615740776062012, 14.761962890625, 1.3188064098358154], step: 143400, lr: 9.794625295665828e-05 2023-03-17 03:29:10,613 44k INFO Train Epoch: 167 [59%] 2023-03-17 03:29:10,614 44k INFO Losses: [2.4975109100341797, 2.489692449569702, 8.87106990814209, 19.038158416748047, 0.9233438968658447], step: 143600, lr: 9.794625295665828e-05 2023-03-17 03:31:39,238 44k INFO Train Epoch: 167 [82%] 2023-03-17 03:31:39,239 44k INFO Losses: [2.2628579139709473, 2.7422564029693604, 8.794774055480957, 16.297344207763672, 1.094745397567749], step: 143800, lr: 9.794625295665828e-05 2023-03-17 03:33:32,842 44k INFO ====> Epoch: 167, cost 663.34 s 2023-03-17 03:34:19,550 44k INFO Train Epoch: 168 [5%] 2023-03-17 03:34:19,551 44k INFO Losses: [2.7288668155670166, 1.9101848602294922, 10.881402015686035, 18.40813636779785, 0.9740910530090332], step: 144000, lr: 9.79340096750387e-05 2023-03-17 03:34:23,521 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\G_144000.pth 2023-03-17 03:34:27,647 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\D_144000.pth 2023-03-17 03:36:58,889 44k INFO Train Epoch: 168 [29%] 2023-03-17 03:36:58,890 44k INFO Losses: [2.5587921142578125, 2.2055411338806152, 8.42703914642334, 16.04062271118164, 1.4534878730773926], step: 144200, lr: 9.79340096750387e-05 2023-03-17 03:39:27,310 44k INFO Train Epoch: 168 [52%] 2023-03-17 03:39:27,310 44k INFO Losses: [2.3272881507873535, 2.3591809272766113, 10.390473365783691, 17.78326416015625, 0.999224066734314], step: 144400, lr: 9.79340096750387e-05 2023-03-17 03:41:55,864 44k INFO Train Epoch: 168 [75%] 2023-03-17 03:41:55,864 44k INFO Losses: [2.569274663925171, 2.5329504013061523, 10.683361053466797, 16.084957122802734, 1.4148212671279907], step: 144600, lr: 9.79340096750387e-05 2023-03-17 03:44:23,907 44k INFO Train Epoch: 168 [98%] 2023-03-17 03:44:23,908 44k INFO Losses: [2.7039825916290283, 2.183861255645752, 7.462647914886475, 16.30885887145996, 0.8449428081512451], step: 144800, lr: 9.79340096750387e-05 2023-03-17 03:44:27,957 44k INFO Saving model and 
optimizer state at iteration 168 to ./logs\44k\G_144800.pth 2023-03-17 03:44:32,555 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\D_144800.pth 2023-03-17 03:44:46,949 44k INFO ====> Epoch: 168, cost 674.11 s 2023-03-17 03:47:15,702 44k INFO Train Epoch: 169 [21%] 2023-03-17 03:47:15,702 44k INFO Losses: [2.364196300506592, 2.4302682876586914, 8.182635307312012, 18.021549224853516, 1.2243766784667969], step: 145000, lr: 9.792176792382932e-05 2023-03-17 03:49:43,922 44k INFO Train Epoch: 169 [45%] 2023-03-17 03:49:43,923 44k INFO Losses: [2.501154899597168, 2.1145782470703125, 8.719955444335938, 19.622589111328125, 1.067897081375122], step: 145200, lr: 9.792176792382932e-05 2023-03-17 03:52:12,457 44k INFO Train Epoch: 169 [68%] 2023-03-17 03:52:12,458 44k INFO Losses: [2.581768274307251, 2.164931297302246, 6.590740203857422, 15.0590238571167, 0.9627941250801086], step: 145400, lr: 9.792176792382932e-05 2023-03-17 03:54:40,880 44k INFO Train Epoch: 169 [91%] 2023-03-17 03:54:40,881 44k INFO Losses: [2.533121347427368, 2.481605291366577, 10.853741645812988, 16.999500274658203, 1.2951222658157349], step: 145600, lr: 9.792176792382932e-05 2023-03-17 03:54:44,804 44k INFO Saving model and optimizer state at iteration 169 to ./logs\44k\G_145600.pth 2023-03-17 03:54:49,759 44k INFO Saving model and optimizer state at iteration 169 to ./logs\44k\D_145600.pth 2023-03-17 03:55:50,107 44k INFO ====> Epoch: 169, cost 663.16 s 2023-03-17 03:57:33,084 44k INFO Train Epoch: 170 [14%] 2023-03-17 03:57:33,084 44k INFO Losses: [2.327822208404541, 2.608090877532959, 10.177037239074707, 16.69096565246582, 0.8707289695739746], step: 145800, lr: 9.790952770283884e-05 2023-03-17 04:00:01,572 44k INFO Train Epoch: 170 [37%] 2023-03-17 04:00:01,572 44k INFO Losses: [2.3490688800811768, 2.2802298069000244, 15.611377716064453, 20.93351936340332, 1.1758294105529785], step: 146000, lr: 9.790952770283884e-05 2023-03-17 04:02:29,853 44k INFO Train Epoch: 170 [61%] 2023-03-17 04:02:29,854 44k INFO Losses: [2.680532217025757, 2.0305442810058594, 6.073188304901123, 14.020232200622559, 1.1849318742752075], step: 146200, lr: 9.790952770283884e-05 2023-03-17 04:04:58,300 44k INFO Train Epoch: 170 [84%] 2023-03-17 04:04:58,301 44k INFO Losses: [2.455702304840088, 2.1881103515625, 8.552666664123535, 16.435523986816406, 1.0114518404006958], step: 146400, lr: 9.790952770283884e-05 2023-03-17 04:05:02,426 44k INFO Saving model and optimizer state at iteration 170 to ./logs\44k\G_146400.pth 2023-03-17 04:05:07,459 44k INFO Saving model and optimizer state at iteration 170 to ./logs\44k\D_146400.pth 2023-03-17 04:06:53,481 44k INFO ====> Epoch: 170, cost 663.37 s 2023-03-17 04:07:50,346 44k INFO Train Epoch: 171 [7%] 2023-03-17 04:07:50,346 44k INFO Losses: [2.7567005157470703, 2.0358102321624756, 5.515676498413086, 12.781986236572266, 1.126438856124878], step: 146600, lr: 9.789728901187598e-05 2023-03-17 04:10:18,882 44k INFO Train Epoch: 171 [30%] 2023-03-17 04:10:18,883 44k INFO Losses: [2.4134185314178467, 2.411006450653076, 9.705337524414062, 17.1720027923584, 1.2169947624206543], step: 146800, lr: 9.789728901187598e-05 2023-03-17 04:12:47,207 44k INFO Train Epoch: 171 [53%] 2023-03-17 04:12:47,208 44k INFO Losses: [2.526456117630005, 2.4200174808502197, 16.013336181640625, 22.409595489501953, 1.1010955572128296], step: 147000, lr: 9.789728901187598e-05 2023-03-17 04:15:15,809 44k INFO Train Epoch: 171 [77%] 2023-03-17 04:15:15,810 44k INFO Losses: [2.6113109588623047, 2.057957172393799, 
6.948886871337891, 15.95431137084961, 1.3352605104446411], step: 147200, lr: 9.789728901187598e-05 2023-03-17 04:15:19,763 44k INFO Saving model and optimizer state at iteration 171 to ./logs\44k\G_147200.pth 2023-03-17 04:15:24,654 44k INFO Saving model and optimizer state at iteration 171 to ./logs\44k\D_147200.pth 2023-03-17 04:17:56,067 44k INFO Train Epoch: 171 [100%] 2023-03-17 04:17:56,068 44k INFO Losses: [2.422013521194458, 2.768256187438965, 8.054593086242676, 17.355350494384766, 1.210132360458374], step: 147400, lr: 9.789728901187598e-05 2023-03-17 04:17:57,201 44k INFO ====> Epoch: 171, cost 663.72 s 2023-03-17 04:20:36,488 44k INFO Train Epoch: 172 [23%] 2023-03-17 04:20:36,489 44k INFO Losses: [2.582075357437134, 2.2937467098236084, 7.07363224029541, 14.062685012817383, 1.207210659980774], step: 147600, lr: 9.78850518507495e-05 2023-03-17 04:23:04,790 44k INFO Train Epoch: 172 [46%] 2023-03-17 04:23:04,790 44k INFO Losses: [2.496306896209717, 2.371169090270996, 6.933284282684326, 12.106698989868164, 0.8136837482452393], step: 147800, lr: 9.78850518507495e-05 2023-03-17 04:25:33,350 44k INFO Train Epoch: 172 [69%] 2023-03-17 04:25:33,351 44k INFO Losses: [2.785550117492676, 2.0808534622192383, 7.880479335784912, 13.822983741760254, 1.0237905979156494], step: 148000, lr: 9.78850518507495e-05 2023-03-17 04:25:37,300 44k INFO Saving model and optimizer state at iteration 172 to ./logs\44k\G_148000.pth 2023-03-17 04:25:41,913 44k INFO Saving model and optimizer state at iteration 172 to ./logs\44k\D_148000.pth 2023-03-17 04:28:12,997 44k INFO Train Epoch: 172 [93%] 2023-03-17 04:28:12,998 44k INFO Losses: [2.3825669288635254, 2.607023000717163, 8.927057266235352, 16.781091690063477, 0.8865895867347717], step: 148200, lr: 9.78850518507495e-05 2023-03-17 04:29:00,038 44k INFO ====> Epoch: 172, cost 662.84 s 2023-03-17 04:30:53,492 44k INFO Train Epoch: 173 [16%] 2023-03-17 04:30:53,492 44k INFO Losses: [2.5149362087249756, 2.388176441192627, 10.55745792388916, 17.17435646057129, 0.8697108626365662], step: 148400, lr: 9.787281621926815e-05 2023-03-17 04:33:21,928 44k INFO Train Epoch: 173 [39%] 2023-03-17 04:33:21,928 44k INFO Losses: [2.478977680206299, 2.2506818771362305, 9.217108726501465, 16.144859313964844, 1.2031205892562866], step: 148600, lr: 9.787281621926815e-05 2023-03-17 04:35:50,106 44k INFO Train Epoch: 173 [62%] 2023-03-17 04:35:50,106 44k INFO Losses: [2.764279842376709, 2.1100428104400635, 9.15719985961914, 17.24066925048828, 1.1677459478378296], step: 148800, lr: 9.787281621926815e-05 2023-03-17 04:35:54,160 44k INFO Saving model and optimizer state at iteration 173 to ./logs\44k\G_148800.pth 2023-03-17 04:35:58,597 44k INFO Saving model and optimizer state at iteration 173 to ./logs\44k\D_148800.pth 2023-03-17 04:38:29,873 44k INFO Train Epoch: 173 [85%] 2023-03-17 04:38:29,873 44k INFO Losses: [2.379112482070923, 2.36975359916687, 8.412988662719727, 13.019725799560547, 1.0920727252960205], step: 149000, lr: 9.787281621926815e-05 2023-03-17 04:40:02,579 44k INFO ====> Epoch: 173, cost 662.54 s 2023-03-17 04:41:09,958 44k INFO Train Epoch: 174 [9%] 2023-03-17 04:41:09,959 44k INFO Losses: [2.5267598628997803, 2.127523422241211, 8.676936149597168, 19.98634147644043, 1.0583478212356567], step: 149200, lr: 9.786058211724074e-05 2023-03-17 04:43:38,134 44k INFO Train Epoch: 174 [32%] 2023-03-17 04:43:38,135 44k INFO Losses: [2.524840831756592, 2.096362829208374, 7.767177104949951, 17.57430648803711, 1.0727272033691406], step: 149400, lr: 9.786058211724074e-05 2023-03-17 
04:46:06,133 44k INFO Train Epoch: 174 [55%] 2023-03-17 04:46:06,134 44k INFO Losses: [2.443358898162842, 2.4926700592041016, 10.864221572875977, 18.81585121154785, 1.0038161277770996], step: 149600, lr: 9.786058211724074e-05 2023-03-17 04:46:10,147 44k INFO Saving model and optimizer state at iteration 174 to ./logs\44k\G_149600.pth 2023-03-17 04:46:14,907 44k INFO Saving model and optimizer state at iteration 174 to ./logs\44k\D_149600.pth 2023-03-17 04:48:46,315 44k INFO Train Epoch: 174 [78%] 2023-03-17 04:48:46,316 44k INFO Losses: [2.4831860065460205, 2.291755199432373, 9.610107421875, 17.76339340209961, 1.2181892395019531], step: 149800, lr: 9.786058211724074e-05 2023-03-17 04:51:04,937 44k INFO ====> Epoch: 174, cost 662.36 s 2023-03-17 04:51:26,434 44k INFO Train Epoch: 175 [1%] 2023-03-17 04:51:26,434 44k INFO Losses: [2.316476821899414, 2.4078292846679688, 10.161185264587402, 17.048128128051758, 1.3801575899124146], step: 150000, lr: 9.784834954447608e-05 2023-03-17 04:53:54,741 44k INFO Train Epoch: 175 [25%] 2023-03-17 04:53:54,741 44k INFO Losses: [2.490647315979004, 2.203364372253418, 8.828428268432617, 16.252395629882812, 1.727457046508789], step: 150200, lr: 9.784834954447608e-05 2023-03-17 04:56:22,744 44k INFO Train Epoch: 175 [48%] 2023-03-17 04:56:22,744 44k INFO Losses: [2.6196646690368652, 2.3160810470581055, 5.765528678894043, 16.644458770751953, 1.2453479766845703], step: 150400, lr: 9.784834954447608e-05 2023-03-17 04:56:26,739 44k INFO Saving model and optimizer state at iteration 175 to ./logs\44k\G_150400.pth 2023-03-17 04:56:31,423 44k INFO Saving model and optimizer state at iteration 175 to ./logs\44k\D_150400.pth 2023-03-17 04:59:02,740 44k INFO Train Epoch: 175 [71%] 2023-03-17 04:59:02,740 44k INFO Losses: [2.5605993270874023, 1.9865453243255615, 8.02536678314209, 18.642375946044922, 0.9614807367324829], step: 150600, lr: 9.784834954447608e-05 2023-03-17 05:01:30,895 44k INFO Train Epoch: 175 [94%] 2023-03-17 05:01:30,896 44k INFO Losses: [2.6482958793640137, 2.0347652435302734, 4.976829528808594, 14.364816665649414, 0.8533002734184265], step: 150800, lr: 9.784834954447608e-05 2023-03-17 05:02:07,433 44k INFO ====> Epoch: 175, cost 662.50 s 2023-03-17 05:04:11,080 44k INFO Train Epoch: 176 [17%] 2023-03-17 05:04:11,080 44k INFO Losses: [2.7511351108551025, 1.9432408809661865, 9.959835052490234, 18.560335159301758, 1.3339757919311523], step: 151000, lr: 9.783611850078301e-05 2023-03-17 05:06:39,301 44k INFO Train Epoch: 176 [41%] 2023-03-17 05:06:39,301 44k INFO Losses: [2.47171688079834, 2.4483909606933594, 10.158058166503906, 16.733802795410156, 1.1019073724746704], step: 151200, lr: 9.783611850078301e-05 2023-03-17 05:06:43,292 44k INFO Saving model and optimizer state at iteration 176 to ./logs\44k\G_151200.pth 2023-03-17 05:06:48,015 44k INFO Saving model and optimizer state at iteration 176 to ./logs\44k\D_151200.pth 2023-03-17 05:09:19,270 44k INFO Train Epoch: 176 [64%] 2023-03-17 05:09:19,271 44k INFO Losses: [2.4788334369659424, 2.417572021484375, 11.033356666564941, 17.90091896057129, 1.2928680181503296], step: 151400, lr: 9.783611850078301e-05 2023-03-17 05:11:47,413 44k INFO Train Epoch: 176 [87%] 2023-03-17 05:11:47,413 44k INFO Losses: [2.5044939517974854, 2.466836929321289, 8.511308670043945, 14.652682304382324, 1.2372088432312012], step: 151600, lr: 9.783611850078301e-05 2023-03-17 05:13:09,858 44k INFO ====> Epoch: 176, cost 662.42 s 2023-03-17 05:14:27,566 44k INFO Train Epoch: 177 [10%] 2023-03-17 05:14:27,567 44k INFO Losses: 
[2.373826026916504, 2.329979181289673, 9.79957389831543, 17.43158721923828, 0.9910444021224976], step: 151800, lr: 9.782388898597041e-05 2023-03-17 05:16:55,925 44k INFO Train Epoch: 177 [33%] 2023-03-17 05:16:55,925 44k INFO Losses: [2.549975872039795, 2.617885112762451, 7.252525329589844, 15.827512741088867, 0.8250650763511658], step: 152000, lr: 9.782388898597041e-05 2023-03-17 05:16:59,989 44k INFO Saving model and optimizer state at iteration 177 to ./logs\44k\G_152000.pth 2023-03-17 05:17:04,167 44k INFO Saving model and optimizer state at iteration 177 to ./logs\44k\D_152000.pth 2023-03-17 05:19:35,134 44k INFO Train Epoch: 177 [57%] 2023-03-17 05:19:35,134 44k INFO Losses: [2.7095255851745605, 2.4443631172180176, 13.259649276733398, 18.448991775512695, 0.9803855419158936], step: 152200, lr: 9.782388898597041e-05 2023-03-17 05:22:03,535 44k INFO Train Epoch: 177 [80%] 2023-03-17 05:22:03,536 44k INFO Losses: [2.5159764289855957, 2.3317272663116455, 13.233838081359863, 17.836889266967773, 0.8537470102310181], step: 152400, lr: 9.782388898597041e-05 2023-03-17 05:24:11,899 44k INFO ====> Epoch: 177, cost 662.04 s 2023-03-17 05:24:43,634 44k INFO Train Epoch: 178 [3%] 2023-03-17 05:24:43,634 44k INFO Losses: [2.524367332458496, 2.3306667804718018, 7.064251899719238, 13.210238456726074, 0.9437455534934998], step: 152600, lr: 9.781166099984716e-05 2023-03-17 05:27:12,120 44k INFO Train Epoch: 178 [26%] 2023-03-17 05:27:12,121 44k INFO Losses: [2.4999148845672607, 2.40010929107666, 7.501005172729492, 12.893268585205078, 1.1655915975570679], step: 152800, lr: 9.781166099984716e-05 2023-03-17 05:27:16,096 44k INFO Saving model and optimizer state at iteration 178 to ./logs\44k\G_152800.pth 2023-03-17 05:27:21,133 44k INFO Saving model and optimizer state at iteration 178 to ./logs\44k\D_152800.pth 2023-03-17 05:29:51,875 44k INFO Train Epoch: 178 [49%] 2023-03-17 05:29:51,875 44k INFO Losses: [1.990065336227417, 3.0442261695861816, 11.280783653259277, 14.133079528808594, 0.8407149314880371], step: 153000, lr: 9.781166099984716e-05 2023-03-17 05:32:20,210 44k INFO Train Epoch: 178 [73%] 2023-03-17 05:32:20,211 44k INFO Losses: [2.5386605262756348, 2.147806406021118, 8.762890815734863, 17.796842575073242, 0.8478281497955322], step: 153200, lr: 9.781166099984716e-05 2023-03-17 05:34:48,438 44k INFO Train Epoch: 178 [96%] 2023-03-17 05:34:48,438 44k INFO Losses: [2.359585762023926, 2.107469081878662, 8.712498664855957, 16.28331756591797, 1.2751965522766113], step: 153400, lr: 9.781166099984716e-05 2023-03-17 05:35:14,756 44k INFO ====> Epoch: 178, cost 662.86 s 2023-03-17 05:37:28,955 44k INFO Train Epoch: 179 [19%] 2023-03-17 05:37:28,955 44k INFO Losses: [2.4544169902801514, 2.1253585815429688, 10.429885864257812, 16.955026626586914, 0.9776434898376465], step: 153600, lr: 9.779943454222217e-05 2023-03-17 05:37:32,982 44k INFO Saving model and optimizer state at iteration 179 to ./logs\44k\G_153600.pth 2023-03-17 05:37:38,006 44k INFO Saving model and optimizer state at iteration 179 to ./logs\44k\D_153600.pth 2023-03-17 05:40:09,046 44k INFO Train Epoch: 179 [42%] 2023-03-17 05:40:09,047 44k INFO Losses: [2.483859062194824, 2.084451198577881, 9.369510650634766, 18.19339370727539, 1.1224453449249268], step: 153800, lr: 9.779943454222217e-05 2023-03-17 05:42:37,546 44k INFO Train Epoch: 179 [65%] 2023-03-17 05:42:37,547 44k INFO Losses: [2.6977531909942627, 2.354731798171997, 8.680590629577637, 18.95774269104004, 1.0594661235809326], step: 154000, lr: 9.779943454222217e-05 2023-03-17 
05:45:05,972 44k INFO Train Epoch: 179 [89%] 2023-03-17 05:45:05,972 44k INFO Losses: [2.876570224761963, 2.091716766357422, 5.893319606781006, 14.462336540222168, 1.2357300519943237], step: 154200, lr: 9.779943454222217e-05 2023-03-17 05:46:17,909 44k INFO ====> Epoch: 179, cost 663.15 s 2023-03-17 05:47:45,931 44k INFO Train Epoch: 180 [12%] 2023-03-17 05:47:45,931 44k INFO Losses: [2.4821908473968506, 2.2757577896118164, 7.9934163093566895, 18.056087493896484, 0.9111646413803101], step: 154400, lr: 9.778720961290439e-05 2023-03-17 05:47:49,916 44k INFO Saving model and optimizer state at iteration 180 to ./logs\44k\G_154400.pth 2023-03-17 05:47:54,841 44k INFO Saving model and optimizer state at iteration 180 to ./logs\44k\D_154400.pth 2023-03-17 05:50:26,327 44k INFO Train Epoch: 180 [35%] 2023-03-17 05:50:26,327 44k INFO Losses: [2.4757654666900635, 2.162977695465088, 12.550704956054688, 20.85257339477539, 1.4939967393875122], step: 154600, lr: 9.778720961290439e-05 2023-03-17 05:52:54,578 44k INFO Train Epoch: 180 [58%] 2023-03-17 05:52:54,579 44k INFO Losses: [2.399872303009033, 2.255711078643799, 8.376919746398926, 15.364433288574219, 1.1091541051864624], step: 154800, lr: 9.778720961290439e-05 2023-03-17 05:55:22,779 44k INFO Train Epoch: 180 [81%] 2023-03-17 05:55:22,779 44k INFO Losses: [2.4450411796569824, 2.122586488723755, 10.472504615783691, 18.310335159301758, 0.8930374383926392], step: 155000, lr: 9.778720961290439e-05 2023-03-17 05:57:20,556 44k INFO ====> Epoch: 180, cost 662.65 s 2023-03-17 05:58:02,576 44k INFO Train Epoch: 181 [5%] 2023-03-17 05:58:02,576 44k INFO Losses: [2.5874807834625244, 2.1806986331939697, 9.842427253723145, 16.3331298828125, 0.8174580335617065], step: 155200, lr: 9.777498621170277e-05 2023-03-17 05:58:06,525 44k INFO Saving model and optimizer state at iteration 181 to ./logs\44k\G_155200.pth 2023-03-17 05:58:11,529 44k INFO Saving model and optimizer state at iteration 181 to ./logs\44k\D_155200.pth 2023-03-17 06:00:42,894 44k INFO Train Epoch: 181 [28%] 2023-03-17 06:00:42,894 44k INFO Losses: [2.6012840270996094, 2.1999425888061523, 7.486272811889648, 14.67336368560791, 1.2439916133880615], step: 155400, lr: 9.777498621170277e-05 2023-03-17 06:03:11,333 44k INFO Train Epoch: 181 [51%] 2023-03-17 06:03:11,334 44k INFO Losses: [2.545856237411499, 2.158097505569458, 9.040119171142578, 17.864683151245117, 0.9660888314247131], step: 155600, lr: 9.777498621170277e-05 2023-03-17 06:05:39,874 44k INFO Train Epoch: 181 [74%] 2023-03-17 06:05:39,875 44k INFO Losses: [2.47072172164917, 2.4763195514678955, 12.242470741271973, 20.180837631225586, 1.468169927597046], step: 155800, lr: 9.777498621170277e-05 2023-03-17 06:08:08,344 44k INFO Train Epoch: 181 [97%] 2023-03-17 06:08:08,345 44k INFO Losses: [2.3977630138397217, 2.345214366912842, 9.862263679504395, 17.533727645874023, 0.960277259349823], step: 156000, lr: 9.777498621170277e-05 2023-03-17 06:08:12,369 44k INFO Saving model and optimizer state at iteration 181 to ./logs\44k\G_156000.pth 2023-03-17 06:08:16,594 44k INFO Saving model and optimizer state at iteration 181 to ./logs\44k\D_156000.pth 2023-03-17 06:08:35,523 44k INFO ====> Epoch: 181, cost 674.97 s 2023-03-17 06:10:59,734 44k INFO Train Epoch: 182 [21%] 2023-03-17 06:10:59,735 44k INFO Losses: [2.310131788253784, 2.6508727073669434, 9.105015754699707, 18.836477279663086, 1.088100790977478], step: 156200, lr: 9.776276433842631e-05 2023-03-17 06:13:27,500 44k INFO Train Epoch: 182 [44%] 2023-03-17 06:13:27,501 44k INFO Losses: 
2023-03-17 06:15:55,872 44k INFO Train Epoch: 182 [67%]
2023-03-17 06:15:55,873 44k INFO Losses: [2.4529812335968018, 2.2415454387664795, 9.30099105834961, 16.7056827545166, 1.0293956995010376], step: 156600, lr: 9.776276433842631e-05
2023-03-17 06:18:23,827 44k INFO Train Epoch: 182 [90%]
2023-03-17 06:18:23,827 44k INFO Losses: [2.643660306930542, 2.1427485942840576, 8.57205867767334, 12.310561180114746, 0.7321068048477173], step: 156800, lr: 9.776276433842631e-05
2023-03-17 06:18:27,755 44k INFO Saving model and optimizer state at iteration 182 to ./logs\44k\G_156800.pth
2023-03-17 06:18:31,720 44k INFO Saving model and optimizer state at iteration 182 to ./logs\44k\D_156800.pth
2023-03-17 06:19:36,574 44k INFO ====> Epoch: 182, cost 661.05 s
2023-03-17 06:21:15,050 44k INFO Train Epoch: 183 [13%]
2023-03-17 06:21:15,050 44k INFO Losses: [2.560438632965088, 1.97500741481781, 7.957427978515625, 13.793886184692383, 1.1316478252410889], step: 157000, lr: 9.7750543992884e-05
2023-03-17 06:23:43,094 44k INFO Train Epoch: 183 [37%]
2023-03-17 06:23:43,094 44k INFO Losses: [2.444546699523926, 2.0931308269500732, 7.149639129638672, 13.740453720092773, 1.3505452871322632], step: 157200, lr: 9.7750543992884e-05
2023-03-17 06:26:11,251 44k INFO Train Epoch: 183 [60%]
2023-03-17 06:26:11,251 44k INFO Losses: [2.3723175525665283, 2.108992338180542, 13.744295120239258, 17.03548812866211, 1.4573131799697876], step: 157400, lr: 9.7750543992884e-05
2023-03-17 06:28:39,407 44k INFO Train Epoch: 183 [83%]
2023-03-17 06:28:39,408 44k INFO Losses: [2.346473455429077, 2.3889002799987793, 11.227554321289062, 18.469223022460938, 1.2401717901229858], step: 157600, lr: 9.7750543992884e-05
2023-03-17 06:28:43,360 44k INFO Saving model and optimizer state at iteration 183 to ./logs\44k\G_157600.pth
2023-03-17 06:28:48,017 44k INFO Saving model and optimizer state at iteration 183 to ./logs\44k\D_157600.pth
2023-03-17 06:30:38,510 44k INFO ====> Epoch: 183, cost 661.94 s
2023-03-17 06:31:31,044 44k INFO Train Epoch: 184 [6%]
2023-03-17 06:31:31,045 44k INFO Losses: [2.3038387298583984, 2.3574635982513428, 11.082600593566895, 12.653703689575195, 1.1784908771514893], step: 157800, lr: 9.773832517488488e-05
2023-03-17 06:33:59,230 44k INFO Train Epoch: 184 [29%]
2023-03-17 06:33:59,230 44k INFO Losses: [2.457047462463379, 2.3235692977905273, 9.859566688537598, 18.639406204223633, 1.1555182933807373], step: 158000, lr: 9.773832517488488e-05
2023-03-17 06:36:27,220 44k INFO Train Epoch: 184 [53%]
2023-03-17 06:36:27,220 44k INFO Losses: [2.5135438442230225, 2.304372549057007, 11.394046783447266, 17.720903396606445, 1.1155513525009155], step: 158200, lr: 9.773832517488488e-05
2023-03-17 06:38:55,516 44k INFO Train Epoch: 184 [76%]
2023-03-17 06:38:55,516 44k INFO Losses: [2.4492039680480957, 2.0925087928771973, 12.029546737670898, 20.067325592041016, 0.9155536890029907], step: 158400, lr: 9.773832517488488e-05
2023-03-17 06:38:59,554 44k INFO Saving model and optimizer state at iteration 184 to ./logs\44k\G_158400.pth
2023-03-17 06:39:04,446 44k INFO Saving model and optimizer state at iteration 184 to ./logs\44k\D_158400.pth
2023-03-17 06:41:35,583 44k INFO Train Epoch: 184 [99%]
2023-03-17 06:41:35,583 44k INFO Losses: [2.6044139862060547, 2.1308798789978027, 9.323615074157715, 16.09023094177246, 1.0678629875183105], step: 158600, lr: 9.773832517488488e-05
2023-03-17 06:41:41,193 44k INFO ====> Epoch: 184, cost 662.68 s
2023-03-17 06:44:15,814 44k INFO Train Epoch: 185 [22%]
2023-03-17 06:44:15,814 44k INFO Losses: [2.5555191040039062, 2.293426513671875, 7.908082008361816, 15.728020668029785, 1.0532076358795166], step: 158800, lr: 9.772610788423802e-05
2023-03-17 06:46:43,868 44k INFO Train Epoch: 185 [45%]
2023-03-17 06:46:43,869 44k INFO Losses: [2.496262311935425, 2.1173171997070312, 10.838447570800781, 16.408809661865234, 1.247388482093811], step: 159000, lr: 9.772610788423802e-05
2023-03-17 06:49:12,128 44k INFO Train Epoch: 185 [69%]
2023-03-17 06:49:12,129 44k INFO Losses: [3.038511276245117, 1.882521390914917, 3.4645323753356934, 10.00643253326416, 1.2121785879135132], step: 159200, lr: 9.772610788423802e-05
2023-03-17 06:49:16,230 44k INFO Saving model and optimizer state at iteration 185 to ./logs\44k\G_159200.pth
2023-03-17 06:49:20,689 44k INFO Saving model and optimizer state at iteration 185 to ./logs\44k\D_159200.pth
2023-03-17 06:51:51,594 44k INFO Train Epoch: 185 [92%]
2023-03-17 06:51:51,594 44k INFO Losses: [2.5219171047210693, 2.193493604660034, 8.588115692138672, 15.037384986877441, 1.2440730333328247], step: 159400, lr: 9.772610788423802e-05
2023-03-17 06:52:43,044 44k INFO ====> Epoch: 185, cost 661.85 s
2023-03-17 06:54:31,836 44k INFO Train Epoch: 186 [15%]
2023-03-17 06:54:31,836 44k INFO Losses: [2.631683826446533, 1.9511587619781494, 8.66096019744873, 17.58308982849121, 1.2330199480056763], step: 159600, lr: 9.771389212075249e-05
2023-03-17 06:56:59,901 44k INFO Train Epoch: 186 [38%]
2023-03-17 06:56:59,901 44k INFO Losses: [2.4186573028564453, 2.2586464881896973, 12.870800018310547, 14.0155611038208, 0.6771851778030396], step: 159800, lr: 9.771389212075249e-05
2023-03-17 06:59:27,992 44k INFO Train Epoch: 186 [61%]
2023-03-17 06:59:27,993 44k INFO Losses: [2.4266715049743652, 2.017918109893799, 10.682202339172363, 18.807079315185547, 0.8807221055030823], step: 160000, lr: 9.771389212075249e-05
2023-03-17 06:59:31,958 44k INFO Saving model and optimizer state at iteration 186 to ./logs\44k\G_160000.pth
2023-03-17 06:59:36,423 44k INFO Saving model and optimizer state at iteration 186 to ./logs\44k\D_160000.pth
2023-03-17 07:02:07,788 44k INFO Train Epoch: 186 [85%]
2023-03-17 07:02:07,788 44k INFO Losses: [2.508913040161133, 2.153705358505249, 7.566250801086426, 14.377301216125488, 1.06252920627594], step: 160200, lr: 9.771389212075249e-05
2023-03-17 07:03:44,772 44k INFO ====> Epoch: 186, cost 661.73 s
2023-03-17 07:04:47,776 44k INFO Train Epoch: 187 [8%]
2023-03-17 07:04:47,776 44k INFO Losses: [2.6274523735046387, 2.386012554168701, 9.844226837158203, 17.3111572265625, 1.1927189826965332], step: 160400, lr: 9.77016778842374e-05
2023-03-17 07:07:16,013 44k INFO Train Epoch: 187 [31%]
2023-03-17 07:07:16,014 44k INFO Losses: [2.6072137355804443, 2.286259174346924, 8.228365898132324, 18.3955020904541, 1.0648289918899536], step: 160600, lr: 9.77016778842374e-05
2023-03-17 07:09:44,060 44k INFO Train Epoch: 187 [54%]
2023-03-17 07:09:44,061 44k INFO Losses: [2.4347996711730957, 2.136563301086426, 11.930638313293457, 19.336347579956055, 1.072536587715149], step: 160800, lr: 9.77016778842374e-05
2023-03-17 07:09:48,071 44k INFO Saving model and optimizer state at iteration 187 to ./logs\44k\G_160800.pth
2023-03-17 07:09:52,703 44k INFO Saving model and optimizer state at iteration 187 to ./logs\44k\D_160800.pth
2023-03-17 07:12:23,930 44k INFO Train Epoch: 187 [77%]
2023-03-17 07:12:23,930 44k INFO Losses: [2.4648234844207764, 2.293100357055664, 9.250081062316895, 17.470815658569336, 1.052790641784668], step: 161000, lr: 9.77016778842374e-05
2023-03-17 07:14:46,872 44k INFO ====> Epoch: 187, cost 662.10 s
2023-03-17 07:15:04,003 44k INFO Train Epoch: 188 [1%]
2023-03-17 07:15:04,003 44k INFO Losses: [2.398002862930298, 2.131838083267212, 10.654569625854492, 18.63772964477539, 0.5098158121109009], step: 161200, lr: 9.768946517450186e-05
2023-03-17 07:17:32,310 44k INFO Train Epoch: 188 [24%]
2023-03-17 07:17:32,310 44k INFO Losses: [2.7805404663085938, 2.1511292457580566, 6.319175720214844, 10.435309410095215, 0.9377111792564392], step: 161400, lr: 9.768946517450186e-05
2023-03-17 07:20:08,508 44k INFO Train Epoch: 188 [47%]
2023-03-17 07:20:08,508 44k INFO Losses: [2.6574854850769043, 2.186532497406006, 9.33181095123291, 16.761625289916992, 1.1283817291259766], step: 161600, lr: 9.768946517450186e-05
2023-03-17 07:20:12,462 44k INFO Saving model and optimizer state at iteration 188 to ./logs\44k\G_161600.pth
2023-03-17 07:20:17,429 44k INFO Saving model and optimizer state at iteration 188 to ./logs\44k\D_161600.pth
2023-03-17 07:23:56,685 44k INFO Train Epoch: 188 [70%]
2023-03-17 07:23:56,686 44k INFO Losses: [2.7606468200683594, 2.437741279602051, 9.689196586608887, 20.3095645904541, 1.0235880613327026], step: 161800, lr: 9.768946517450186e-05
2023-03-17 07:27:19,712 44k INFO Train Epoch: 188 [94%]
2023-03-17 07:27:19,713 44k INFO Losses: [2.3320538997650146, 2.464343309402466, 9.619725227355957, 18.110944747924805, 1.11414635181427], step: 162000, lr: 9.768946517450186e-05
2023-03-17 07:28:00,732 44k INFO ====> Epoch: 188, cost 793.86 s
2023-03-17 07:29:59,774 44k INFO Train Epoch: 189 [17%]
2023-03-17 07:29:59,774 44k INFO Losses: [2.5960915088653564, 1.9530669450759888, 12.138261795043945, 16.93499755859375, 0.7881253957748413], step: 162200, lr: 9.767725399135504e-05
2023-03-17 07:32:27,901 44k INFO Train Epoch: 189 [40%]
2023-03-17 07:32:27,902 44k INFO Losses: [2.6089494228363037, 2.0271475315093994, 9.180065155029297, 19.148468017578125, 1.0097097158432007], step: 162400, lr: 9.767725399135504e-05
2023-03-17 07:32:31,905 44k INFO Saving model and optimizer state at iteration 189 to ./logs\44k\G_162400.pth
2023-03-17 07:32:36,974 44k INFO Saving model and optimizer state at iteration 189 to ./logs\44k\D_162400.pth
2023-03-17 07:35:08,070 44k INFO Train Epoch: 189 [63%]
2023-03-17 07:35:08,071 44k INFO Losses: [2.425295114517212, 2.221632242202759, 10.648872375488281, 19.134172439575195, 0.9930372834205627], step: 162600, lr: 9.767725399135504e-05
2023-03-17 07:37:36,486 44k INFO Train Epoch: 189 [86%]
2023-03-17 07:37:36,487 44k INFO Losses: [2.6789138317108154, 2.5082006454467773, 8.635883331298828, 18.6782283782959, 1.0627405643463135], step: 162800, lr: 9.767725399135504e-05
2023-03-17 07:39:03,256 44k INFO ====> Epoch: 189, cost 662.52 s
2023-03-17 07:40:16,518 44k INFO Train Epoch: 190 [10%]
2023-03-17 07:40:16,519 44k INFO Losses: [2.5750889778137207, 2.2774722576141357, 10.381712913513184, 18.843019485473633, 1.1645941734313965], step: 163000, lr: 9.766504433460612e-05
2023-03-17 07:42:44,824 44k INFO Train Epoch: 190 [33%]
2023-03-17 07:42:44,824 44k INFO Losses: [2.420280933380127, 2.7254488468170166, 7.499799728393555, 14.191332817077637, 0.7606639862060547], step: 163200, lr: 9.766504433460612e-05
2023-03-17 07:42:48,762 44k INFO Saving model and optimizer state at iteration 190 to ./logs\44k\G_163200.pth
2023-03-17 07:42:53,218 44k INFO Saving model and optimizer state at iteration 190 to ./logs\44k\D_163200.pth
2023-03-17 07:45:24,138 44k INFO Train Epoch: 190 [56%]
2023-03-17 07:45:24,138 44k INFO Losses: [2.594676971435547, 2.1435606479644775, 8.848711967468262, 16.043333053588867, 1.0694626569747925], step: 163400, lr: 9.766504433460612e-05
2023-03-17 07:47:52,260 44k INFO Train Epoch: 190 [79%]
2023-03-17 07:47:52,261 44k INFO Losses: [3.0120434761047363, 1.715635061264038, 6.725149154663086, 14.351950645446777, 1.2351633310317993], step: 163600, lr: 9.766504433460612e-05
2023-03-17 07:50:04,869 44k INFO ====> Epoch: 190, cost 661.61 s
2023-03-17 07:50:32,353 44k INFO Train Epoch: 191 [2%]
2023-03-17 07:50:32,354 44k INFO Losses: [2.144507884979248, 2.803480625152588, 9.801067352294922, 17.667644500732422, 1.4852038621902466], step: 163800, lr: 9.765283620406429e-05
2023-03-17 07:53:00,800 44k INFO Train Epoch: 191 [26%]
2023-03-17 07:53:00,801 44k INFO Losses: [2.5672261714935303, 2.036823034286499, 12.703105926513672, 19.446910858154297, 1.0415467023849487], step: 164000, lr: 9.765283620406429e-05
2023-03-17 07:53:04,739 44k INFO Saving model and optimizer state at iteration 191 to ./logs\44k\G_164000.pth
2023-03-17 07:53:09,244 44k INFO Saving model and optimizer state at iteration 191 to ./logs\44k\D_164000.pth
2023-03-17 07:55:40,232 44k INFO Train Epoch: 191 [49%]
2023-03-17 07:55:40,232 44k INFO Losses: [2.5076684951782227, 1.973518967628479, 10.841337203979492, 17.651342391967773, 0.9550176858901978], step: 164200, lr: 9.765283620406429e-05
2023-03-17 07:58:08,487 44k INFO Train Epoch: 191 [72%]
2023-03-17 07:58:08,487 44k INFO Losses: [2.79302978515625, 2.283331871032715, 9.096528053283691, 16.270946502685547, 1.2117018699645996], step: 164400, lr: 9.765283620406429e-05
2023-03-17 08:00:36,594 44k INFO Train Epoch: 191 [95%]
2023-03-17 08:00:36,594 44k INFO Losses: [2.5434906482696533, 1.8959403038024902, 8.860926628112793, 19.149446487426758, 1.2027291059494019], step: 164600, lr: 9.765283620406429e-05
2023-03-17 08:01:07,358 44k INFO ====> Epoch: 191, cost 662.49 s
2023-03-17 08:03:16,939 44k INFO Train Epoch: 192 [18%]
2023-03-17 08:03:16,939 44k INFO Losses: [2.4140048027038574, 2.361828327178955, 7.745521068572998, 16.391355514526367, 1.1187617778778076], step: 164800, lr: 9.764062959953878e-05
2023-03-17 08:03:20,906 44k INFO Saving model and optimizer state at iteration 192 to ./logs\44k\G_164800.pth
2023-03-17 08:03:25,439 44k INFO Saving model and optimizer state at iteration 192 to ./logs\44k\D_164800.pth
2023-03-17 08:05:56,537 44k INFO Train Epoch: 192 [42%]
2023-03-17 08:05:56,537 44k INFO Losses: [2.524367094039917, 2.143397569656372, 8.94959831237793, 19.082700729370117, 1.0176409482955933], step: 165000, lr: 9.764062959953878e-05
2023-03-17 08:08:24,797 44k INFO Train Epoch: 192 [65%]
2023-03-17 08:08:24,798 44k INFO Losses: [2.533832311630249, 2.152256488800049, 10.429092407226562, 15.597479820251465, 0.9361334443092346], step: 165200, lr: 9.764062959953878e-05
2023-03-17 08:10:53,185 44k INFO Train Epoch: 192 [88%]
2023-03-17 08:10:53,186 44k INFO Losses: [2.6477620601654053, 2.0403356552124023, 8.148681640625, 15.917501449584961, 1.1515703201293945], step: 165400, lr: 9.764062959953878e-05
2023-03-17 08:12:09,703 44k INFO ====> Epoch: 192, cost 662.34 s
2023-03-17 08:13:33,402 44k INFO Train Epoch: 193 [11%]
2023-03-17 08:13:33,402 44k INFO Losses: [2.480067253112793, 2.3576276302337646, 11.013958930969238, 17.281869888305664, 1.2817589044570923], step: 165600, lr: 9.762842452083883e-05
2023-03-17 08:13:37,415 44k INFO Saving model and optimizer state at iteration 193 to ./logs\44k\G_165600.pth
2023-03-17 08:13:42,315 44k INFO Saving model and optimizer state at iteration 193 to ./logs\44k\D_165600.pth
2023-03-17 08:16:13,628 44k INFO Train Epoch: 193 [34%]
2023-03-17 08:16:13,628 44k INFO Losses: [2.6615967750549316, 2.3636748790740967, 8.694622993469238, 15.606887817382812, 1.063123106956482], step: 165800, lr: 9.762842452083883e-05
2023-03-17 08:18:41,778 44k INFO Train Epoch: 193 [58%]
2023-03-17 08:18:41,779 44k INFO Losses: [2.5017693042755127, 2.2038495540618896, 5.835836410522461, 10.891594886779785, 1.0728026628494263], step: 166000, lr: 9.762842452083883e-05
2023-03-17 08:21:09,946 44k INFO Train Epoch: 193 [81%]
2023-03-17 08:21:09,946 44k INFO Losses: [2.557206630706787, 2.0195369720458984, 8.594018936157227, 17.9444580078125, 1.0938029289245605], step: 166200, lr: 9.762842452083883e-05
2023-03-17 08:23:12,218 44k INFO ====> Epoch: 193, cost 662.52 s
2023-03-17 08:23:49,873 44k INFO Train Epoch: 194 [4%]
2023-03-17 08:23:49,873 44k INFO Losses: [2.633340358734131, 2.3438849449157715, 9.24264144897461, 18.009733200073242, 1.1712721586227417], step: 166400, lr: 9.761622096777372e-05
2023-03-17 08:23:53,882 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\G_166400.pth
2023-03-17 08:23:58,859 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\D_166400.pth
2023-03-17 08:26:29,799 44k INFO Train Epoch: 194 [27%]
2023-03-17 08:26:29,800 44k INFO Losses: [2.5408284664154053, 2.6331465244293213, 7.800838947296143, 18.58409881591797, 1.1200098991394043], step: 166600, lr: 9.761622096777372e-05
2023-03-17 08:28:58,062 44k INFO Train Epoch: 194 [50%]
2023-03-17 08:28:58,063 44k INFO Losses: [2.4628701210021973, 2.1961326599121094, 8.397313117980957, 17.80116844177246, 0.9893330335617065], step: 166800, lr: 9.761622096777372e-05
2023-03-17 08:31:26,539 44k INFO Train Epoch: 194 [74%]
2023-03-17 08:31:26,539 44k INFO Losses: [2.514503002166748, 2.0370850563049316, 9.701926231384277, 16.796152114868164, 1.022076964378357], step: 167000, lr: 9.761622096777372e-05
2023-03-17 08:33:54,866 44k INFO Train Epoch: 194 [97%]
2023-03-17 08:33:54,866 44k INFO Losses: [2.4498300552368164, 2.395138740539551, 9.104439735412598, 17.474746704101562, 1.2711936235427856], step: 167200, lr: 9.761622096777372e-05
2023-03-17 08:33:58,833 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\G_167200.pth
2023-03-17 08:34:03,387 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\D_167200.pth
2023-03-17 08:34:26,764 44k INFO ====> Epoch: 194, cost 674.55 s
2023-03-17 08:36:46,563 44k INFO Train Epoch: 195 [20%]
2023-03-17 08:36:46,564 44k INFO Losses: [2.6454954147338867, 1.976471185684204, 10.262067794799805, 19.72707176208496, 1.2827997207641602], step: 167400, lr: 9.760401894015275e-05
2023-03-17 08:39:14,473 44k INFO Train Epoch: 195 [43%]
2023-03-17 08:39:14,473 44k INFO Losses: [2.5744073390960693, 2.136284828186035, 9.853887557983398, 15.574749946594238, 1.081174373626709], step: 167600, lr: 9.760401894015275e-05
2023-03-17 08:41:42,667 44k INFO Train Epoch: 195 [66%]
2023-03-17 08:41:42,667 44k INFO Losses: [2.377514600753784, 2.171715259552002, 9.387972831726074, 15.909814834594727, 1.2523181438446045], step: 167800, lr: 9.760401894015275e-05
2023-03-17 08:44:10,826 44k INFO Train Epoch: 195 [90%]
2023-03-17 08:44:10,827 44k INFO Losses: [2.435706615447998, 2.443080425262451, 9.737548828125, 20.17072868347168, 1.3082952499389648], step: 168000, lr: 9.760401894015275e-05
2023-03-17 08:44:14,823 44k INFO Saving model and optimizer state at iteration 195 to ./logs\44k\G_168000.pth
2023-03-17 08:44:19,669 44k INFO Saving model and optimizer state at iteration 195 to ./logs\44k\D_168000.pth
2023-03-17 08:45:28,603 44k INFO ====> Epoch: 195, cost 661.84 s
2023-03-17 08:47:02,484 44k INFO Train Epoch: 196 [13%]
2023-03-17 08:47:02,484 44k INFO Losses: [2.57918381690979, 2.209052324295044, 7.466860294342041, 14.023988723754883, 1.0219624042510986], step: 168200, lr: 9.759181843778522e-05
2023-03-17 08:49:30,529 44k INFO Train Epoch: 196 [36%]
2023-03-17 08:49:30,529 44k INFO Losses: [2.7127602100372314, 1.9558050632476807, 7.962477684020996, 19.259891510009766, 0.7208997011184692], step: 168400, lr: 9.759181843778522e-05
2023-03-17 08:51:58,569 44k INFO Train Epoch: 196 [59%]
2023-03-17 08:51:58,581 44k INFO Losses: [2.579914093017578, 2.140896797180176, 8.994905471801758, 13.217306137084961, 0.9272205829620361], step: 168600, lr: 9.759181843778522e-05
2023-03-17 08:54:26,667 44k INFO Train Epoch: 196 [82%]
2023-03-17 08:54:26,668 44k INFO Losses: [2.558849811553955, 2.1900415420532227, 12.606536865234375, 18.619491577148438, 1.495125412940979], step: 168800, lr: 9.759181843778522e-05
2023-03-17 08:54:30,731 44k INFO Saving model and optimizer state at iteration 196 to ./logs\44k\G_168800.pth
2023-03-17 08:54:35,234 44k INFO Saving model and optimizer state at iteration 196 to ./logs\44k\D_168800.pth
2023-03-17 08:56:30,269 44k INFO ====> Epoch: 196, cost 661.67 s
2023-03-17 08:57:18,392 44k INFO Train Epoch: 197 [6%]
2023-03-17 08:57:18,392 44k INFO Losses: [2.4595494270324707, 2.301403522491455, 7.652793884277344, 18.690139770507812, 1.168496012687683], step: 169000, lr: 9.757961946048049e-05
2023-03-17 08:59:46,517 44k INFO Train Epoch: 197 [29%]
2023-03-17 08:59:46,517 44k INFO Losses: [2.4135782718658447, 2.466648578643799, 11.20015811920166, 15.818689346313477, 1.1201403141021729], step: 169200, lr: 9.757961946048049e-05
2023-03-17 09:02:14,428 44k INFO Train Epoch: 197 [52%]
2023-03-17 09:02:14,428 44k INFO Losses: [2.4540581703186035, 1.977914810180664, 13.488401412963867, 19.535581588745117, 1.073477864265442], step: 169400, lr: 9.757961946048049e-05
2023-03-17 09:04:42,723 44k INFO Train Epoch: 197 [75%]
2023-03-17 09:04:42,723 44k INFO Losses: [2.700099468231201, 1.792422890663147, 4.268450736999512, 10.347494125366211, 0.7359927892684937], step: 169600, lr: 9.757961946048049e-05
2023-03-17 09:04:46,775 44k INFO Saving model and optimizer state at iteration 197 to ./logs\44k\G_169600.pth
2023-03-17 09:04:51,742 44k INFO Saving model and optimizer state at iteration 197 to ./logs\44k\D_169600.pth
2023-03-17 09:07:22,773 44k INFO Train Epoch: 197 [98%]
2023-03-17 09:07:22,774 44k INFO Losses: [2.469176769256592, 2.191925287246704, 8.81887435913086, 16.948305130004883, 1.0945781469345093], step: 169800, lr: 9.757961946048049e-05
2023-03-17 09:07:32,815 44k INFO ====> Epoch: 197, cost 662.55 s
2023-03-17 09:10:02,842 44k INFO Train Epoch: 198 [22%]
2023-03-17 09:10:02,842 44k INFO Losses: [2.47700834274292, 2.4732918739318848, 10.610676765441895, 17.721647262573242, 1.2591755390167236], step: 170000, lr: 9.756742200804793e-05
2023-03-17 09:12:30,816 44k INFO Train Epoch: 198 [45%]
2023-03-17 09:12:30,817 44k INFO Losses: [2.1951756477355957, 3.0940463542938232, 6.806368827819824, 10.959386825561523, 1.6940101385116577], step: 170200, lr: 9.756742200804793e-05
2023-03-17 09:14:58,933 44k INFO Train Epoch: 198 [68%]
2023-03-17 09:14:58,934 44k INFO Losses: [2.750842571258545, 1.9666810035705566, 8.837784767150879, 18.411151885986328, 0.9965085983276367], step: 170400, lr: 9.756742200804793e-05
2023-03-17 09:15:03,050 44k INFO Saving model and optimizer state at iteration 198 to ./logs\44k\G_170400.pth
2023-03-17 09:15:07,272 44k INFO Saving model and optimizer state at iteration 198 to ./logs\44k\D_170400.pth
2023-03-17 09:17:38,114 44k INFO Train Epoch: 198 [91%]
2023-03-17 09:17:38,114 44k INFO Losses: [2.6109299659729004, 2.346360921859741, 12.227387428283691, 18.949474334716797, 1.2989249229431152], step: 170600, lr: 9.756742200804793e-05
2023-03-17 09:18:33,999 44k INFO ====> Epoch: 198, cost 661.18 s
2023-03-17 09:20:18,375 44k INFO Train Epoch: 199 [14%]
2023-03-17 09:20:18,376 44k INFO Losses: [2.6574270725250244, 2.4879305362701416, 9.534896850585938, 17.42763900756836, 0.996246874332428], step: 170800, lr: 9.755522608029692e-05
2023-03-17 09:22:46,505 44k INFO Train Epoch: 199 [38%]
2023-03-17 09:22:46,506 44k INFO Losses: [2.81965970993042, 2.132734537124634, 7.434218406677246, 16.879657745361328, 1.1689292192459106], step: 171000, lr: 9.755522608029692e-05
2023-03-17 09:25:14,623 44k INFO Train Epoch: 199 [61%]
2023-03-17 09:25:14,624 44k INFO Losses: [2.173938274383545, 2.209927558898926, 12.461265563964844, 19.20315170288086, 1.070429801940918], step: 171200, lr: 9.755522608029692e-05
2023-03-17 09:25:18,593 44k INFO Saving model and optimizer state at iteration 199 to ./logs\44k\G_171200.pth
2023-03-17 09:25:22,982 44k INFO Saving model and optimizer state at iteration 199 to ./logs\44k\D_171200.pth
2023-03-17 09:27:54,027 44k INFO Train Epoch: 199 [84%]
2023-03-17 09:27:54,028 44k INFO Losses: [2.854801654815674, 2.0860280990600586, 5.961915969848633, 15.200457572937012, 1.1349196434020996], step: 171400, lr: 9.755522608029692e-05
2023-03-17 09:29:35,514 44k INFO ====> Epoch: 199, cost 661.52 s
2023-03-17 09:30:34,032 44k INFO Train Epoch: 200 [7%]
2023-03-17 09:30:34,032 44k INFO Losses: [2.7460217475891113, 2.0681543350219727, 9.035928726196289, 16.298255920410156, 1.0604963302612305], step: 171600, lr: 9.754303167703689e-05
2023-03-17 09:33:02,198 44k INFO Train Epoch: 200 [30%]
2023-03-17 09:33:02,198 44k INFO Losses: [2.453409194946289, 2.4092094898223877, 8.706682205200195, 14.529901504516602, 1.0705111026763916], step: 171800, lr: 9.754303167703689e-05
2023-03-17 09:35:30,181 44k INFO Train Epoch: 200 [54%]
2023-03-17 09:35:30,181 44k INFO Losses: [2.7389702796936035, 2.0051207542419434, 6.066150665283203, 13.745101928710938, 0.9693252444267273], step: 172000, lr: 9.754303167703689e-05
2023-03-17 09:35:34,171 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\G_172000.pth
2023-03-17 09:35:38,915 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\D_172000.pth
2023-03-17 09:38:10,245 44k INFO Train Epoch: 200 [77%]
2023-03-17 09:38:10,246 44k INFO Losses: [2.7999207973480225, 1.8012341260910034, 8.389494895935059, 17.32674217224121, 0.8758504986763], step: 172200, lr: 9.754303167703689e-05
2023-03-17 09:40:37,734 44k INFO ====> Epoch: 200, cost 662.22 s
2023-03-17 09:40:50,442 44k INFO Train Epoch: 201 [0%]
2023-03-17 09:40:50,442 44k INFO Losses: [2.596759557723999, 2.1207494735717773, 9.404854774475098, 18.077627182006836, 0.9853483438491821], step: 172400, lr: 9.753083879807726e-05
2023-03-17 09:43:18,729 44k INFO Train Epoch: 201 [23%]
2023-03-17 09:43:18,730 44k INFO Losses: [2.615448236465454, 2.4388585090637207, 7.953864574432373, 13.938075065612793, 1.2013497352600098], step: 172600, lr: 9.753083879807726e-05
2023-03-17 09:45:46,823 44k INFO Train Epoch: 201 [46%]
2023-03-17 09:45:46,824 44k INFO Losses: [2.4347870349884033, 2.105323314666748, 9.152451515197754, 15.659737586975098, 1.4442061185836792], step: 172800, lr: 9.753083879807726e-05
2023-03-17 09:45:50,822 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\G_172800.pth
2023-03-17 09:45:55,639 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\D_172800.pth
2023-03-17 09:48:26,988 44k INFO Train Epoch: 201 [70%]
2023-03-17 09:48:26,988 44k INFO Losses: [2.7913033962249756, 2.1060898303985596, 11.25236701965332, 19.57780647277832, 1.277811884880066], step: 173000, lr: 9.753083879807726e-05
2023-03-17 09:50:55,133 44k INFO Train Epoch: 201 [93%]
2023-03-17 09:50:55,133 44k INFO Losses: [2.6250998973846436, 2.1974072456359863, 8.369928359985352, 16.472946166992188, 0.9878252148628235], step: 173200, lr: 9.753083879807726e-05
2023-03-17 09:51:40,682 44k INFO ====> Epoch: 201, cost 662.95 s
2023-03-17 09:53:35,472 44k INFO Train Epoch: 202 [16%]
2023-03-17 09:53:35,472 44k INFO Losses: [2.4998514652252197, 2.417166233062744, 6.3212785720825195, 14.049249649047852, 1.0236608982086182], step: 173400, lr: 9.75186474432275e-05
2023-03-17 09:56:03,589 44k INFO Train Epoch: 202 [39%]
2023-03-17 09:56:03,589 44k INFO Losses: [2.6413731575012207, 2.0844497680664062, 6.916252136230469, 11.174819946289062, 1.3679370880126953], step: 173600, lr: 9.75186474432275e-05
2023-03-17 09:56:07,671 44k INFO Saving model and optimizer state at iteration 202 to ./logs\44k\G_173600.pth
2023-03-17 09:56:12,434 44k INFO Saving model and optimizer state at iteration 202 to ./logs\44k\D_173600.pth
2023-03-17 09:58:43,497 44k INFO Train Epoch: 202 [62%]
2023-03-17 09:58:43,498 44k INFO Losses: [2.555248975753784, 2.2752015590667725, 6.9692230224609375, 14.406518936157227, 0.8741418719291687], step: 173800, lr: 9.75186474432275e-05
2023-03-17 10:01:11,665 44k INFO Train Epoch: 202 [86%]
2023-03-17 10:01:11,666 44k INFO Losses: [2.2684645652770996, 2.2311954498291016, 10.817280769348145, 18.21678352355957, 1.0403252840042114], step: 174000, lr: 9.75186474432275e-05
2023-03-17 10:02:42,892 44k INFO ====> Epoch: 202, cost 662.21 s
2023-03-17 10:03:51,685 44k INFO Train Epoch: 203 [9%]
2023-03-17 10:03:51,686 44k INFO Losses: [2.5615012645721436, 2.211942195892334, 8.056472778320312, 17.631420135498047, 1.182600736618042], step: 174200, lr: 9.750645761229709e-05
2023-03-17 10:06:19,847 44k INFO Train Epoch: 203 [32%]
2023-03-17 10:06:19,847 44k INFO Losses: [2.607423782348633, 2.188082218170166, 10.9568452835083, 20.655040740966797, 1.0519952774047852], step: 174400, lr: 9.750645761229709e-05
2023-03-17 10:06:23,898 44k INFO Saving model and optimizer state at iteration 203 to ./logs\44k\G_174400.pth
2023-03-17 10:06:28,866 44k INFO Saving model and optimizer state at iteration 203 to ./logs\44k\D_174400.pth
2023-03-17 10:08:59,857 44k INFO Train Epoch: 203 [55%]
2023-03-17 10:08:59,857 44k INFO Losses: [3.1685585975646973, 1.8343582153320312, 4.069422245025635, 10.40465259552002, 1.0731842517852783], step: 174600, lr: 9.750645761229709e-05
2023-03-17 10:11:28,085 44k INFO Train Epoch: 203 [78%]
2023-03-17 10:11:28,086 44k INFO Losses: [2.5464415550231934, 2.0372121334075928, 7.425950050354004, 16.727643966674805, 0.8232383131980896], step: 174800, lr: 9.750645761229709e-05
2023-03-17 10:13:45,318 44k INFO ====> Epoch: 203, cost 662.43 s
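
The learning rate above drops by a fixed factor once per epoch: dividing the epoch-178 value by the epoch-177 value gives 9.781166099984716e-05 / 9.782388898597041e-05 ≈ 0.999875. A minimal Python sketch that reproduces the logged schedule, assuming it is a plain exponential decay from a base rate of 1e-4 (both constants are inferred from the logged values, not read from the training config):

# Sketch: per-epoch exponential learning-rate decay implied by the log.
# Assumption: lr(epoch) = base_lr * decay ** (epoch - 1).
base_lr = 1e-4
decay = 9.781166099984716e-05 / 9.782388898597041e-05  # ~0.999875

def lr_at(epoch: int) -> float:
    return base_lr * decay ** (epoch - 1)

print(lr_at(177))  # ~9.78239e-05, matching the epoch-177 records above
print(lr_at(233))  # ~9.71415e-05, matching the epoch-233 records at the end
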
2023-03-17 10:14:13,902 44k INFO Train Epoch: 204 [2%]
2023-03-17 10:14:13,902 44k INFO Losses: [2.5360448360443115, 2.3464231491088867, 8.739291191101074, 17.8365421295166, 1.144738793373108], step: 175000, lr: 9.749426930509556e-05
2023-03-17 10:16:42,323 44k INFO Train Epoch: 204 [25%]
2023-03-17 10:16:42,323 44k INFO Losses: [2.6097846031188965, 2.2107274532318115, 4.630366325378418, 13.608458518981934, 0.9448657035827637], step: 175200, lr: 9.749426930509556e-05
2023-03-17 10:16:46,521 44k INFO Saving model and optimizer state at iteration 204 to ./logs\44k\G_175200.pth
2023-03-17 10:16:50,326 44k INFO Saving model and optimizer state at iteration 204 to ./logs\44k\D_175200.pth
2023-03-17 10:19:21,182 44k INFO Train Epoch: 204 [48%]
2023-03-17 10:19:21,183 44k INFO Losses: [2.5362911224365234, 1.9789808988571167, 9.342599868774414, 14.89836311340332, 0.717599093914032], step: 175400, lr: 9.749426930509556e-05
2023-03-17 10:21:49,508 44k INFO Train Epoch: 204 [71%]
2023-03-17 10:21:49,508 44k INFO Losses: [2.82528018951416, 1.950885534286499, 4.445422649383545, 13.994622230529785, 0.8465632796287537], step: 175600, lr: 9.749426930509556e-05
2023-03-17 10:24:17,627 44k INFO Train Epoch: 204 [94%]
2023-03-17 10:24:17,627 44k INFO Losses: [2.7166213989257812, 2.220306396484375, 4.663797378540039, 15.217893600463867, 1.1904174089431763], step: 175800, lr: 9.749426930509556e-05
2023-03-17 10:24:52,770 44k INFO ====> Epoch: 204, cost 667.45 s
2023-03-17 10:26:57,933 44k INFO Train Epoch: 205 [18%]
2023-03-17 10:26:57,934 44k INFO Losses: [2.764173984527588, 1.7019580602645874, 6.473657608032227, 10.570134162902832, 1.304600715637207], step: 176000, lr: 9.748208252143241e-05
2023-03-17 10:27:01,906 44k INFO Saving model and optimizer state at iteration 205 to ./logs\44k\G_176000.pth
2023-03-17 10:27:06,743 44k INFO Saving model and optimizer state at iteration 205 to ./logs\44k\D_176000.pth
2023-03-17 10:29:37,844 44k INFO Train Epoch: 205 [41%]
2023-03-17 10:29:37,844 44k INFO Losses: [2.5911731719970703, 2.133925437927246, 11.730096817016602, 17.595157623291016, 1.1066572666168213], step: 176200, lr: 9.748208252143241e-05
2023-03-17 10:32:05,999 44k INFO Train Epoch: 205 [64%]
2023-03-17 10:32:05,999 44k INFO Losses: [2.537604570388794, 2.1031599044799805, 9.911170959472656, 16.117441177368164, 1.1631410121917725], step: 176400, lr: 9.748208252143241e-05
2023-03-17 10:34:34,176 44k INFO Train Epoch: 205 [87%]
2023-03-17 10:34:34,177 44k INFO Losses: [2.7191948890686035, 1.9823147058486938, 9.727699279785156, 15.545928955078125, 0.7031592130661011], step: 176600, lr: 9.748208252143241e-05
2023-03-17 10:35:54,979 44k INFO ====> Epoch: 205, cost 662.21 s
2023-03-17 10:37:14,361 44k INFO Train Epoch: 206 [10%]
2023-03-17 10:37:14,361 44k INFO Losses: [2.792447805404663, 2.1095123291015625, 8.448044776916504, 13.067750930786133, 0.8707250356674194], step: 176800, lr: 9.746989726111722e-05
2023-03-17 10:37:18,319 44k INFO Saving model and optimizer state at iteration 206 to ./logs\44k\G_176800.pth
2023-03-17 10:37:22,398 44k INFO Saving model and optimizer state at iteration 206 to ./logs\44k\D_176800.pth
2023-03-17 10:39:53,465 44k INFO Train Epoch: 206 [34%]
2023-03-17 10:39:53,466 44k INFO Losses: [2.7468769550323486, 1.8027114868164062, 9.005203247070312, 16.677953720092773, 0.83258056640625], step: 177000, lr: 9.746989726111722e-05
2023-03-17 10:42:21,503 44k INFO Train Epoch: 206 [57%]
2023-03-17 10:42:21,504 44k INFO Losses: [2.5056653022766113, 2.149698257446289, 7.11444616317749, 15.501490592956543, 0.957089900970459], step: 177200, lr: 9.746989726111722e-05
2023-03-17 10:44:49,836 44k INFO Train Epoch: 206 [80%]
2023-03-17 10:44:49,836 44k INFO Losses: [2.5026164054870605, 2.158889055252075, 6.843050479888916, 14.034244537353516, 0.7643479108810425], step: 177400, lr: 9.746989726111722e-05
2023-03-17 10:46:56,531 44k INFO ====> Epoch: 206, cost 661.55 s
2023-03-17 10:47:29,884 44k INFO Train Epoch: 207 [3%]
2023-03-17 10:47:29,884 44k INFO Losses: [2.4140219688415527, 2.2386295795440674, 9.40619945526123, 17.539182662963867, 0.8784763813018799], step: 177600, lr: 9.745771352395957e-05
2023-03-17 10:47:33,909 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\G_177600.pth
2023-03-17 10:47:38,904 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\D_177600.pth
2023-03-17 10:50:10,277 44k INFO Train Epoch: 207 [26%]
2023-03-17 10:50:10,277 44k INFO Losses: [2.583224296569824, 2.827446937561035, 7.077727317810059, 12.71306037902832, 0.8684049248695374], step: 177800, lr: 9.745771352395957e-05
2023-03-17 10:52:38,245 44k INFO Train Epoch: 207 [50%]
2023-03-17 10:52:38,245 44k INFO Losses: [2.5082831382751465, 2.2843453884124756, 8.707429885864258, 14.647415161132812, 1.1417350769042969], step: 178000, lr: 9.745771352395957e-05
2023-03-17 10:55:06,688 44k INFO Train Epoch: 207 [73%]
2023-03-17 10:55:06,688 44k INFO Losses: [2.4839611053466797, 2.2821407318115234, 7.812560081481934, 18.880170822143555, 0.9914988279342651], step: 178200, lr: 9.745771352395957e-05
2023-03-17 10:57:34,705 44k INFO Train Epoch: 207 [96%]
2023-03-17 10:57:34,705 44k INFO Losses: [2.8633673191070557, 1.783204436302185, 6.052620887756348, 11.623414039611816, 1.3063948154449463], step: 178400, lr: 9.745771352395957e-05
2023-03-17 10:57:38,747 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\G_178400.pth
2023-03-17 10:57:43,377 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\D_178400.pth
2023-03-17 10:58:11,271 44k INFO ====> Epoch: 207, cost 674.74 s
2023-03-17 11:00:26,868 44k INFO Train Epoch: 208 [19%]
2023-03-17 11:00:26,868 44k INFO Losses: [2.577471971511841, 2.276533603668213, 8.219725608825684, 15.711836814880371, 1.2218472957611084], step: 178600, lr: 9.744553130976908e-05
2023-03-17 11:02:54,949 44k INFO Train Epoch: 208 [42%]
2023-03-17 11:02:54,949 44k INFO Losses: [2.6859867572784424, 2.0573649406433105, 6.594308376312256, 15.362957954406738, 0.9622623920440674], step: 178800, lr: 9.744553130976908e-05
2023-03-17 11:05:23,312 44k INFO Train Epoch: 208 [66%]
2023-03-17 11:05:23,313 44k INFO Losses: [2.611052989959717, 2.0365684032440186, 10.141921043395996, 18.834077835083008, 0.6628416776657104], step: 179000, lr: 9.744553130976908e-05
2023-03-17 11:07:51,618 44k INFO Train Epoch: 208 [89%]
2023-03-17 11:07:51,618 44k INFO Losses: [2.750168561935425, 1.8721081018447876, 5.399033546447754, 15.745501518249512, 1.3914042711257935], step: 179200, lr: 9.744553130976908e-05
2023-03-17 11:07:55,669 44k INFO Saving model and optimizer state at iteration 208 to ./logs\44k\G_179200.pth
2023-03-17 11:08:00,513 44k INFO Saving model and optimizer state at iteration 208 to ./logs\44k\D_179200.pth
2023-03-17 11:09:14,012 44k INFO ====> Epoch: 208, cost 662.74 s
2023-03-17 11:10:43,632 44k INFO Train Epoch: 209 [12%]
2023-03-17 11:10:43,632 44k INFO Losses: [2.2802538871765137, 2.439211130142212, 14.24564266204834, 19.527437210083008, 1.2453557252883911], step: 179400, lr: 9.743335061835535e-05
2023-03-17 11:13:11,985 44k INFO Train Epoch: 209 [35%]
2023-03-17 11:13:11,985 44k INFO Losses: [2.603546619415283, 2.0905964374542236, 5.596753120422363, 15.311141014099121, 1.4274730682373047], step: 179600, lr: 9.743335061835535e-05
2023-03-17 11:15:40,082 44k INFO Train Epoch: 209 [58%]
2023-03-17 11:15:40,082 44k INFO Losses: [2.649585723876953, 2.3273868560791016, 10.6399564743042, 19.96438980102539, 1.0884876251220703], step: 179800, lr: 9.743335061835535e-05
2023-03-17 11:18:08,420 44k INFO Train Epoch: 209 [82%]
2023-03-17 11:18:08,420 44k INFO Losses: [2.63979172706604, 2.2789690494537354, 9.143485069274902, 19.341110229492188, 1.0809619426727295], step: 180000, lr: 9.743335061835535e-05
2023-03-17 11:18:12,480 44k INFO Saving model and optimizer state at iteration 209 to ./logs\44k\G_180000.pth
2023-03-17 11:18:17,612 44k INFO Saving model and optimizer state at iteration 209 to ./logs\44k\D_180000.pth
2023-03-17 11:20:16,873 44k INFO ====> Epoch: 209, cost 662.86 s
2023-03-17 11:21:00,590 44k INFO Train Epoch: 210 [5%]
2023-03-17 11:21:00,591 44k INFO Losses: [2.47117280960083, 2.1864118576049805, 11.310925483703613, 20.054471969604492, 1.0518211126327515], step: 180200, lr: 9.742117144952805e-05
2023-03-17 11:23:29,155 44k INFO Train Epoch: 210 [28%]
2023-03-17 11:23:29,155 44k INFO Losses: [2.766324758529663, 2.135094404220581, 8.51941967010498, 14.730029106140137, 1.0966026782989502], step: 180400, lr: 9.742117144952805e-05
2023-03-17 11:25:57,549 44k INFO Train Epoch: 210 [51%]
2023-03-17 11:25:57,550 44k INFO Losses: [2.430985450744629, 2.1862454414367676, 11.431011199951172, 19.463998794555664, 0.8494487404823303], step: 180600, lr: 9.742117144952805e-05
2023-03-17 11:28:26,005 44k INFO Train Epoch: 210 [74%]
2023-03-17 11:28:26,006 44k INFO Losses: [2.5233566761016846, 2.372490882873535, 9.429157257080078, 16.993858337402344, 1.0475859642028809], step: 180800, lr: 9.742117144952805e-05
2023-03-17 11:28:30,008 44k INFO Saving model and optimizer state at iteration 210 to ./logs\44k\G_180800.pth
2023-03-17 11:28:40,752 44k INFO Saving model and optimizer state at iteration 210 to ./logs\44k\D_180800.pth
2023-03-17 11:31:14,569 44k INFO Train Epoch: 210 [98%]
2023-03-17 11:31:14,570 44k INFO Losses: [2.554152488708496, 2.101454019546509, 5.078801155090332, 8.058226585388184, 0.8225134015083313], step: 181000, lr: 9.742117144952805e-05
2023-03-17 11:31:29,046 44k INFO ====> Epoch: 210, cost 672.17 s
2023-03-17 11:33:55,114 44k INFO Train Epoch: 211 [21%]
2023-03-17 11:33:55,114 44k INFO Losses: [2.7141408920288086, 1.7660748958587646, 10.120643615722656, 15.550559997558594, 0.6969175338745117], step: 181200, lr: 9.740899380309685e-05
2023-03-17 11:36:23,303 44k INFO Train Epoch: 211 [44%]
2023-03-17 11:36:23,304 44k INFO Losses: [2.586454153060913, 1.9200758934020996, 11.67911148071289, 21.77525520324707, 1.032416582107544], step: 181400, lr: 9.740899380309685e-05
2023-03-17 11:38:51,856 44k INFO Train Epoch: 211 [67%]
2023-03-17 11:38:51,856 44k INFO Losses: [2.541215658187866, 2.105384111404419, 10.684589385986328, 16.128833770751953, 1.1080601215362549], step: 181600, lr: 9.740899380309685e-05
2023-03-17 11:38:56,131 44k INFO Saving model and optimizer state at iteration 211 to ./logs\44k\G_181600.pth
2023-03-17 11:39:10,408 44k INFO Saving model and optimizer state at iteration 211 to ./logs\44k\D_181600.pth
2023-03-17 11:41:45,247 44k INFO Train Epoch: 211 [90%]
2023-03-17 11:41:45,248 44k INFO Losses: [2.339505672454834, 2.2265841960906982, 10.317011833190918, 14.147934913635254, 0.9677112102508545], step: 181800, lr: 9.740899380309685e-05
2023-03-17 11:42:45,684 44k INFO ====> Epoch: 211, cost 676.64 s
2023-03-17 11:44:26,123 44k INFO Train Epoch: 212 [14%]
2023-03-17 11:44:26,123 44k INFO Losses: [2.5529842376708984, 2.1374449729919434, 10.958593368530273, 18.155786514282227, 1.2598228454589844], step: 182000, lr: 9.739681767887146e-05
2023-03-17 11:46:54,729 44k INFO Train Epoch: 212 [37%]
2023-03-17 11:46:54,729 44k INFO Losses: [2.70794415473938, 2.0037922859191895, 9.614531517028809, 18.303199768066406, 0.7753605842590332], step: 182200, lr: 9.739681767887146e-05
2023-03-17 11:49:23,276 44k INFO Train Epoch: 212 [60%]
2023-03-17 11:49:23,276 44k INFO Losses: [2.530256748199463, 2.3375983238220215, 7.835972309112549, 18.090478897094727, 1.0682733058929443], step: 182400, lr: 9.739681767887146e-05
2023-03-17 11:49:27,453 44k INFO Saving model and optimizer state at iteration 212 to ./logs\44k\G_182400.pth
2023-03-17 11:49:31,542 44k INFO Saving model and optimizer state at iteration 212 to ./logs\44k\D_182400.pth
2023-03-17 11:52:03,157 44k INFO Train Epoch: 212 [83%]
2023-03-17 11:52:03,157 44k INFO Losses: [2.3500232696533203, 2.6102757453918457, 12.493837356567383, 16.457101821899414, 0.8494613170623779], step: 182600, lr: 9.739681767887146e-05
2023-03-17 11:53:49,244 44k INFO ====> Epoch: 212, cost 663.56 s
2023-03-17 11:54:43,429 44k INFO Train Epoch: 213 [6%]
2023-03-17 11:54:43,429 44k INFO Losses: [2.5222573280334473, 2.229140281677246, 10.748860359191895, 18.661226272583008, 1.0347487926483154], step: 182800, lr: 9.73846430766616e-05
2023-03-17 11:57:12,208 44k INFO Train Epoch: 213 [30%]
2023-03-17 11:57:12,209 44k INFO Losses: [2.5406060218811035, 2.385230541229248, 11.919771194458008, 18.70151138305664, 0.786359429359436], step: 183000, lr: 9.73846430766616e-05
2023-03-17 11:59:40,471 44k INFO Train Epoch: 213 [53%]
2023-03-17 11:59:40,471 44k INFO Losses: [2.7808661460876465, 2.1547937393188477, 7.618335723876953, 19.255605697631836, 1.4744954109191895], step: 183200, lr: 9.73846430766616e-05
2023-03-17 11:59:44,649 44k INFO Saving model and optimizer state at iteration 213 to ./logs\44k\G_183200.pth
2023-03-17 11:59:49,214 44k INFO Saving model and optimizer state at iteration 213 to ./logs\44k\D_183200.pth
2023-03-17 12:02:20,694 44k INFO Train Epoch: 213 [76%]
2023-03-17 12:02:20,694 44k INFO Losses: [2.735657215118408, 2.2200517654418945, 10.00018310546875, 15.235458374023438, 1.0425728559494019], step: 183400, lr: 9.73846430766616e-05
2023-03-17 12:04:49,191 44k INFO Train Epoch: 213 [99%]
2023-03-17 12:04:49,191 44k INFO Losses: [2.4856042861938477, 2.3658363819122314, 11.26566219329834, 18.84331703186035, 0.8903632164001465], step: 183600, lr: 9.73846430766616e-05
2023-03-17 12:04:53,286 44k INFO ====> Epoch: 213, cost 664.04 s
2023-03-17 12:07:29,958 44k INFO Train Epoch: 214 [23%]
2023-03-17 12:07:29,958 44k INFO Losses: [2.545485496520996, 2.2913284301757812, 9.560129165649414, 15.989890098571777, 1.0593316555023193], step: 183800, lr: 9.7372469996277e-05
2023-03-17 12:09:58,376 44k INFO Train Epoch: 214 [46%]
2023-03-17 12:09:58,376 44k INFO Losses: [2.532182216644287, 2.253351926803589, 4.348514556884766, 11.807778358459473, 1.007555365562439], step: 184000, lr: 9.7372469996277e-05
2023-03-17 12:10:02,415 44k INFO Saving model and optimizer state at iteration 214 to ./logs\44k\G_184000.pth
2023-03-17 12:10:06,726 44k INFO Saving model and optimizer state at iteration 214 to ./logs\44k\D_184000.pth
2023-03-17 12:12:38,383 44k INFO Train Epoch: 214 [69%]
2023-03-17 12:12:38,383 44k INFO Losses: [2.454798936843872, 2.1754846572875977, 7.94597053527832, 18.843412399291992, 1.190887689590454], step: 184200, lr: 9.7372469996277e-05
2023-03-17 12:15:06,964 44k INFO Train Epoch: 214 [92%]
2023-03-17 12:15:06,965 44k INFO Losses: [2.4073896408081055, 2.1157522201538086, 10.424430847167969, 15.691438674926758, 0.46595528721809387], step: 184400, lr: 9.7372469996277e-05
2023-03-17 12:15:57,168 44k INFO ====> Epoch: 214, cost 663.88 s
2023-03-17 12:17:47,895 44k INFO Train Epoch: 215 [15%]
2023-03-17 12:17:47,895 44k INFO Losses: [2.564326763153076, 2.0675528049468994, 6.4317169189453125, 12.483842849731445, 1.1223472356796265], step: 184600, lr: 9.736029843752747e-05
2023-03-17 12:20:16,552 44k INFO Train Epoch: 215 [39%]
2023-03-17 12:20:16,552 44k INFO Losses: [2.7275071144104004, 1.954361915588379, 8.725077629089355, 17.148757934570312, 0.9475346207618713], step: 184800, lr: 9.736029843752747e-05
2023-03-17 12:20:20,750 44k INFO Saving model and optimizer state at iteration 215 to ./logs\44k\G_184800.pth
2023-03-17 12:20:25,529 44k INFO Saving model and optimizer state at iteration 215 to ./logs\44k\D_184800.pth
2023-03-17 12:22:57,203 44k INFO Train Epoch: 215 [62%]
2023-03-17 12:22:57,204 44k INFO Losses: [2.5221645832061768, 2.1541635990142822, 5.887071132659912, 12.647122383117676, 0.9053148031234741], step: 185000, lr: 9.736029843752747e-05
2023-03-17 12:25:26,002 44k INFO Train Epoch: 215 [85%]
2023-03-17 12:25:26,002 44k INFO Losses: [2.82702374458313, 1.9524348974227905, 6.983026504516602, 14.883855819702148, 0.8379077315330505], step: 185200, lr: 9.736029843752747e-05
2023-03-17 12:27:01,949 44k INFO ====> Epoch: 215, cost 664.78 s
2023-03-17 12:28:06,688 44k INFO Train Epoch: 216 [8%]
2023-03-17 12:28:06,688 44k INFO Losses: [2.572071075439453, 2.218459129333496, 7.251830101013184, 16.82018280029297, 0.9855557680130005], step: 185400, lr: 9.734812840022278e-05
2023-03-17 12:30:35,371 44k INFO Train Epoch: 216 [31%]
2023-03-17 12:30:35,371 44k INFO Losses: [2.8976993560791016, 1.8449798822402954, 6.881504058837891, 16.650012969970703, 1.086397647857666], step: 185600, lr: 9.734812840022278e-05
2023-03-17 12:30:39,502 44k INFO Saving model and optimizer state at iteration 216 to ./logs\44k\G_185600.pth
2023-03-17 12:30:43,659 44k INFO Saving model and optimizer state at iteration 216 to ./logs\44k\D_185600.pth
2023-03-17 12:33:15,352 44k INFO Train Epoch: 216 [55%]
2023-03-17 12:33:15,353 44k INFO Losses: [2.5632877349853516, 2.35676908493042, 7.553692817687988, 16.773469924926758, 1.0663241147994995], step: 185800, lr: 9.734812840022278e-05
2023-03-17 12:35:44,104 44k INFO Train Epoch: 216 [78%]
2023-03-17 12:35:44,104 44k INFO Losses: [2.5480990409851074, 2.2369399070739746, 9.43968391418457, 16.13621711730957, 0.9748097062110901], step: 186000, lr: 9.734812840022278e-05
2023-03-17 12:38:06,279 44k INFO ====> Epoch: 216, cost 664.33 s
2023-03-17 12:38:24,975 44k INFO Train Epoch: 217 [1%]
2023-03-17 12:38:24,975 44k INFO Losses: [2.4646682739257812, 2.337564468383789, 6.540798187255859, 8.852884292602539, 1.167514681816101], step: 186200, lr: 9.733595988417275e-05
2023-03-17 12:40:53,805 44k INFO Train Epoch: 217 [24%]
2023-03-17 12:40:53,805 44k INFO Losses: [2.3899407386779785, 2.222371816635132, 7.8649582862854, 17.537654876708984, 0.8362668752670288], step: 186400, lr: 9.733595988417275e-05
2023-03-17 12:40:57,771 44k INFO Saving model and optimizer state at iteration 217 to ./logs\44k\G_186400.pth
2023-03-17 12:41:01,909 44k INFO Saving model and optimizer state at iteration 217 to ./logs\44k\D_186400.pth
2023-03-17 12:43:33,504 44k INFO Train Epoch: 217 [47%]
2023-03-17 12:43:33,505 44k INFO Losses: [2.32983660697937, 2.498037576675415, 11.407770156860352, 15.372930526733398, 1.1758216619491577], step: 186600, lr: 9.733595988417275e-05
2023-03-17 12:46:02,258 44k INFO Train Epoch: 217 [71%]
2023-03-17 12:46:02,258 44k INFO Losses: [2.729607582092285, 2.142976999282837, 11.411117553710938, 17.761390686035156, 0.9604809284210205], step: 186800, lr: 9.733595988417275e-05
2023-03-17 12:48:30,842 44k INFO Train Epoch: 217 [94%]
2023-03-17 12:48:30,843 44k INFO Losses: [2.4959394931793213, 2.09059476852417, 7.631901264190674, 15.851984024047852, 1.0440446138381958], step: 187000, lr: 9.733595988417275e-05
2023-03-17 12:49:10,560 44k INFO ====> Epoch: 217, cost 664.28 s
2023-03-17 12:51:11,887 44k INFO Train Epoch: 218 [17%]
2023-03-17 12:51:11,887 44k INFO Losses: [2.430086135864258, 2.4179129600524902, 7.861630439758301, 16.468002319335938, 0.6401231288909912], step: 187200, lr: 9.732379288918723e-05
2023-03-17 12:51:16,013 44k INFO Saving model and optimizer state at iteration 218 to ./logs\44k\G_187200.pth
2023-03-17 12:51:21,199 44k INFO Saving model and optimizer state at iteration 218 to ./logs\44k\D_187200.pth
2023-03-17 12:53:52,971 44k INFO Train Epoch: 218 [40%]
2023-03-17 12:53:52,971 44k INFO Losses: [2.5498321056365967, 2.112281322479248, 9.90023136138916, 18.50973892211914, 0.8108535408973694], step: 187400, lr: 9.732379288918723e-05
2023-03-17 12:56:21,735 44k INFO Train Epoch: 218 [63%]
2023-03-17 12:56:21,735 44k INFO Losses: [2.7466282844543457, 1.9207072257995605, 6.659596920013428, 12.943429946899414, 1.006450891494751], step: 187600, lr: 9.732379288918723e-05
2023-03-17 12:58:50,531 44k INFO Train Epoch: 218 [87%]
2023-03-17 12:58:50,531 44k INFO Losses: [2.642427682876587, 2.271390676498413, 9.884720802307129, 14.518815994262695, 1.296195149421692], step: 187800, lr: 9.732379288918723e-05
2023-03-17 13:00:16,131 44k INFO ====> Epoch: 218, cost 665.57 s
2023-03-17 13:01:31,272 44k INFO Train Epoch: 219 [10%]
2023-03-17 13:01:31,272 44k INFO Losses: [2.810072422027588, 2.112945795059204, 8.64144515991211, 15.03377628326416, 1.192238450050354], step: 188000, lr: 9.731162741507607e-05
2023-03-17 13:01:35,339 44k INFO Saving model and optimizer state at iteration 219 to ./logs\44k\G_188000.pth
2023-03-17 13:01:40,330 44k INFO Saving model and optimizer state at iteration 219 to ./logs\44k\D_188000.pth
2023-03-17 13:04:12,093 44k INFO Train Epoch: 219 [33%]
2023-03-17 13:04:12,093 44k INFO Losses: [2.657827615737915, 1.937217116355896, 11.489752769470215, 18.096744537353516, 0.8327619433403015], step: 188200, lr: 9.731162741507607e-05
2023-03-17 13:06:40,683 44k INFO Train Epoch: 219 [56%]
2023-03-17 13:06:40,683 44k INFO Losses: [2.5412118434906006, 2.1989290714263916, 10.83952522277832, 17.69643211364746, 0.809262216091156], step: 188400, lr: 9.731162741507607e-05
2023-03-17 13:09:09,509 44k INFO Train Epoch: 219 [79%]
2023-03-17 13:09:09,510 44k INFO Losses: [2.840298891067505, 1.8612078428268433, 6.323737144470215, 14.539979934692383, 1.1242356300354004], step: 188600, lr: 9.731162741507607e-05
2023-03-17 13:11:21,055 44k INFO ====> Epoch: 219, cost 664.92 s
2023-03-17 13:11:50,131 44k INFO Train Epoch: 220 [3%]
2023-03-17 13:11:50,132 44k INFO Losses: [2.530442953109741, 2.3697240352630615, 9.328776359558105, 18.99569320678711, 1.2080276012420654], step: 188800, lr: 9.729946346164919e-05
2023-03-17 13:11:54,187 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\G_188800.pth
2023-03-17 13:11:58,568 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\D_188800.pth
2023-03-17 13:14:30,444 44k INFO Train Epoch: 220 [26%]
2023-03-17 13:14:30,444 44k INFO Losses: [2.6506426334381104, 2.153394937515259, 7.862729072570801, 15.249713897705078, 1.0481107234954834], step: 189000, lr: 9.729946346164919e-05
2023-03-17 13:16:59,027 44k INFO Train Epoch: 220 [49%]
2023-03-17 13:16:59,028 44k INFO Losses: [2.5761334896087646, 2.3226208686828613, 9.163739204406738, 20.233177185058594, 1.2344282865524292], step: 189200, lr: 9.729946346164919e-05
2023-03-17 13:19:27,868 44k INFO Train Epoch: 220 [72%]
2023-03-17 13:19:27,869 44k INFO Losses: [2.6959023475646973, 2.213590383529663, 11.19178581237793, 18.101360321044922, 0.9561485648155212], step: 189400, lr: 9.729946346164919e-05
2023-03-17 13:21:56,654 44k INFO Train Epoch: 220 [95%]
2023-03-17 13:21:56,654 44k INFO Losses: [2.206878662109375, 2.731790065765381, 10.837057113647461, 16.47359848022461, 1.331818699836731], step: 189600, lr: 9.729946346164919e-05
2023-03-17 13:22:00,926 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\G_189600.pth
2023-03-17 13:22:05,473 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\D_189600.pth
2023-03-17 13:22:37,776 44k INFO ====> Epoch: 220, cost 676.72 s
2023-03-17 13:24:49,458 44k INFO Train Epoch: 221 [19%]
2023-03-17 13:24:49,458 44k INFO Losses: [2.6142475605010986, 2.0344841480255127, 8.518631935119629, 18.67056655883789, 0.9936672449111938], step: 189800, lr: 9.728730102871649e-05
2023-03-17 13:27:18,152 44k INFO Train Epoch: 221 [42%]
2023-03-17 13:27:18,152 44k INFO Losses: [2.631883144378662, 2.095200777053833, 7.491177082061768, 14.36466121673584, 1.2082356214523315], step: 190000, lr: 9.728730102871649e-05
2023-03-17 13:29:46,833 44k INFO Train Epoch: 221 [65%]
2023-03-17 13:29:46,833 44k INFO Losses: [2.5411605834960938, 2.292088508605957, 9.461869239807129, 17.898487091064453, 0.9604562520980835], step: 190200, lr: 9.728730102871649e-05
2023-03-17 13:32:15,652 44k INFO Train Epoch: 221 [88%]
2023-03-17 13:32:15,653 44k INFO Losses: [2.5916378498077393, 2.249612331390381, 9.52937126159668, 17.526330947875977, 0.9899482727050781], step: 190400, lr: 9.728730102871649e-05
2023-03-17 13:32:26,012 44k INFO Saving model and optimizer state at iteration 221 to ./logs\44k\G_190400.pth
2023-03-17 13:32:31,491 44k INFO Saving model and optimizer state at iteration 221 to ./logs\44k\D_190400.pth
2023-03-17 13:33:49,536 44k INFO ====> Epoch: 221, cost 671.76 s
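
Every Losses record above packs five loss values plus the global step and current learning rate into one line. A small parsing sketch for pulling those series out of a log shaped like this one (the regex is an assumption based on the line format shown here, and the five values are kept as an opaque list because the log itself does not name them):

import re

# Matches the "... 44k INFO Losses: [...], step: N, lr: X" records above.
LOSS_RE = re.compile(r"INFO Losses: \[([^\]]*)\], step: (\d+), lr: ([0-9.e+-]+)")

def parse_losses(lines):
    for line in lines:
        m = LOSS_RE.search(line)
        if m:
            yield {
                "step": int(m.group(2)),
                "lr": float(m.group(3)),
                "losses": [float(v) for v in m.group(1).split(",")],
            }

Run over the log, this yields one record per logged step (every 200 optimizer steps here), which is enough to plot the loss curves over training.
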
2023-03-17 13:35:15,243 44k INFO Train Epoch: 222 [11%]
2023-03-17 13:35:15,243 44k INFO Losses: [2.561826229095459, 1.8541224002838135, 9.440753936767578, 13.526259422302246, 0.8567599654197693], step: 190600, lr: 9.727514011608789e-05
2023-03-17 13:37:44,108 44k INFO Train Epoch: 222 [35%]
2023-03-17 13:37:44,109 44k INFO Losses: [2.728055000305176, 2.0776479244232178, 7.323089599609375, 15.653533935546875, 0.8804511427879333], step: 190800, lr: 9.727514011608789e-05
2023-03-17 13:40:12,719 44k INFO Train Epoch: 222 [58%]
2023-03-17 13:40:12,719 44k INFO Losses: [2.492541551589966, 2.564396381378174, 9.699569702148438, 17.523771286010742, 0.9223407506942749], step: 191000, lr: 9.727514011608789e-05
2023-03-17 13:42:41,607 44k INFO Train Epoch: 222 [81%]
2023-03-17 13:42:41,607 44k INFO Losses: [2.557429313659668, 2.3032734394073486, 10.027416229248047, 19.260507583618164, 1.3174904584884644], step: 191200, lr: 9.727514011608789e-05
2023-03-17 13:42:45,742 44k INFO Saving model and optimizer state at iteration 222 to ./logs\44k\G_191200.pth
2023-03-17 13:42:50,952 44k INFO Saving model and optimizer state at iteration 222 to ./logs\44k\D_191200.pth
2023-03-17 13:44:55,101 44k INFO ====> Epoch: 222, cost 665.57 s
2023-03-17 13:45:34,453 44k INFO Train Epoch: 223 [4%]
2023-03-17 13:45:34,453 44k INFO Losses: [2.5423548221588135, 2.249544858932495, 10.846890449523926, 15.011199951171875, 0.8705413937568665], step: 191400, lr: 9.726298072357337e-05
2023-03-17 13:48:03,336 44k INFO Train Epoch: 223 [27%]
2023-03-17 13:48:03,337 44k INFO Losses: [2.6625397205352783, 2.279538154602051, 8.525496482849121, 17.53712272644043, 0.9734299182891846], step: 191600, lr: 9.726298072357337e-05
2023-03-17 13:50:32,090 44k INFO Train Epoch: 223 [51%]
2023-03-17 13:50:32,090 44k INFO Losses: [2.4544599056243896, 2.4523825645446777, 9.23486042022705, 15.361289024353027, 1.3087626695632935], step: 191800, lr: 9.726298072357337e-05
2023-03-17 13:53:01,033 44k INFO Train Epoch: 223 [74%]
2023-03-17 13:53:01,034 44k INFO Losses: [2.7235829830169678, 2.1110243797302246, 6.3612260818481445, 11.458047866821289, 1.240590214729309], step: 192000, lr: 9.726298072357337e-05
2023-03-17 13:53:05,157 44k INFO Saving model and optimizer state at iteration 223 to ./logs\44k\G_192000.pth
2023-03-17 13:53:10,410 44k INFO Saving model and optimizer state at iteration 223 to ./logs\44k\D_192000.pth
2023-03-17 13:55:42,063 44k INFO Train Epoch: 223 [97%]
2023-03-17 13:55:42,064 44k INFO Losses: [2.5958967208862305, 1.9889823198318481, 7.128242492675781, 13.590474128723145, 0.9796684384346008], step: 192200, lr: 9.726298072357337e-05
2023-03-17 13:56:01,048 44k INFO ====> Epoch: 223, cost 665.95 s
2023-03-17 13:58:23,070 44k INFO Train Epoch: 224 [20%]
2023-03-17 13:58:23,071 44k INFO Losses: [2.761589527130127, 2.4184250831604004, 8.915008544921875, 12.952095985412598, 0.9741305112838745], step: 192400, lr: 9.725082285098293e-05
2023-03-17 14:00:51,701 44k INFO Train Epoch: 224 [43%]
2023-03-17 14:00:51,702 44k INFO Losses: [2.67128324508667, 2.051316976547241, 6.337381839752197, 12.918463706970215, 0.8970950245857239], step: 192600, lr: 9.725082285098293e-05
2023-03-17 14:03:20,623 44k INFO Train Epoch: 224 [67%]
2023-03-17 14:03:20,623 44k INFO Losses: [2.609917402267456, 2.146012544631958, 8.347381591796875, 19.613054275512695, 1.0089319944381714], step: 192800, lr: 9.725082285098293e-05
2023-03-17 14:03:24,731 44k INFO Saving model and optimizer state at iteration 224 to ./logs\44k\G_192800.pth
2023-03-17 14:03:30,038 44k INFO Saving model and optimizer state at iteration 224 to ./logs\44k\D_192800.pth
2023-03-17 14:06:02,066 44k INFO Train Epoch: 224 [90%]
2023-03-17 14:06:02,067 44k INFO Losses: [2.595521926879883, 1.9811025857925415, 9.1431303024292, 16.269472122192383, 1.1952967643737793], step: 193000, lr: 9.725082285098293e-05
2023-03-17 14:07:06,929 44k INFO ====> Epoch: 224, cost 665.88 s
2023-03-17 14:08:42,857 44k INFO Train Epoch: 225 [13%]
2023-03-17 14:08:42,857 44k INFO Losses: [2.436035633087158, 2.0675344467163086, 9.940725326538086, 18.530784606933594, 1.2609487771987915], step: 193200, lr: 9.723866649812655e-05
2023-03-17 14:11:11,712 44k INFO Train Epoch: 225 [36%]
2023-03-17 14:11:11,712 44k INFO Losses: [2.512924909591675, 2.333123207092285, 9.257488250732422, 16.66678237915039, 1.118751049041748], step: 193400, lr: 9.723866649812655e-05
2023-03-17 14:13:40,348 44k INFO Train Epoch: 225 [59%]
2023-03-17 14:13:40,349 44k INFO Losses: [2.449540376663208, 2.2732973098754883, 11.068074226379395, 18.650503158569336, 1.0709233283996582], step: 193600, lr: 9.723866649812655e-05
2023-03-17 14:13:44,420 44k INFO Saving model and optimizer state at iteration 225 to ./logs\44k\G_193600.pth
2023-03-17 14:13:49,865 44k INFO Saving model and optimizer state at iteration 225 to ./logs\44k\D_193600.pth
2023-03-17 14:16:21,859 44k INFO Train Epoch: 225 [83%]
2023-03-17 14:16:21,859 44k INFO Losses: [2.693375587463379, 2.075554847717285, 8.380537986755371, 13.101231575012207, 1.10446298122406], step: 193800, lr: 9.723866649812655e-05
2023-03-17 14:18:12,819 44k INFO ====> Epoch: 225, cost 665.89 s
2023-03-17 14:19:02,670 44k INFO Train Epoch: 226 [6%]
2023-03-17 14:19:02,671 44k INFO Losses: [2.465904474258423, 2.3409485816955566, 9.715896606445312, 18.7215633392334, 1.2501780986785889], step: 194000, lr: 9.722651166481428e-05
2023-03-17 14:21:31,359 44k INFO Train Epoch: 226 [29%]
2023-03-17 14:21:31,360 44k INFO Losses: [2.4311766624450684, 2.3104987144470215, 10.393233299255371, 18.371274948120117, 0.733143150806427], step: 194200, lr: 9.722651166481428e-05
2023-03-17 14:23:59,786 44k INFO Train Epoch: 226 [52%]
2023-03-17 14:23:59,786 44k INFO Losses: [2.468031406402588, 2.3239574432373047, 10.98659610748291, 18.766687393188477, 1.2292083501815796], step: 194400, lr: 9.722651166481428e-05
2023-03-17 14:24:03,941 44k INFO Saving model and optimizer state at iteration 226 to ./logs\44k\G_194400.pth
2023-03-17 14:24:09,434 44k INFO Saving model and optimizer state at iteration 226 to ./logs\44k\D_194400.pth
2023-03-17 14:26:41,232 44k INFO Train Epoch: 226 [75%]
2023-03-17 14:26:41,232 44k INFO Losses: [2.670288562774658, 2.5119869709014893, 9.171467781066895, 18.571945190429688, 1.2188271284103394], step: 194600, lr: 9.722651166481428e-05
2023-03-17 14:29:09,849 44k INFO Train Epoch: 226 [99%]
2023-03-17 14:29:09,849 44k INFO Losses: [2.599092960357666, 2.3690249919891357, 9.993413925170898, 16.404794692993164, 1.1141105890274048], step: 194800, lr: 9.722651166481428e-05
2023-03-17 14:29:18,364 44k INFO ====> Epoch: 226, cost 665.54 s
2023-03-17 14:31:50,608 44k INFO Train Epoch: 227 [22%]
2023-03-17 14:31:50,608 44k INFO Losses: [2.4305405616760254, 2.047884225845337, 9.744362831115723, 16.757911682128906, 1.451468825340271], step: 195000, lr: 9.721435835085619e-05
2023-03-17 14:34:19,116 44k INFO Train Epoch: 227 [45%]
2023-03-17 14:34:19,116 44k INFO Losses: [2.566953420639038, 2.2417612075805664, 10.173957824707031, 20.42435073852539, 0.9842907786369324], step: 195200, lr: 9.721435835085619e-05
2023-03-17 14:34:23,135 44k INFO Saving model and optimizer state at iteration 227 to ./logs\44k\G_195200.pth
2023-03-17 14:34:28,618 44k INFO Saving model and optimizer state at iteration 227 to ./logs\44k\D_195200.pth
2023-03-17 14:37:00,345 44k INFO Train Epoch: 227 [68%]
2023-03-17 14:37:00,346 44k INFO Losses: [2.6388444900512695, 2.0928947925567627, 9.001906394958496, 17.522878646850586, 0.8402576446533203], step: 195400, lr: 9.721435835085619e-05
2023-03-17 14:39:28,929 44k INFO Train Epoch: 227 [91%]
2023-03-17 14:39:28,930 44k INFO Losses: [2.5021305084228516, 2.1301567554473877, 9.687324523925781, 15.971527099609375, 1.0983365774154663], step: 195600, lr: 9.721435835085619e-05
2023-03-17 14:40:23,516 44k INFO ====> Epoch: 227, cost 665.15 s
2023-03-17 14:42:09,845 44k INFO Train Epoch: 228 [15%]
Train Epoch: 228 [15%] 2023-03-17 14:42:09,845 44k INFO Losses: [2.679429054260254, 2.0890603065490723, 6.837780952453613, 15.540587425231934, 1.1949204206466675], step: 195800, lr: 9.720220655606233e-05 2023-03-17 14:44:38,377 44k INFO Train Epoch: 228 [38%] 2023-03-17 14:44:38,377 44k INFO Losses: [2.600149154663086, 2.223633289337158, 11.069351196289062, 18.491432189941406, 1.030755877494812], step: 196000, lr: 9.720220655606233e-05 2023-03-17 14:44:42,473 44k INFO Saving model and optimizer state at iteration 228 to ./logs\44k\G_196000.pth 2023-03-17 14:44:47,643 44k INFO Saving model and optimizer state at iteration 228 to ./logs\44k\D_196000.pth 2023-03-17 14:47:19,405 44k INFO Train Epoch: 228 [61%] 2023-03-17 14:47:19,405 44k INFO Losses: [2.3543787002563477, 2.357489824295044, 15.372823715209961, 18.736597061157227, 1.2448182106018066], step: 196200, lr: 9.720220655606233e-05 2023-03-17 14:49:48,122 44k INFO Train Epoch: 228 [84%] 2023-03-17 14:49:48,122 44k INFO Losses: [2.1771042346954346, 2.4816083908081055, 12.191100120544434, 18.785642623901367, 0.5963658690452576], step: 196400, lr: 9.720220655606233e-05 2023-03-17 14:51:28,549 44k INFO ====> Epoch: 228, cost 665.03 s 2023-03-17 14:52:28,752 44k INFO Train Epoch: 229 [7%] 2023-03-17 14:52:28,752 44k INFO Losses: [2.517145872116089, 2.0755419731140137, 5.247823715209961, 12.34117317199707, 0.9581663608551025], step: 196600, lr: 9.719005628024282e-05 2023-03-17 14:54:57,610 44k INFO Train Epoch: 229 [31%] 2023-03-17 14:54:57,610 44k INFO Losses: [2.4619650840759277, 2.1951112747192383, 7.815654277801514, 16.106212615966797, 1.2049601078033447], step: 196800, lr: 9.719005628024282e-05 2023-03-17 14:55:01,719 44k INFO Saving model and optimizer state at iteration 229 to ./logs\44k\G_196800.pth 2023-03-17 14:55:06,849 44k INFO Saving model and optimizer state at iteration 229 to ./logs\44k\D_196800.pth 2023-03-17 14:57:38,378 44k INFO Train Epoch: 229 [54%] 2023-03-17 14:57:38,379 44k INFO Losses: [2.6864798069000244, 1.940460443496704, 8.005288124084473, 13.744611740112305, 0.9005299210548401], step: 197000, lr: 9.719005628024282e-05 2023-03-17 15:00:07,197 44k INFO Train Epoch: 229 [77%] 2023-03-17 15:00:07,197 44k INFO Losses: [2.604574680328369, 1.956700086593628, 13.42198657989502, 18.64751625061035, 1.090871810913086], step: 197200, lr: 9.719005628024282e-05 2023-03-17 15:02:33,515 44k INFO ====> Epoch: 229, cost 664.97 s 2023-03-17 15:02:47,848 44k INFO Train Epoch: 230 [0%] 2023-03-17 15:02:47,848 44k INFO Losses: [2.3813328742980957, 2.160189628601074, 11.233779907226562, 14.865093231201172, 1.1333894729614258], step: 197400, lr: 9.717790752320778e-05 2023-03-17 15:05:16,556 44k INFO Train Epoch: 230 [23%] 2023-03-17 15:05:16,556 44k INFO Losses: [2.52127742767334, 2.201789140701294, 6.3546142578125, 15.339993476867676, 1.1312516927719116], step: 197600, lr: 9.717790752320778e-05 2023-03-17 15:05:20,767 44k INFO Saving model and optimizer state at iteration 230 to ./logs\44k\G_197600.pth 2023-03-17 15:05:26,071 44k INFO Saving model and optimizer state at iteration 230 to ./logs\44k\D_197600.pth 2023-03-17 15:07:57,554 44k INFO Train Epoch: 230 [47%] 2023-03-17 15:07:57,554 44k INFO Losses: [2.097142457962036, 2.82875919342041, 9.40123176574707, 18.09060287475586, 0.9439107775688171], step: 197800, lr: 9.717790752320778e-05 2023-03-17 15:10:26,237 44k INFO Train Epoch: 230 [70%] 2023-03-17 15:10:26,238 44k INFO Losses: [2.712658643722534, 1.9829151630401611, 5.816107749938965, 15.878138542175293, 1.2445687055587769], 
step: 198000, lr: 9.717790752320778e-05 2023-03-17 15:12:54,786 44k INFO Train Epoch: 230 [93%] 2023-03-17 15:12:54,787 44k INFO Losses: [2.4807076454162598, 1.9391447305679321, 13.669112205505371, 18.54149627685547, 1.306135892868042], step: 198200, lr: 9.717790752320778e-05 2023-03-17 15:13:38,966 44k INFO ====> Epoch: 230, cost 665.45 s 2023-03-17 15:15:35,676 44k INFO Train Epoch: 231 [16%] 2023-03-17 15:15:35,677 44k INFO Losses: [2.457102060317993, 2.4441370964050293, 9.716008186340332, 15.521600723266602, 1.403611421585083], step: 198400, lr: 9.716576028476738e-05 2023-03-17 15:15:39,745 44k INFO Saving model and optimizer state at iteration 231 to ./logs\44k\G_198400.pth 2023-03-17 15:15:45,388 44k INFO Saving model and optimizer state at iteration 231 to ./logs\44k\D_198400.pth 2023-03-17 15:18:16,766 44k INFO Train Epoch: 231 [39%] 2023-03-17 15:18:16,766 44k INFO Losses: [2.5164568424224854, 2.5682382583618164, 10.48430347442627, 15.969467163085938, 0.9392551779747009], step: 198600, lr: 9.716576028476738e-05 2023-03-17 15:20:45,426 44k INFO Train Epoch: 231 [63%] 2023-03-17 15:20:45,426 44k INFO Losses: [2.4967703819274902, 2.0969231128692627, 6.472311019897461, 11.747690200805664, 0.9787726402282715], step: 198800, lr: 9.716576028476738e-05 2023-03-17 15:23:14,207 44k INFO Train Epoch: 231 [86%] 2023-03-17 15:23:14,207 44k INFO Losses: [2.5587406158447266, 2.183486223220825, 9.904661178588867, 19.842782974243164, 1.0780577659606934], step: 199000, lr: 9.716576028476738e-05 2023-03-17 15:24:44,202 44k INFO ====> Epoch: 231, cost 665.24 s 2023-03-17 15:25:54,887 44k INFO Train Epoch: 232 [9%] 2023-03-17 15:25:54,888 44k INFO Losses: [2.5499799251556396, 2.166936159133911, 9.443096160888672, 19.037874221801758, 1.0833159685134888], step: 199200, lr: 9.715361456473177e-05 2023-03-17 15:25:59,016 44k INFO Saving model and optimizer state at iteration 232 to ./logs\44k\G_199200.pth 2023-03-17 15:26:04,330 44k INFO Saving model and optimizer state at iteration 232 to ./logs\44k\D_199200.pth 2023-03-17 15:28:35,812 44k INFO Train Epoch: 232 [32%] 2023-03-17 15:28:35,812 44k INFO Losses: [2.420562982559204, 2.2692060470581055, 8.919405937194824, 19.03289222717285, 1.035207748413086], step: 199400, lr: 9.715361456473177e-05 2023-03-17 15:31:04,266 44k INFO Train Epoch: 232 [55%] 2023-03-17 15:31:04,267 44k INFO Losses: [2.625706195831299, 2.2677788734436035, 10.874268531799316, 19.48659324645996, 1.2157964706420898], step: 199600, lr: 9.715361456473177e-05 2023-03-17 15:33:33,004 44k INFO Train Epoch: 232 [79%] 2023-03-17 15:33:33,004 44k INFO Losses: [2.506282091140747, 2.124828815460205, 7.050907135009766, 13.573482513427734, 1.4110721349716187], step: 199800, lr: 9.715361456473177e-05 2023-03-17 15:35:49,212 44k INFO ====> Epoch: 232, cost 665.01 s 2023-03-17 15:36:13,929 44k INFO Train Epoch: 233 [2%] 2023-03-17 15:36:13,929 44k INFO Losses: [2.400456190109253, 2.144167423248291, 10.53719425201416, 17.07789421081543, 1.1368826627731323], step: 200000, lr: 9.714147036291117e-05 2023-03-17 15:36:17,905 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\G_200000.pth 2023-03-17 15:36:23,763 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\D_200000.pth 2023-03-17 15:38:55,570 44k INFO Train Epoch: 233 [25%] 2023-03-17 15:38:55,570 44k INFO Losses: [2.6731455326080322, 2.07804536819458, 7.24778938293457, 13.30770492553711, 0.8594455718994141], step: 200200, lr: 9.714147036291117e-05 2023-03-17 15:41:24,015 44k INFO Train Epoch: 233 [48%] 
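
Note on the lr values logged above: consecutive epochs differ by a constant factor (for example 9.726298072357337e-05 / 9.727514011608789e-05 ≈ 0.999875), so the schedule is a plain per-epoch exponential decay. A minimal sketch that reproduces the logged values; the base lr of 1e-4 at epoch 1 is inferred from the log itself, not read from any config:

    # Sketch: per-epoch exponential lr decay implied by this log.
    # base_lr and decay are inferred from the logged values, not from a config.
    base_lr = 1e-4
    decay = 9.726298072357337e-05 / 9.727514011608789e-05  # epoch 223 / epoch 222, ~0.999875

    def lr_for_epoch(epoch: int) -> float:
        # epoch is 1-indexed; epoch 1 trains at base_lr
        return base_lr * decay ** (epoch - 1)

    assert abs(lr_for_epoch(222) - 9.727514011608789e-05) < 1e-10
    assert abs(lr_for_epoch(230) - 9.717790752320778e-05) < 1e-10
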
2023-03-17 15:41:24,016 44k INFO Losses: [2.5296335220336914, 2.1215343475341797, 8.765459060668945, 13.810646057128906, 0.6411775350570679], step: 200400, lr: 9.714147036291117e-05
2023-03-17 15:43:52,831 44k INFO Train Epoch: 233 [71%]
2023-03-17 15:43:52,832 44k INFO Losses: [2.440690040588379, 2.4695966243743896, 7.98685884475708, 15.21174430847168, 1.302851676940918], step: 200600, lr: 9.714147036291117e-05
2023-03-17 15:46:21,227 44k INFO Train Epoch: 233 [95%]
2023-03-17 15:46:21,228 44k INFO Losses: [2.8792924880981445, 1.9184268712997437, 8.647856712341309, 15.299065589904785, 1.0259262323379517], step: 200800, lr: 9.714147036291117e-05
2023-03-17 15:46:25,241 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\G_200800.pth
2023-03-17 15:46:31,046 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\D_200800.pth
2023-03-17 15:47:07,847 44k INFO ====> Epoch: 233, cost 678.64 s
2023-03-17 15:49:14,934 44k INFO Train Epoch: 234 [18%]
2023-03-17 15:49:14,934 44k INFO Losses: [2.508195161819458, 2.3392746448516846, 9.59250259399414, 17.736095428466797, 1.258073329925537], step: 201000, lr: 9.71293276791158e-05
2023-03-17 15:51:43,459 44k INFO Train Epoch: 234 [41%]
2023-03-17 15:51:43,460 44k INFO Losses: [2.5706210136413574, 2.241269111633301, 5.97560453414917, 16.11514663696289, 1.0873310565948486], step: 201200, lr: 9.71293276791158e-05
2023-03-17 15:54:12,094 44k INFO Train Epoch: 234 [64%]
2023-03-17 15:54:12,095 44k INFO Losses: [2.624234199523926, 2.191706657409668, 11.572145462036133, 18.984264373779297, 1.1585451364517212], step: 201400, lr: 9.71293276791158e-05
2023-03-17 15:56:40,757 44k INFO Train Epoch: 234 [87%]
2023-03-17 15:56:40,758 44k INFO Losses: [2.576266288757324, 2.13730525970459, 8.524592399597168, 15.70046329498291, 1.2214363813400269], step: 201600, lr: 9.71293276791158e-05
2023-03-17 15:56:44,833 44k INFO Saving model and optimizer state at iteration 234 to ./logs\44k\G_201600.pth
2023-03-17 15:56:50,545 44k INFO Saving model and optimizer state at iteration 234 to ./logs\44k\D_201600.pth
2023-03-17 15:58:13,319 44k INFO ====> Epoch: 234, cost 665.47 s
2023-03-17 15:59:34,421 44k INFO Train Epoch: 235 [11%]
2023-03-17 15:59:34,421 44k INFO Losses: [2.4227588176727295, 2.2159347534179688, 9.189245223999023, 17.550403594970703, 1.2102501392364502], step: 201800, lr: 9.711718651315591e-05
2023-03-17 16:02:02,970 44k INFO Train Epoch: 235 [34%]
2023-03-17 16:02:02,970 44k INFO Losses: [2.640246868133545, 2.1371076107025146, 11.62421703338623, 19.091136932373047, 1.071480631828308], step: 202000, lr: 9.711718651315591e-05
2023-03-17 16:04:31,529 44k INFO Train Epoch: 235 [57%]
2023-03-17 16:04:31,529 44k INFO Losses: [2.2804982662200928, 2.9040310382843018, 4.445284843444824, 8.341607093811035, 0.9329047799110413], step: 202200, lr: 9.711718651315591e-05
2023-03-17 16:07:00,223 44k INFO Train Epoch: 235 [80%]
2023-03-17 16:07:00,224 44k INFO Losses: [2.5171236991882324, 2.504962205886841, 9.801101684570312, 19.016921997070312, 1.3551393747329712], step: 202400, lr: 9.711718651315591e-05
2023-03-17 16:07:04,336 44k INFO Saving model and optimizer state at iteration 235 to ./logs\44k\G_202400.pth
2023-03-17 16:07:09,950 44k INFO Saving model and optimizer state at iteration 235 to ./logs\44k\D_202400.pth
2023-03-17 16:09:18,624 44k INFO ====> Epoch: 235, cost 665.31 s
2023-03-17 16:09:53,594 44k INFO Train Epoch: 236 [3%]
2023-03-17 16:09:53,595 44k INFO Losses: [2.5622763633728027, 2.2078099250793457, 9.637804985046387, 18.942119598388672, 0.839465320110321], step: 202600, lr: 9.710504686484176e-05
2023-03-17 16:12:22,502 44k INFO Train Epoch: 236 [27%]
2023-03-17 16:12:22,503 44k INFO Losses: [2.6290385723114014, 2.349421739578247, 9.31667423248291, 17.537330627441406, 1.0202364921569824], step: 202800, lr: 9.710504686484176e-05
2023-03-17 16:14:51,277 44k INFO Train Epoch: 236 [50%]
2023-03-17 16:14:51,278 44k INFO Losses: [2.092291831970215, 2.834592580795288, 9.795149803161621, 15.552566528320312, 0.9724324345588684], step: 203000, lr: 9.710504686484176e-05
2023-03-17 16:17:20,215 44k INFO Train Epoch: 236 [73%]
2023-03-17 16:17:20,216 44k INFO Losses: [2.2673473358154297, 2.345621347427368, 11.319207191467285, 18.20125961303711, 0.8604238033294678], step: 203200, lr: 9.710504686484176e-05
2023-03-17 16:17:24,323 44k INFO Saving model and optimizer state at iteration 236 to ./logs\44k\G_203200.pth
2023-03-17 16:17:29,563 44k INFO Saving model and optimizer state at iteration 236 to ./logs\44k\D_203200.pth
2023-03-17 16:20:00,886 44k INFO Train Epoch: 236 [96%]
2023-03-17 16:20:00,886 44k INFO Losses: [2.6008219718933105, 2.0529165267944336, 10.718063354492188, 14.779176712036133, 0.9714707136154175], step: 203400, lr: 9.710504686484176e-05
2023-03-17 16:20:24,372 44k INFO ====> Epoch: 236, cost 665.75 s
2023-03-17 16:22:42,005 44k INFO Train Epoch: 237 [19%]
2023-03-17 16:22:42,005 44k INFO Losses: [2.7453389167785645, 1.9580488204956055, 5.225891590118408, 14.202869415283203, 1.0745266675949097], step: 203600, lr: 9.709290873398365e-05
2023-03-17 16:25:10,463 44k INFO Train Epoch: 237 [43%]
2023-03-17 16:25:10,464 44k INFO Losses: [2.834702730178833, 1.9080661535263062, 5.137028694152832, 10.840741157531738, 1.015594482421875], step: 203800, lr: 9.709290873398365e-05
2023-03-17 16:27:39,281 44k INFO Train Epoch: 237 [66%]
2023-03-17 16:27:39,282 44k INFO Losses: [2.567270517349243, 2.2205941677093506, 7.654080390930176, 17.127168655395508, 0.8098355531692505], step: 204000, lr: 9.709290873398365e-05
2023-03-17 16:27:43,279 44k INFO Saving model and optimizer state at iteration 237 to ./logs\44k\G_204000.pth
2023-03-17 16:27:49,098 44k INFO Saving model and optimizer state at iteration 237 to ./logs\44k\D_204000.pth
2023-03-17 16:30:20,631 44k INFO Train Epoch: 237 [89%]
2023-03-17 16:30:20,631 44k INFO Losses: [2.64326810836792, 2.4724719524383545, 10.108359336853027, 18.115074157714844, 1.023614764213562], step: 204200, lr: 9.709290873398365e-05
2023-03-17 16:31:29,977 44k INFO ====> Epoch: 237, cost 665.60 s
2023-03-17 16:33:01,368 44k INFO Train Epoch: 238 [12%]
2023-03-17 16:33:01,369 44k INFO Losses: [2.5235095024108887, 2.2948079109191895, 10.865445137023926, 16.403430938720703, 1.1577202081680298], step: 204400, lr: 9.70807721203919e-05
2023-03-17 16:35:30,092 44k INFO Train Epoch: 238 [35%]
2023-03-17 16:35:30,093 44k INFO Losses: [2.5873236656188965, 2.0520684719085693, 9.559269905090332, 18.107418060302734, 0.6032602787017822], step: 204600, lr: 9.70807721203919e-05
2023-03-17 16:37:58,700 44k INFO Train Epoch: 238 [59%]
2023-03-17 16:37:58,701 44k INFO Losses: [2.5115976333618164, 2.318246603012085, 9.956771850585938, 16.967744827270508, 1.1064894199371338], step: 204800, lr: 9.70807721203919e-05
2023-03-17 16:38:02,602 44k INFO Saving model and optimizer state at iteration 238 to ./logs\44k\G_204800.pth
2023-03-17 16:38:08,462 44k INFO Saving model and optimizer state at iteration 238 to ./logs\44k\D_204800.pth
2023-03-17 16:40:40,247 44k INFO Train Epoch: 238 [82%]
2023-03-17 16:40:40,247 44k INFO Losses: [2.250046491622925, 2.8448798656463623, 7.6005539894104, 12.498329162597656, 1.0400444269180298], step: 205000, lr: 9.70807721203919e-05
2023-03-17 16:42:35,479 44k INFO ====> Epoch: 238, cost 665.50 s
2023-03-17 16:43:20,837 44k INFO Train Epoch: 239 [5%]
2023-03-17 16:43:20,837 44k INFO Losses: [2.5056934356689453, 2.2115530967712402, 8.641222953796387, 19.095346450805664, 0.9283493161201477], step: 205200, lr: 9.706863702387684e-05
2023-03-17 16:45:49,584 44k INFO Train Epoch: 239 [28%]
2023-03-17 16:45:49,584 44k INFO Losses: [2.6127889156341553, 2.179553508758545, 14.09622573852539, 19.081279754638672, 1.1499409675598145], step: 205400, lr: 9.706863702387684e-05
2023-03-17 16:48:18,199 44k INFO Train Epoch: 239 [52%]
2023-03-17 16:48:18,199 44k INFO Losses: [2.3593802452087402, 2.331904888153076, 9.735432624816895, 13.299386978149414, 1.0744471549987793], step: 205600, lr: 9.706863702387684e-05
2023-03-17 16:48:22,239 44k INFO Saving model and optimizer state at iteration 239 to ./logs\44k\G_205600.pth
2023-03-17 16:48:27,769 44k INFO Saving model and optimizer state at iteration 239 to ./logs\44k\D_205600.pth
2023-03-17 16:50:59,469 44k INFO Train Epoch: 239 [75%]
2023-03-17 16:50:59,470 44k INFO Losses: [2.418717622756958, 2.35750150680542, 9.56441879272461, 14.481440544128418, 0.9200985431671143], step: 205800, lr: 9.706863702387684e-05
2023-03-17 16:53:28,036 44k INFO Train Epoch: 239 [98%]
2023-03-17 16:53:28,037 44k INFO Losses: [2.335541009902954, 2.36275053024292, 12.873456954956055, 20.815841674804688, 1.4445617198944092], step: 206000, lr: 9.706863702387684e-05
2023-03-17 16:53:41,085 44k INFO ====> Epoch: 239, cost 665.61 s
2023-03-17 16:56:08,895 44k INFO Train Epoch: 240 [21%]
2023-03-17 16:56:08,895 44k INFO Losses: [2.5226449966430664, 2.1738381385803223, 7.629820346832275, 15.307448387145996, 1.169005036354065], step: 206200, lr: 9.705650344424885e-05
2023-03-17 16:58:37,317 44k INFO Train Epoch: 240 [44%]
2023-03-17 16:58:37,318 44k INFO Losses: [2.6207222938537598, 2.1511855125427246, 10.258373260498047, 19.412805557250977, 1.189267635345459], step: 206400, lr: 9.705650344424885e-05
2023-03-17 16:58:41,381 44k INFO Saving model and optimizer state at iteration 240 to ./logs\44k\G_206400.pth
2023-03-17 16:58:46,815 44k INFO Saving model and optimizer state at iteration 240 to ./logs\44k\D_206400.pth
2023-03-17 17:01:18,659 44k INFO Train Epoch: 240 [68%]
2023-03-17 17:01:18,659 44k INFO Losses: [2.8246371746063232, 2.1030972003936768, 8.30989933013916, 15.512967109680176, 0.6903784871101379], step: 206600, lr: 9.705650344424885e-05
2023-03-17 17:03:47,248 44k INFO Train Epoch: 240 [91%]
2023-03-17 17:03:47,248 44k INFO Losses: [2.5724680423736572, 2.0924079418182373, 10.980094909667969, 17.45736312866211, 1.1545689105987549], step: 206800, lr: 9.705650344424885e-05
2023-03-17 17:04:46,279 44k INFO ====> Epoch: 240, cost 665.19 s
2023-03-17 17:06:28,164 44k INFO Train Epoch: 241 [14%]
2023-03-17 17:06:28,165 44k INFO Losses: [2.58249568939209, 2.1056160926818848, 9.455649375915527, 17.980022430419922, 1.191334843635559], step: 207000, lr: 9.704437138131832e-05
2023-03-17 17:08:56,776 44k INFO Train Epoch: 241 [37%]
2023-03-17 17:08:56,777 44k INFO Losses: [2.5438454151153564, 2.230915069580078, 9.839457511901855, 16.814075469970703, 0.9965714812278748], step: 207200, lr: 9.704437138131832e-05
2023-03-17 17:09:00,979 44k INFO Saving model and optimizer state at iteration 241 to ./logs\44k\G_207200.pth
2023-03-17 17:09:06,443 44k INFO Saving model and optimizer state at iteration 241 to ./logs\44k\D_207200.pth
2023-03-17 17:11:38,180 44k INFO Train Epoch: 241 [60%]
2023-03-17 17:11:38,181 44k INFO Losses: [2.8173441886901855, 1.9954330921173096, 6.4055562019348145, 12.972664833068848, 1.3563295602798462], step: 207400, lr: 9.704437138131832e-05
2023-03-17 17:14:07,252 44k INFO Train Epoch: 241 [84%]
2023-03-17 17:14:07,253 44k INFO Losses: [2.287961483001709, 2.56512451171875, 10.456830024719238, 15.428139686584473, 0.6395201683044434], step: 207600, lr: 9.704437138131832e-05
2023-03-17 17:15:52,218 44k INFO ====> Epoch: 241, cost 665.94 s
2023-03-17 17:16:48,080 44k INFO Train Epoch: 242 [7%]
2023-03-17 17:16:48,080 44k INFO Losses: [2.427669048309326, 2.336991310119629, 9.722973823547363, 20.34481430053711, 1.2947756052017212], step: 207800, lr: 9.703224083489565e-05
2023-03-17 17:19:16,827 44k INFO Train Epoch: 242 [30%]
2023-03-17 17:19:16,828 44k INFO Losses: [2.423325300216675, 2.1781022548675537, 7.724421501159668, 16.331951141357422, 0.9470195770263672], step: 208000, lr: 9.703224083489565e-05
2023-03-17 17:19:21,025 44k INFO Saving model and optimizer state at iteration 242 to ./logs\44k\G_208000.pth
2023-03-17 17:19:26,185 44k INFO Saving model and optimizer state at iteration 242 to ./logs\44k\D_208000.pth
2023-03-17 17:21:57,564 44k INFO Train Epoch: 242 [53%]
2023-03-17 17:21:57,565 44k INFO Losses: [2.523815870285034, 2.000518798828125, 10.435490608215332, 18.928640365600586, 1.0011423826217651], step: 208200, lr: 9.703224083489565e-05
2023-03-17 17:24:26,287 44k INFO Train Epoch: 242 [76%]
2023-03-17 17:24:26,287 44k INFO Losses: [2.547152042388916, 2.2050766944885254, 7.882791042327881, 17.46825408935547, 1.1590994596481323], step: 208400, lr: 9.703224083489565e-05
2023-03-17 17:26:54,891 44k INFO Train Epoch: 242 [100%]
2023-03-17 17:26:54,891 44k INFO Losses: [2.266099452972412, 2.3341236114501953, 13.1543550491333, 19.073802947998047, 1.0339959859848022], step: 208600, lr: 9.703224083489565e-05
2023-03-17 17:26:57,578 44k INFO ====> Epoch: 242, cost 665.36 s
2023-03-17 17:29:35,920 44k INFO Train Epoch: 243 [23%]
2023-03-17 17:29:35,920 44k INFO Losses: [2.7374954223632812, 1.9340837001800537, 5.022820472717285, 14.751914978027344, 0.8928235173225403], step: 208800, lr: 9.702011180479129e-05
2023-03-17 17:29:39,986 44k INFO Saving model and optimizer state at iteration 243 to ./logs\44k\G_208800.pth
2023-03-17 17:29:45,600 44k INFO Saving model and optimizer state at iteration 243 to ./logs\44k\D_208800.pth
2023-03-17 17:32:16,892 44k INFO Train Epoch: 243 [46%]
2023-03-17 17:32:16,892 44k INFO Losses: [2.4482176303863525, 1.9838125705718994, 11.300121307373047, 17.00035285949707, 0.6994619965553284], step: 209000, lr: 9.702011180479129e-05
2023-03-17 17:34:45,711 44k INFO Train Epoch: 243 [69%]
2023-03-17 17:34:45,712 44k INFO Losses: [2.276243209838867, 2.6333541870117188, 8.862131118774414, 17.875717163085938, 0.5967745184898376], step: 209200, lr: 9.702011180479129e-05
2023-03-17 17:37:14,418 44k INFO Train Epoch: 243 [92%]
2023-03-17 17:37:14,418 44k INFO Losses: [2.510952949523926, 2.671553134918213, 12.880221366882324, 19.95314598083496, 0.7622604966163635], step: 209400, lr: 9.702011180479129e-05
2023-03-17 17:38:03,082 44k INFO ====> Epoch: 243, cost 665.50 s
2023-03-17 17:39:55,485 44k INFO Train Epoch: 244 [16%]
2023-03-17 17:39:55,486 44k INFO Losses: [2.4701452255249023, 2.238422155380249, 7.619907855987549, 15.274914741516113, 0.9031874537467957], step: 209600, lr: 9.700798429081568e-05
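
The "Losses: [...], step: N, lr: X" records above have a fixed shape, which makes them easy to mine for plotting or sanity checks. A minimal parser sketch; the filename train.log is hypothetical, so point it at wherever this log is stored with one record per line:

    import re

    # Sketch: extract (step, five loss values, lr) from records shaped like
    # "... INFO Losses: [a, b, c, d, e], step: N, lr: X" in this log.
    LOSSES = re.compile(r"Losses: \[([^\]]+)\], step: (\d+), lr: ([0-9.e+-]+)")

    def parse_losses(path="train.log"):  # hypothetical filename
        rows = []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = LOSSES.search(line)
                if m:
                    values = [float(v) for v in m.group(1).split(",")]
                    rows.append((int(m.group(2)), values, float(m.group(3))))
        return rows
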
2023-03-17 17:39:59,483 44k INFO Saving model and optimizer state at iteration 244 to ./logs\44k\G_209600.pth
2023-03-17 17:40:04,997 44k INFO Saving model and optimizer state at iteration 244 to ./logs\44k\D_209600.pth
2023-03-17 17:42:36,511 44k INFO Train Epoch: 244 [39%]
2023-03-17 17:42:36,511 44k INFO Losses: [2.507706642150879, 2.221693754196167, 7.842609882354736, 15.47082233428955, 1.0688772201538086], step: 209800, lr: 9.700798429081568e-05
2023-03-17 17:45:05,071 44k INFO Train Epoch: 244 [62%]
2023-03-17 17:45:05,072 44k INFO Losses: [2.296048402786255, 2.2771358489990234, 14.0059814453125, 19.33745765686035, 0.9447991847991943], step: 210000, lr: 9.700798429081568e-05
2023-03-17 17:47:33,911 44k INFO Train Epoch: 244 [85%]
2023-03-17 17:47:33,911 44k INFO Losses: [2.5018081665039062, 2.070875406265259, 9.429227828979492, 17.94031524658203, 1.0997055768966675], step: 210200, lr: 9.700798429081568e-05
2023-03-17 17:49:08,377 44k INFO ====> Epoch: 244, cost 665.30 s
2023-03-17 17:50:14,637 44k INFO Train Epoch: 245 [8%]
2023-03-17 17:50:14,637 44k INFO Losses: [2.3818631172180176, 2.431035041809082, 8.298633575439453, 13.966656684875488, 1.0808711051940918], step: 210400, lr: 9.699585829277933e-05
2023-03-17 17:50:18,646 44k INFO Saving model and optimizer state at iteration 245 to ./logs\44k\G_210400.pth
2023-03-17 17:50:24,073 44k INFO Saving model and optimizer state at iteration 245 to ./logs\44k\D_210400.pth
2023-03-17 17:52:55,915 44k INFO Train Epoch: 245 [32%]
2023-03-17 17:52:55,915 44k INFO Losses: [2.6066946983337402, 2.1221511363983154, 6.599742889404297, 15.423958778381348, 0.9501236081123352], step: 210600, lr: 9.699585829277933e-05
2023-03-17 17:55:24,475 44k INFO Train Epoch: 245 [55%]
2023-03-17 17:55:24,475 44k INFO Losses: [2.3868541717529297, 2.4323737621307373, 8.560200691223145, 15.432801246643066, 0.9987204670906067], step: 210800, lr: 9.699585829277933e-05
2023-03-17 17:57:53,340 44k INFO Train Epoch: 245 [78%]
2023-03-17 17:57:53,341 44k INFO Losses: [2.379307508468628, 2.3240344524383545, 9.28307819366455, 16.053302764892578, 1.029953122138977], step: 211000, lr: 9.699585829277933e-05
2023-03-17 18:00:13,927 44k INFO ====> Epoch: 245, cost 665.55 s
2023-03-17 18:00:34,186 44k INFO Train Epoch: 246 [1%]
2023-03-17 18:00:34,186 44k INFO Losses: [2.4695162773132324, 2.1580052375793457, 9.301414489746094, 16.992305755615234, 0.9821569323539734], step: 211200, lr: 9.698373381049272e-05
2023-03-17 18:00:38,237 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\G_211200.pth
2023-03-17 18:00:43,375 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\D_211200.pth
2023-03-17 18:03:15,252 44k INFO Train Epoch: 246 [24%]
2023-03-17 18:03:15,253 44k INFO Losses: [2.5139143466949463, 1.8416531085968018, 10.962339401245117, 17.351701736450195, 1.0765135288238525], step: 211400, lr: 9.698373381049272e-05
2023-03-17 18:05:43,837 44k INFO Train Epoch: 246 [48%]
2023-03-17 18:05:43,837 44k INFO Losses: [2.419605016708374, 2.4420852661132812, 11.886148452758789, 19.773958206176758, 1.1696627140045166], step: 211600, lr: 9.698373381049272e-05
2023-03-17 18:08:12,699 44k INFO Train Epoch: 246 [71%]
2023-03-17 18:08:12,699 44k INFO Losses: [2.7021899223327637, 2.0932984352111816, 11.002020835876465, 18.922321319580078, 1.3769209384918213], step: 211800, lr: 9.698373381049272e-05
2023-03-17 18:10:41,388 44k INFO Train Epoch: 246 [94%]
2023-03-17 18:10:41,389 44k INFO Losses: [2.6809566020965576, 2.0962507724761963, 9.016879081726074, 14.09681510925293, 1.0077897310256958], step: 212000, lr: 9.698373381049272e-05
2023-03-17 18:10:45,453 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\G_212000.pth
2023-03-17 18:10:50,919 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\D_212000.pth
2023-03-17 18:11:32,080 44k INFO ====> Epoch: 246, cost 678.15 s
2023-03-17 18:13:34,630 44k INFO Train Epoch: 247 [17%]
2023-03-17 18:13:34,631 44k INFO Losses: [2.3043406009674072, 2.3824198246002197, 11.604597091674805, 17.997526168823242, 1.1003150939941406], step: 212200, lr: 9.69716108437664e-05
2023-03-17 18:16:03,245 44k INFO Train Epoch: 247 [40%]
2023-03-17 18:16:03,245 44k INFO Losses: [2.547475814819336, 2.1928908824920654, 12.722697257995605, 20.382463455200195, 1.037685751914978], step: 212400, lr: 9.69716108437664e-05
2023-03-17 18:18:31,877 44k INFO Train Epoch: 247 [64%]
2023-03-17 18:18:31,878 44k INFO Losses: [2.655686378479004, 2.3901515007019043, 7.564862251281738, 20.416671752929688, 1.2178974151611328], step: 212600, lr: 9.69716108437664e-05
2023-03-17 18:21:00,605 44k INFO Train Epoch: 247 [87%]
2023-03-17 18:21:00,605 44k INFO Losses: [2.579512119293213, 2.4513344764709473, 12.428638458251953, 17.15456771850586, 1.1550939083099365], step: 212800, lr: 9.69716108437664e-05
2023-03-17 18:21:04,488 44k INFO Saving model and optimizer state at iteration 247 to ./logs\44k\G_212800.pth
2023-03-17 18:21:09,932 44k INFO Saving model and optimizer state at iteration 247 to ./logs\44k\D_212800.pth
2023-03-17 18:22:37,108 44k INFO ====> Epoch: 247, cost 665.03 s
2023-03-17 18:23:53,716 44k INFO Train Epoch: 248 [10%]
2023-03-17 18:23:53,716 44k INFO Losses: [2.5445871353149414, 2.1069750785827637, 6.316710472106934, 15.731362342834473, 0.9012361168861389], step: 213000, lr: 9.695948939241093e-05
2023-03-17 18:26:22,487 44k INFO Train Epoch: 248 [33%]
2023-03-17 18:26:22,488 44k INFO Losses: [2.5866920948028564, 2.1157233715057373, 10.094100952148438, 18.878273010253906, 1.0003118515014648], step: 213200, lr: 9.695948939241093e-05
2023-03-17 18:28:51,079 44k INFO Train Epoch: 248 [56%]
2023-03-17 18:28:51,080 44k INFO Losses: [2.3531813621520996, 2.264845371246338, 7.409755706787109, 15.115735054016113, 0.9910041689872742], step: 213400, lr: 9.695948939241093e-05
2023-03-17 18:31:19,951 44k INFO Train Epoch: 248 [80%]
2023-03-17 18:31:19,951 44k INFO Losses: [2.7256290912628174, 1.9394527673721313, 9.030439376831055, 16.496347427368164, 1.02840256690979], step: 213600, lr: 9.695948939241093e-05
2023-03-17 18:31:23,908 44k INFO Saving model and optimizer state at iteration 248 to ./logs\44k\G_213600.pth
2023-03-17 18:31:28,783 44k INFO Saving model and optimizer state at iteration 248 to ./logs\44k\D_213600.pth
2023-03-17 18:33:42,012 44k INFO ====> Epoch: 248, cost 664.90 s
2023-03-17 18:34:12,547 44k INFO Train Epoch: 249 [3%]
2023-03-17 18:34:12,548 44k INFO Losses: [2.4440393447875977, 2.2098476886749268, 10.132878303527832, 18.682300567626953, 1.3624874353408813], step: 213800, lr: 9.694736945623688e-05
2023-03-17 18:36:41,569 44k INFO Train Epoch: 249 [26%]
2023-03-17 18:36:41,570 44k INFO Losses: [2.5526065826416016, 2.201293706893921, 8.122711181640625, 14.770496368408203, 0.9359431862831116], step: 214000, lr: 9.694736945623688e-05
2023-03-17 18:39:10,005 44k INFO Train Epoch: 249 [49%]
2023-03-17 18:39:10,005 44k INFO Losses: [2.6540513038635254, 2.311683416366577, 8.09728717803955, 18.515748977661133, 0.8149881362915039], step: 214200, lr: 9.694736945623688e-05
2023-03-17 18:41:38,838 44k INFO Train Epoch: 249 [72%]
2023-03-17 18:41:38,838 44k INFO Losses: [2.4448344707489014, 2.3112850189208984, 9.360565185546875, 16.446609497070312, 0.9183065295219421], step: 214400, lr: 9.694736945623688e-05
2023-03-17 18:41:42,712 44k INFO Saving model and optimizer state at iteration 249 to ./logs\44k\G_214400.pth
2023-03-17 18:41:47,898 44k INFO Saving model and optimizer state at iteration 249 to ./logs\44k\D_214400.pth
2023-03-17 18:44:19,516 44k INFO Train Epoch: 249 [96%]
2023-03-17 18:44:19,517 44k INFO Losses: [2.376763343811035, 2.511617660522461, 8.77884292602539, 18.024221420288086, 1.164048194885254], step: 214600, lr: 9.694736945623688e-05
2023-03-17 18:44:47,424 44k INFO ====> Epoch: 249, cost 665.41 s
2023-03-17 18:47:00,521 44k INFO Train Epoch: 250 [19%]
2023-03-17 18:47:00,522 44k INFO Losses: [2.7842838764190674, 1.8164045810699463, 10.033393859863281, 16.445518493652344, 1.0169700384140015], step: 214800, lr: 9.693525103505484e-05
2023-03-17 18:49:29,216 44k INFO Train Epoch: 250 [42%]
2023-03-17 18:49:29,216 44k INFO Losses: [2.5511832237243652, 2.1774814128875732, 6.854366779327393, 14.541833877563477, 0.8941159844398499], step: 215000, lr: 9.693525103505484e-05
2023-03-17 18:51:57,879 44k INFO Train Epoch: 250 [65%]
2023-03-17 18:51:57,879 44k INFO Losses: [2.227362632751465, 2.597642421722412, 12.642425537109375, 19.209394454956055, 0.6660289168357849], step: 215200, lr: 9.693525103505484e-05
2023-03-17 18:52:02,054 44k INFO Saving model and optimizer state at iteration 250 to ./logs\44k\G_215200.pth
2023-03-17 18:52:07,035 44k INFO Saving model and optimizer state at iteration 250 to ./logs\44k\D_215200.pth
2023-03-17 18:54:38,555 44k INFO Train Epoch: 250 [88%]
2023-03-17 18:54:38,555 44k INFO Losses: [2.111694097518921, 2.7110676765441895, 7.699777126312256, 10.479747772216797, 1.2236326932907104], step: 215400, lr: 9.693525103505484e-05
2023-03-17 18:55:52,293 44k INFO ====> Epoch: 250, cost 664.87 s
2023-03-17 18:57:19,241 44k INFO Train Epoch: 251 [12%]
2023-03-17 18:57:19,242 44k INFO Losses: [2.4239485263824463, 2.3173563480377197, 9.496387481689453, 18.252941131591797, 1.085935115814209], step: 215600, lr: 9.692313412867544e-05
2023-03-17 18:59:48,031 44k INFO Train Epoch: 251 [35%]
2023-03-17 18:59:48,031 44k INFO Losses: [2.4882400035858154, 2.789170980453491, 14.073586463928223, 22.267608642578125, 1.1383341550827026], step: 215800, lr: 9.692313412867544e-05
2023-03-17 19:02:16,574 44k INFO Train Epoch: 251 [58%]
2023-03-17 19:02:16,575 44k INFO Losses: [2.8006834983825684, 1.9691064357757568, 7.5655035972595215, 15.708795547485352, 0.8920174837112427], step: 216000, lr: 9.692313412867544e-05
2023-03-17 19:02:20,643 44k INFO Saving model and optimizer state at iteration 251 to ./logs\44k\G_216000.pth
2023-03-17 19:02:26,037 44k INFO Saving model and optimizer state at iteration 251 to ./logs\44k\D_216000.pth
2023-03-17 19:04:57,838 44k INFO Train Epoch: 251 [81%]
2023-03-17 19:04:57,839 44k INFO Losses: [2.5597496032714844, 2.310119867324829, 9.339887619018555, 18.025188446044922, 1.0828696489334106], step: 216200, lr: 9.692313412867544e-05
2023-03-17 19:06:57,577 44k INFO ====> Epoch: 251, cost 665.28 s
2023-03-17 19:07:38,472 44k INFO Train Epoch: 252 [4%]
2023-03-17 19:07:38,472 44k INFO Losses: [2.63761568069458, 2.217679262161255, 9.036330223083496, 17.809154510498047, 0.8789425492286682], step: 216400, lr: 9.691101873690936e-05
2023-03-17 19:10:07,336 44k INFO Train Epoch: 252 [28%]
2023-03-17 19:10:07,337 44k INFO Losses: [2.2811362743377686, 2.3322718143463135, 11.797417640686035, 19.384056091308594, 1.006174921989441], step: 216600, lr: 9.691101873690936e-05
2023-03-17 19:12:35,915 44k INFO Train Epoch: 252 [51%]
2023-03-17 19:12:35,916 44k INFO Losses: [2.427903890609741, 2.212338924407959, 9.317178726196289, 17.95871353149414, 0.951920747756958], step: 216800, lr: 9.691101873690936e-05
2023-03-17 19:12:39,931 44k INFO Saving model and optimizer state at iteration 252 to ./logs\44k\G_216800.pth
2023-03-17 19:12:44,903 44k INFO Saving model and optimizer state at iteration 252 to ./logs\44k\D_216800.pth
2023-03-17 19:15:16,661 44k INFO Train Epoch: 252 [74%]
2023-03-17 19:15:16,662 44k INFO Losses: [2.371642589569092, 2.4221906661987305, 11.150038719177246, 18.38361358642578, 0.9036511182785034], step: 217000, lr: 9.691101873690936e-05
2023-03-17 19:17:45,205 44k INFO Train Epoch: 252 [97%]
2023-03-17 19:17:45,206 44k INFO Losses: [2.580371379852295, 2.1819770336151123, 8.336734771728516, 19.77094268798828, 1.117093563079834], step: 217200, lr: 9.691101873690936e-05
2023-03-17 19:18:02,731 44k INFO ====> Epoch: 252, cost 665.15 s
2023-03-17 19:20:26,168 44k INFO Train Epoch: 253 [20%]
2023-03-17 19:20:26,168 44k INFO Losses: [2.5545949935913086, 2.324941635131836, 10.40185832977295, 17.782445907592773, 0.928059458732605], step: 217400, lr: 9.689890485956725e-05
2023-03-17 19:22:54,675 44k INFO Train Epoch: 253 [44%]
2023-03-17 19:22:54,676 44k INFO Losses: [2.2923777103424072, 2.521892786026001, 10.94232177734375, 18.609844207763672, 1.119205355644226], step: 217600, lr: 9.689890485956725e-05
2023-03-17 19:22:58,897 44k INFO Saving model and optimizer state at iteration 253 to ./logs\44k\G_217600.pth
2023-03-17 19:23:03,133 44k INFO Saving model and optimizer state at iteration 253 to ./logs\44k\D_217600.pth
2023-03-17 19:25:35,049 44k INFO Train Epoch: 253 [67%]
2023-03-17 19:25:35,050 44k INFO Losses: [2.66736102104187, 2.238185167312622, 9.254158020019531, 16.942943572998047, 1.2662285566329956], step: 217800, lr: 9.689890485956725e-05
2023-03-17 19:28:03,987 44k INFO Train Epoch: 253 [90%]
2023-03-17 19:28:03,987 44k INFO Losses: [2.6128733158111572, 2.0967979431152344, 7.471611022949219, 14.223177909851074, 1.0701401233673096], step: 218000, lr: 9.689890485956725e-05
2023-03-17 19:29:07,410 44k INFO ====> Epoch: 253, cost 664.68 s
2023-03-17 19:30:44,671 44k INFO Train Epoch: 254 [13%]
2023-03-17 19:30:44,672 44k INFO Losses: [2.5110671520233154, 2.2740025520324707, 6.097906112670898, 16.071006774902344, 1.139433741569519], step: 218200, lr: 9.68867924964598e-05
2023-03-17 19:33:13,578 44k INFO Train Epoch: 254 [36%]
2023-03-17 19:33:13,578 44k INFO Losses: [2.5327086448669434, 2.3518149852752686, 10.405281066894531, 12.996733665466309, 1.0470114946365356], step: 218400, lr: 9.68867924964598e-05
2023-03-17 19:33:17,554 44k INFO Saving model and optimizer state at iteration 254 to ./logs\44k\G_218400.pth
2023-03-17 19:33:22,293 44k INFO Saving model and optimizer state at iteration 254 to ./logs\44k\D_218400.pth
2023-03-17 19:35:53,838 44k INFO Train Epoch: 254 [60%]
2023-03-17 19:35:53,838 44k INFO Losses: [2.397531747817993, 2.227149724960327, 8.591649055480957, 17.540754318237305, 1.39363431930542], step: 218600, lr: 9.68867924964598e-05
2023-03-17 19:38:22,951 44k INFO Train Epoch: 254 [83%]
2023-03-17 19:38:22,952 44k INFO Losses: [2.411314010620117, 2.419491767883301, 8.69364070892334, 16.005477905273438, 1.1677926778793335], step: 218800, lr: 9.68867924964598e-05
2023-03-17 19:40:12,315 44k INFO ====> Epoch: 254, cost 664.91 s
2023-03-17 19:41:03,666 44k INFO Train Epoch: 255 [6%]
2023-03-17 19:41:03,666 44k INFO Losses: [2.5307345390319824, 2.112950086593628, 10.488541603088379, 19.24176597595215, 1.0753731727600098], step: 219000, lr: 9.687468164739773e-05
2023-03-17 19:43:32,473 44k INFO Train Epoch: 255 [29%]
2023-03-17 19:43:32,473 44k INFO Losses: [2.727084159851074, 2.1202125549316406, 6.506443500518799, 12.522029876708984, 0.9287866353988647], step: 219200, lr: 9.687468164739773e-05
2023-03-17 19:43:36,422 44k INFO Saving model and optimizer state at iteration 255 to ./logs\44k\G_219200.pth
2023-03-17 19:43:41,094 44k INFO Saving model and optimizer state at iteration 255 to ./logs\44k\D_219200.pth
2023-03-17 19:46:12,390 44k INFO Train Epoch: 255 [52%]
2023-03-17 19:46:12,391 44k INFO Losses: [2.512345314025879, 2.4369640350341797, 7.879225730895996, 16.276880264282227, 1.074000597000122], step: 219400, lr: 9.687468164739773e-05
2023-03-17 19:48:41,568 44k INFO Train Epoch: 255 [76%]
2023-03-17 19:48:41,568 44k INFO Losses: [2.7767398357391357, 2.2276532649993896, 7.905902862548828, 15.106683731079102, 0.7996993660926819], step: 219600, lr: 9.687468164739773e-05
2023-03-17 19:51:10,339 44k INFO Train Epoch: 255 [99%]
2023-03-17 19:51:10,340 44k INFO Losses: [2.6587071418762207, 2.2464632987976074, 7.905294895172119, 15.644745826721191, 0.589992105960846], step: 219800, lr: 9.687468164739773e-05
2023-03-17 19:51:17,417 44k INFO ====> Epoch: 255, cost 665.10 s
2023-03-17 19:53:51,108 44k INFO Train Epoch: 256 [22%]
2023-03-17 19:53:51,108 44k INFO Losses: [2.694847583770752, 2.0906763076782227, 8.436903953552246, 13.724844932556152, 1.343156099319458], step: 220000, lr: 9.68625723121918e-05
2023-03-17 19:53:55,277 44k INFO Saving model and optimizer state at iteration 256 to ./logs\44k\G_220000.pth
2023-03-17 19:53:59,375 44k INFO Saving model and optimizer state at iteration 256 to ./logs\44k\D_220000.pth
2023-03-17 19:56:30,957 44k INFO Train Epoch: 256 [45%]
2023-03-17 19:56:30,958 44k INFO Losses: [2.720512866973877, 2.004629135131836, 10.482501029968262, 16.165878295898438, 1.2343673706054688], step: 220200, lr: 9.68625723121918e-05
2023-03-17 19:58:59,846 44k INFO Train Epoch: 256 [68%]
2023-03-17 19:58:59,847 44k INFO Losses: [2.3542098999023438, 2.2753427028656006, 9.820260047912598, 18.580533981323242, 0.8565078377723694], step: 220400, lr: 9.68625723121918e-05
2023-03-17 20:01:28,474 44k INFO Train Epoch: 256 [92%]
2023-03-17 20:01:28,475 44k INFO Losses: [2.6340622901916504, 2.111405372619629, 8.459310531616211, 17.651168823242188, 0.9421473741531372], step: 220600, lr: 9.68625723121918e-05
2023-03-17 20:02:21,693 44k INFO ====> Epoch: 256, cost 664.28 s
2023-03-17 20:04:09,354 44k INFO Train Epoch: 257 [15%]
2023-03-17 20:04:09,354 44k INFO Losses: [2.400045394897461, 2.5488388538360596, 9.583595275878906, 15.110175132751465, 1.0275472402572632], step: 220800, lr: 9.685046449065278e-05
2023-03-17 20:04:13,436 44k INFO Saving model and optimizer state at iteration 257 to ./logs\44k\G_220800.pth
2023-03-17 20:04:18,137 44k INFO Saving model and optimizer state at iteration 257 to ./logs\44k\D_220800.pth
2023-03-17 20:06:49,716 44k INFO Train Epoch: 257 [38%]
2023-03-17 20:06:49,717 44k INFO Losses: [2.4021542072296143, 2.722050189971924, 10.092765808105469, 14.828892707824707, 0.911776065826416], step: 221000, lr: 9.685046449065278e-05
2023-03-17 20:09:18,474 44k INFO Train Epoch: 257 [61%]
2023-03-17 20:09:18,475 44k INFO Losses: [2.5938100814819336, 2.0840301513671875, 7.432872295379639, 14.266127586364746, 1.2212316989898682], step: 221200, lr: 9.685046449065278e-05
2023-03-17 20:11:47,362 44k INFO Train Epoch: 257 [84%]
2023-03-17 20:11:47,363 44k INFO Losses: [2.584453582763672, 2.1264610290527344, 10.824204444885254, 18.14215087890625, 1.3122103214263916], step: 221400, lr: 9.685046449065278e-05
2023-03-17 20:13:26,343 44k INFO ====> Epoch: 257, cost 664.65 s
2023-03-17 20:14:27,962 44k INFO Train Epoch: 258 [8%]
2023-03-17 20:14:27,962 44k INFO Losses: [2.774747133255005, 2.3495070934295654, 11.169397354125977, 17.981096267700195, 1.1110442876815796], step: 221600, lr: 9.683835818259144e-05
2023-03-17 20:14:32,062 44k INFO Saving model and optimizer state at iteration 258 to ./logs\44k\G_221600.pth
2023-03-17 20:14:35,934 44k INFO Saving model and optimizer state at iteration 258 to ./logs\44k\D_221600.pth
2023-03-17 20:17:07,529 44k INFO Train Epoch: 258 [31%]
2023-03-17 20:17:07,530 44k INFO Losses: [2.781733989715576, 1.967185139656067, 11.113765716552734, 20.68283462524414, 1.0918433666229248], step: 221800, lr: 9.683835818259144e-05
2023-03-17 20:19:35,968 44k INFO Train Epoch: 258 [54%]
2023-03-17 20:19:35,968 44k INFO Losses: [2.502654790878296, 2.1860857009887695, 9.920573234558105, 16.471521377563477, 1.1380540132522583], step: 222000, lr: 9.683835818259144e-05
2023-03-17 20:22:04,921 44k INFO Train Epoch: 258 [77%]
2023-03-17 20:22:04,921 44k INFO Losses: [2.430751323699951, 2.2706761360168457, 9.613455772399902, 15.780194282531738, 0.5820989012718201], step: 222200, lr: 9.683835818259144e-05
2023-03-17 20:24:29,984 44k INFO ====> Epoch: 258, cost 663.64 s
2023-03-17 20:24:45,793 44k INFO Train Epoch: 259 [0%]
2023-03-17 20:24:45,794 44k INFO Losses: [2.409038543701172, 2.264101505279541, 9.922218322753906, 15.593085289001465, 0.7849195599555969], step: 222400, lr: 9.68262533878186e-05
2023-03-17 20:24:49,842 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\G_222400.pth
2023-03-17 20:24:54,597 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\D_222400.pth
2023-03-17 20:27:26,301 44k INFO Train Epoch: 259 [24%]
2023-03-17 20:27:26,301 44k INFO Losses: [2.4954745769500732, 2.3976314067840576, 10.448341369628906, 18.134727478027344, 1.048158884048462], step: 222600, lr: 9.68262533878186e-05
2023-03-17 20:29:55,017 44k INFO Train Epoch: 259 [47%]
2023-03-17 20:29:55,017 44k INFO Losses: [2.7616665363311768, 2.4322383403778076, 8.76364803314209, 18.174236297607422, 1.0275840759277344], step: 222800, lr: 9.68262533878186e-05
2023-03-17 20:32:23,806 44k INFO Train Epoch: 259 [70%]
2023-03-17 20:32:23,807 44k INFO Losses: [2.5142550468444824, 2.18548321723938, 9.374931335449219, 19.225500106811523, 1.1535069942474365], step: 223000, lr: 9.68262533878186e-05
2023-03-17 20:34:52,370 44k INFO Train Epoch: 259 [93%]
2023-03-17 20:34:52,370 44k INFO Losses: [2.5413713455200195, 2.396261215209961, 9.934036254882812, 19.39922332763672, 1.1556220054626465], step: 223200, lr: 9.68262533878186e-05
2023-03-17 20:34:56,550 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\G_223200.pth
2023-03-17 20:35:01,314 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\D_223200.pth
2023-03-17 20:35:47,180 44k INFO ====> Epoch: 259, cost 677.19 s
2023-03-17 20:37:45,467 44k INFO Train Epoch: 260 [16%]
2023-03-17 20:37:45,468 44k INFO Losses: [2.46391224861145, 2.5170953273773193, 11.194847106933594, 19.65937042236328, 0.8643940091133118], step: 223400, lr: 9.681415010614512e-05
2023-03-17 20:40:14,125 44k INFO Train Epoch: 260 [40%]
2023-03-17 20:40:14,126 44k INFO Losses: [2.554598569869995, 2.1136550903320312, 10.381006240844727, 17.137250900268555, 0.993293046951294], step: 223600, lr: 9.681415010614512e-05
2023-03-17 20:42:42,892 44k INFO Train Epoch: 260 [63%]
2023-03-17 20:42:42,893 44k INFO Losses: [2.464993715286255, 2.4390177726745605, 11.6216459274292, 18.89177131652832, 1.1174051761627197], step: 223800, lr: 9.681415010614512e-05
2023-03-17 20:45:11,656 44k INFO Train Epoch: 260 [86%]
2023-03-17 20:45:11,657 44k INFO Losses: [2.590552806854248, 2.471965789794922, 10.62920093536377, 20.06413459777832, 0.9648613929748535], step: 224000, lr: 9.681415010614512e-05
2023-03-17 20:45:15,647 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\G_224000.pth
2023-03-17 20:45:20,056 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\D_224000.pth
2023-03-17 20:46:51,720 44k INFO ====> Epoch: 260, cost 664.54 s
2023-03-17 20:48:03,672 44k INFO Train Epoch: 261 [9%]
2023-03-17 20:48:03,673 44k INFO Losses: [2.4034621715545654, 2.324824810028076, 10.621574401855469, 17.508193969726562, 0.9797905087471008], step: 224200, lr: 9.680204833738185e-05
2023-03-17 20:50:32,329 44k INFO Train Epoch: 261 [32%]
2023-03-17 20:50:32,330 44k INFO Losses: [2.6825544834136963, 1.9647705554962158, 8.411619186401367, 12.953214645385742, 1.0208605527877808], step: 224400, lr: 9.680204833738185e-05
2023-03-17 20:53:00,902 44k INFO Train Epoch: 261 [56%]
2023-03-17 20:53:00,903 44k INFO Losses: [2.5380868911743164, 2.1644158363342285, 7.361418724060059, 15.692902565002441, 0.8131267428398132], step: 224600, lr: 9.680204833738185e-05
2023-03-17 20:55:29,646 44k INFO Train Epoch: 261 [79%]
2023-03-17 20:55:29,647 44k INFO Losses: [2.54449462890625, 2.043187379837036, 7.359434127807617, 15.034453392028809, 0.9357821941375732], step: 224800, lr: 9.680204833738185e-05
2023-03-17 20:55:33,827 44k INFO Saving model and optimizer state at iteration 261 to ./logs\44k\G_224800.pth
2023-03-17 20:55:38,709 44k INFO Saving model and optimizer state at iteration 261 to ./logs\44k\D_224800.pth
2023-03-17 20:57:56,157 44k INFO ====> Epoch: 261, cost 664.44 s
2023-03-17 20:58:22,292 44k INFO Train Epoch: 262 [2%]
2023-03-17 20:58:22,292 44k INFO Losses: [2.3918704986572266, 2.4293487071990967, 9.977806091308594, 15.36561393737793, 1.0191128253936768], step: 225000, lr: 9.678994808133967e-05
2023-03-17 21:00:51,047 44k INFO Train Epoch: 262 [25%]
2023-03-17 21:00:51,047 44k INFO Losses: [2.6000428199768066, 2.2593159675598145, 7.594107627868652, 18.313201904296875, 1.0730328559875488], step: 225200, lr: 9.678994808133967e-05
2023-03-17 21:03:19,568 44k INFO Train Epoch: 262 [48%]
2023-03-17 21:03:19,568 44k INFO Losses: [2.573227882385254, 2.2914576530456543, 5.6243438720703125, 12.04599380493164, 1.236507773399353], step: 225400, lr: 9.678994808133967e-05
2023-03-17 21:05:48,269 44k INFO Train Epoch: 262 [72%]
2023-03-17 21:05:48,269 44k INFO Losses: [2.52951717376709, 2.3725714683532715, 8.184581756591797, 16.627742767333984, 1.0730905532836914], step: 225600, lr: 9.678994808133967e-05
2023-03-17 21:05:52,431 44k INFO Saving model and optimizer state at iteration 262 to ./logs\44k\G_225600.pth
2023-03-17 21:05:57,039 44k INFO Saving model and optimizer state at iteration 262 to ./logs\44k\D_225600.pth
2023-03-17 21:08:28,407 44k INFO Train Epoch: 262 [95%]
2023-03-17 21:08:28,408 44k INFO Losses: [2.5211029052734375, 2.2302324771881104, 9.241366386413574, 18.38920783996582, 0.8859434723854065], step: 225800, lr: 9.678994808133967e-05
2023-03-17 21:09:00,789 44k INFO ====> Epoch: 262, cost 664.63 s
2023-03-17 21:11:09,123 44k INFO Train Epoch: 263 [18%]
2023-03-17 21:11:09,124 44k INFO Losses: [2.5973148345947266, 2.2963790893554688, 12.68445873260498, 19.53127098083496, 1.0525000095367432], step: 226000, lr: 9.67778493378295e-05
2023-03-17 21:13:37,642 44k INFO Train Epoch: 263 [41%]
2023-03-17 21:13:37,642 44k INFO Losses: [2.6036102771759033, 2.3279170989990234, 9.552419662475586, 15.999889373779297, 1.2597017288208008], step: 226200, lr: 9.67778493378295e-05
2023-03-17 21:16:06,068 44k INFO Train Epoch: 263 [65%]
2023-03-17 21:16:06,068 44k INFO Losses: [2.553645133972168, 2.276315927505493, 9.725811004638672, 15.316329002380371, 0.8707449436187744], step: 226400, lr: 9.67778493378295e-05
2023-03-17 21:16:10,183 44k INFO Saving model and optimizer state at iteration 263 to ./logs\44k\G_226400.pth
2023-03-17 21:16:14,431 44k INFO Saving model and optimizer state at iteration 263 to ./logs\44k\D_226400.pth
2023-03-17 21:18:46,110 44k INFO Train Epoch: 263 [88%]
2023-03-17 21:18:46,111 44k INFO Losses: [2.622520685195923, 2.2267491817474365, 13.247958183288574, 20.989274978637695, 0.8936261534690857], step: 226600, lr: 9.67778493378295e-05
2023-03-17 21:20:04,229 44k INFO ====> Epoch: 263, cost 663.44 s
2023-03-17 21:21:26,616 44k INFO Train Epoch: 264 [11%]
2023-03-17 21:21:26,616 44k INFO Losses: [2.4889283180236816, 2.389678955078125, 8.783621788024902, 17.644044876098633, 1.0876904726028442], step: 226800, lr: 9.676575210666227e-05
2023-03-17 21:23:55,248 44k INFO Train Epoch: 264 [34%]
2023-03-17 21:23:55,249 44k INFO Losses: [2.639899730682373, 2.1363914012908936, 6.854709625244141, 12.588411331176758, 0.9268391728401184], step: 227000, lr: 9.676575210666227e-05
2023-03-17 21:26:23,818 44k INFO Train Epoch: 264 [57%]
2023-03-17 21:26:23,818 44k INFO Losses: [2.794687271118164, 1.9363850355148315, 8.000356674194336, 17.002792358398438, 1.2452175617218018], step: 227200, lr: 9.676575210666227e-05
2023-03-17 21:26:27,876 44k INFO Saving model and optimizer state at iteration 264 to ./logs\44k\G_227200.pth
2023-03-17 21:26:31,837 44k INFO Saving model and optimizer state at iteration 264 to ./logs\44k\D_227200.pth
2023-03-17 21:29:42,602 44k INFO Train Epoch: 264 [81%]
2023-03-17 21:29:42,603 44k INFO Losses: [2.6195077896118164, 2.288182258605957, 6.474330902099609, 16.591398239135742, 0.9210454225540161], step: 227400, lr: 9.676575210666227e-05
2023-03-17 21:32:43,525 44k INFO ====> Epoch: 264, cost 759.30 s
2023-03-17 21:33:31,708 44k INFO Train Epoch: 265 [4%]
2023-03-17 21:33:31,709 44k INFO Losses: [2.5876357555389404, 2.249162197113037, 8.150452613830566, 14.337936401367188, 1.072008490562439], step: 227600, lr: 9.675365638764893e-05
2023-03-17 21:37:09,957 44k INFO Train Epoch: 265 [27%]
2023-03-17 21:37:09,958 44k INFO Losses: [2.5649352073669434, 1.8784890174865723, 8.434042930603027, 16.665578842163086, 1.3120524883270264], step: 227800, lr: 9.675365638764893e-05
2023-03-17 21:41:23,832 44k INFO Train Epoch: 265 [50%]
2023-03-17 21:41:23,832 44k INFO Losses: [2.4197001457214355, 2.1895041465759277, 10.030590057373047, 15.384061813354492, 1.0790989398956299], step: 228000, lr: 9.675365638764893e-05
2023-03-17 21:41:28,741 44k INFO Saving model and optimizer state at iteration 265 to ./logs\44k\G_228000.pth
2023-03-17 21:41:33,635 44k INFO Saving model and optimizer state at iteration 265 to ./logs\44k\D_228000.pth
2023-03-17 21:45:13,546 44k INFO Train Epoch: 265 [73%]
2023-03-17 21:45:13,547 44k INFO Losses: [2.6910886764526367, 2.1508283615112305, 11.868209838867188, 18.869173049926758, 1.1882718801498413], step: 228200, lr: 9.675365638764893e-05
2023-03-17 21:48:49,957 44k INFO Train Epoch: 265 [97%]
2023-03-17 21:48:49,958 44k INFO Losses: [2.4887924194335938, 2.29697847366333, 8.725053787231445, 16.372045516967773, 1.264711856842041], step: 228400, lr: 9.675365638764893e-05
2023-03-17 21:49:21,457 44k INFO ====> Epoch: 265, cost 997.93 s
2023-03-17 21:52:39,242 44k INFO Train Epoch: 266 [20%]
2023-03-17 21:52:39,243 44k INFO Losses: [2.5074782371520996, 2.051872730255127, 9.115462303161621, 18.673646926879883, 0.9522430896759033], step: 228600, lr: 9.674156218060047e-05
2023-03-17 21:55:26,082 44k INFO Train Epoch: 266 [43%]
2023-03-17 21:55:26,082 44k INFO Losses: [2.5140020847320557, 2.488682746887207, 7.406935691833496, 14.004145622253418, 1.1744632720947266], step: 228800, lr: 9.674156218060047e-05
2023-03-17 21:55:30,304 44k INFO Saving model and optimizer state at iteration 266 to ./logs\44k\G_228800.pth
2023-03-17 21:55:34,732 44k INFO Saving model and optimizer state at iteration 266 to ./logs\44k\D_228800.pth
2023-03-17 21:58:13,632 44k INFO Train Epoch: 266 [66%]
2023-03-17 21:58:13,633 44k INFO Losses: [2.4958739280700684, 2.4104509353637695, 7.120170593261719, 14.954367637634277, 1.0886118412017822], step: 229000, lr: 9.674156218060047e-05
2023-03-17 22:00:42,053 44k INFO Train Epoch: 266 [89%]
2023-03-17 22:00:42,053 44k INFO Losses: [2.6394057273864746, 2.177856683731079, 8.912389755249023, 16.240968704223633, 0.7674042582511902], step: 229200, lr: 9.674156218060047e-05
2023-03-17 22:01:49,830 44k INFO ====> Epoch: 266, cost 748.37 s
2023-03-17 22:03:22,662 44k INFO Train Epoch: 267 [13%]
2023-03-17 22:03:22,663 44k INFO Losses: [2.6287999153137207, 2.222546339035034, 7.179944038391113, 16.887020111083984, 0.8950282335281372], step: 229400, lr: 9.67294694853279e-05
2023-03-17 22:05:50,696 44k INFO Train Epoch: 267 [36%]
2023-03-17 22:05:50,696 44k INFO Losses: [2.6165153980255127, 2.115004062652588, 8.355877876281738, 17.448650360107422, 0.5136034488677979], step: 229600, lr: 9.67294694853279e-05
2023-03-17 22:05:54,932 44k INFO Saving model and optimizer state at iteration 267 to ./logs\44k\G_229600.pth
2023-03-17 22:05:59,879 44k INFO Saving model and optimizer state at iteration 267 to ./logs\44k\D_229600.pth
2023-03-17 22:08:31,188 44k INFO Train Epoch: 267 [59%]
2023-03-17 22:08:31,188 44k INFO Losses: [2.791266441345215, 2.0954222679138184, 8.415900230407715, 13.660215377807617, 1.1469316482543945], step: 229800, lr: 9.67294694853279e-05
2023-03-17 22:10:59,893 44k INFO Train Epoch: 267 [82%]
2023-03-17 22:10:59,894 44k INFO Losses: [2.533937692642212, 2.06038236618042, 9.110525131225586, 12.792787551879883, 1.100142240524292], step: 230000, lr: 9.67294694853279e-05
2023-03-17 22:12:53,446 44k INFO ====> Epoch: 267, cost 663.62 s
2023-03-17 22:13:39,979 44k INFO Train Epoch: 268 [5%]
2023-03-17 22:13:39,979 44k INFO Losses: [2.475224733352661, 2.2741947174072266, 9.81482982635498, 18.708608627319336, 0.9731842279434204], step: 230200, lr: 9.671737830164223e-05
2023-03-17 22:16:08,543 44k INFO Train Epoch: 268 [29%]
2023-03-17 22:16:08,543 44k INFO Losses: [2.5475518703460693, 2.323190212249756, 7.866419792175293, 17.060890197753906, 0.6118853688240051], step: 230400, lr: 9.671737830164223e-05
2023-03-17 22:16:12,708 44k INFO Saving model and optimizer state at iteration 268 to ./logs\44k\G_230400.pth
2023-03-17 22:16:17,457 44k INFO Saving model and optimizer state at iteration 268 to ./logs\44k\D_230400.pth
2023-03-17 22:18:48,564 44k INFO Train Epoch: 268 [52%]
2023-03-17 22:18:48,565 44k INFO Losses: [2.5554583072662354, 2.0851550102233887, 9.58615779876709, 16.703725814819336, 0.8617737293243408], step: 230600, lr: 9.671737830164223e-05
2023-03-17 22:21:17,290 44k INFO Train Epoch: 268 [75%]
2023-03-17 22:21:17,290 44k INFO Losses: [2.4197189807891846, 2.274223566055298, 8.855903625488281, 17.483070373535156, 1.3833073377609253], step: 230800, lr: 9.671737830164223e-05
2023-03-17 22:23:45,844 44k INFO Train Epoch: 268 [98%]
2023-03-17 22:23:45,844 44k INFO Losses: [2.594696521759033, 2.2671005725860596, 9.133495330810547, 18.057565689086914, 0.8475462198257446], step: 231000, lr: 9.671737830164223e-05
2023-03-17 22:23:57,415 44k INFO ====> Epoch: 268, cost 663.97 s
2023-03-17 22:26:26,233 44k INFO Train Epoch: 269 [21%]
2023-03-17 22:26:26,233 44k INFO Losses: [2.4539995193481445, 2.0929765701293945, 11.651734352111816, 19.366907119750977, 1.2506920099258423], step: 231200, lr: 9.670528862935451e-05
2023-03-17 22:26:30,269 44k INFO Saving model and optimizer state at iteration 269 to ./logs\44k\G_231200.pth
2023-03-17 22:26:34,225 44k INFO Saving model and optimizer state at iteration 269 to ./logs\44k\D_231200.pth
2023-03-17 22:29:04,640 44k INFO Train Epoch: 269 [45%]
2023-03-17 22:29:04,640 44k INFO Losses: [2.5699570178985596, 2.525722026824951, 9.038848876953125, 16.947683334350586, 1.1084723472595215], step: 231400, lr: 9.670528862935451e-05
2023-03-17 22:31:32,628 44k INFO Train Epoch: 269 [68%]
2023-03-17 22:31:32,629 44k INFO Losses: [2.3976283073425293, 2.491459369659424, 10.207386016845703, 20.09332275390625, 1.0352375507354736], step: 231600, lr: 9.670528862935451e-05
2023-03-17 22:34:00,403 44k INFO Train Epoch: 269 [91%]
2023-03-17 22:34:00,403 44k INFO Losses: [2.5986404418945312, 1.991756558418274, 7.863900184631348, 18.13053321838379, 0.6969850659370422], step: 231800, lr: 9.670528862935451e-05
2023-03-17 22:34:57,642 44k INFO ====> Epoch: 269, cost 660.23 s
2023-03-17 22:36:40,104 44k INFO Train Epoch: 270 [14%]
2023-03-17 22:36:40,105 44k INFO Losses: [2.6328208446502686, 2.1661016941070557, 11.088205337524414, 20.3510799407959, 1.3005845546722412], step: 232000, lr: 9.669320046827584e-05
2023-03-17 22:36:43,986 44k INFO Saving model and optimizer state at iteration 270 to ./logs\44k\G_232000.pth
2023-03-17 22:36:47,984 44k INFO Saving model and optimizer state at iteration 270 to ./logs\44k\D_232000.pth
2023-03-17 22:39:18,718 44k INFO Train Epoch: 270 [37%]
2023-03-17 22:39:18,718 44k INFO Losses: [2.2481582164764404, 2.2850453853607178, 14.425029754638672, 19.719783782958984, 1.032261848449707], step: 232200, lr: 9.669320046827584e-05
2023-03-17 22:41:46,478 44k INFO Train Epoch: 270 [61%]
2023-03-17 22:41:46,479 44k INFO Losses: [2.403054714202881, 2.0611650943756104, 7.394612789154053, 14.338013648986816, 1.0150249004364014], step: 232400, lr: 9.669320046827584e-05
2023-03-17 22:44:17,452 44k INFO Train Epoch: 270 [84%]
2023-03-17 22:44:17,452 44k INFO Losses: [2.4982309341430664, 2.5596816539764404, 7.616784572601318, 13.388416290283203, 0.9093899130821228], step: 232600, lr: 9.669320046827584e-05
2023-03-17 22:46:02,931 44k INFO ====> Epoch: 270, cost 665.29 s
2023-03-17 22:46:59,890 44k INFO Train Epoch: 271 [7%]
2023-03-17 22:46:59,891 44k INFO Losses: [2.568209648132324, 2.1637065410614014, 10.847233772277832, 19.981050491333008, 1.3960648775100708], step: 232800, lr: 9.668111381821731e-05
2023-03-17 22:47:04,069 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\G_232800.pth
2023-03-17 22:47:08,228 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\D_232800.pth
2023-03-17 22:49:39,781 44k INFO Train Epoch: 271 [30%]
2023-03-17 22:49:39,781 44k INFO Losses: [2.4130280017852783, 2.671171188354492, 13.774401664733887, 19.366914749145508, 1.0080703496932983], step: 233000, lr: 9.668111381821731e-05
2023-03-17 22:52:08,087 44k INFO Train Epoch: 271 [53%]
2023-03-17 22:52:08,088 44k INFO Losses: [2.4431843757629395, 2.3087289333343506, 10.377523422241211, 16.614233016967773, 1.3562787771224976], step: 233200, lr: 9.668111381821731e-05
2023-03-17 22:54:51,046 44k INFO Train Epoch: 271 [77%]
2023-03-17 22:54:51,047 44k INFO Losses: [2.72518253326416, 1.972596526145935, 9.214388847351074, 17.40682029724121, 1.0168976783752441], step: 233400, lr: 9.668111381821731e-05
2023-03-17 22:57:25,925 44k INFO Train Epoch: 271 [100%]
2023-03-17 22:57:25,926 44k INFO Losses: [2.5054638385772705, 2.271117925643921, 9.035709381103516, 18.824216842651367, 0.7908955812454224], step: 233600, lr: 9.668111381821731e-05
2023-03-17 22:57:30,004 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\G_233600.pth
2023-03-17 22:57:33,973 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\D_233600.pth
2023-03-17 22:57:38,103 44k INFO ====> Epoch: 271, cost 695.17 s
2023-03-17 23:00:16,854 44k INFO Train Epoch: 272 [23%]
2023-03-17 23:00:16,855 44k INFO Losses: [2.5773770809173584, 2.0896551609039307, 7.549198627471924, 18.260549545288086, 1.1322238445281982], step: 233800, lr: 9.666902867899003e-05
2023-03-17 23:02:44,567 44k INFO Train Epoch: 272 [46%]
2023-03-17 23:02:44,568 44k INFO Losses: [2.666886806488037, 1.931557297706604, 9.940131187438965, 15.786439895629883, 0.9981145858764648], step: 234000, lr: 9.666902867899003e-05
2023-03-17 23:05:12,423 44k INFO Train Epoch: 272 [69%]
2023-03-17 23:05:12,423 44k INFO Losses: [2.475759983062744, 2.374624252319336, 7.411770820617676, 13.981178283691406, 1.1824582815170288], step: 234200, lr: 9.666902867899003e-05
2023-03-17 23:07:40,038 44k INFO Train Epoch: 272 [93%]
2023-03-17 23:07:40,038 44k INFO Losses: [2.6999454498291016, 2.3451085090637207, 7.2498273849487305, 15.31335735321045, 0.8130855560302734], step: 234400, lr: 9.666902867899003e-05
2023-03-17 23:07:44,005 44k INFO Saving model and optimizer state at iteration 272 to ./logs\44k\G_234400.pth
2023-03-17 23:07:48,067 44k INFO Saving model and optimizer state at iteration 272 to ./logs\44k\D_234400.pth
2023-03-17 23:08:37,893 44k INFO ====> Epoch: 272, cost 659.79 s
2023-03-17 23:10:30,813 44k INFO Train Epoch: 273 [16%]
2023-03-17 23:10:30,814 44k INFO Losses: [2.56567645072937, 2.2657155990600586, 8.286746978759766, 15.275848388671875, 1.0112247467041016], step: 234600, lr: 9.665694505040515e-05
2023-03-17 23:12:58,542 44k INFO Train Epoch: 273 [39%]
2023-03-17 23:12:58,542 44k INFO Losses: [2.717872142791748, 1.89005446434021, 7.565140724182129, 17.2933292388916, 1.2266182899475098], step: 234800, lr: 9.665694505040515e-05
2023-03-17 23:15:26,278 44k INFO Train Epoch: 273 [62%]
2023-03-17 23:15:26,278 44k INFO Losses: [2.430832862854004, 2.1809263229370117, 10.814997673034668, 19.36716651916504, 0.990162193775177], step: 235000, lr: 9.665694505040515e-05
2023-03-17 23:17:54,099 44k INFO Train Epoch: 273 [85%]
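
A caution for reading the five-element loss vectors above: VITS-style trainers of this kind conventionally log them as [discriminator, generator, feature matching, mel reconstruction, KL], but that ordering is an assumption here, not something this log states; check the training script before trusting the labels. Under that assumption, a per-component average over rows from the parser sketch earlier:

    # Assumed component order; verify against the training code before use.
    NAMES = ["loss_disc", "loss_gen", "loss_fm", "loss_mel", "loss_kl"]

    def summarize(rows):
        # rows: output of parse_losses(); mean of each loss component
        sums = [0.0] * len(NAMES)
        for _step, values, _lr in rows:
            for i, v in enumerate(values):
                sums[i] += v
        return {name: s / len(rows) for name, s in zip(NAMES, sums)}
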
2023-03-17 23:17:54,100 44k INFO Losses: [2.703779935836792, 1.743408203125, 9.455378532409668, 13.252389907836914, 0.8896471858024597], step: 235200, lr: 9.665694505040515e-05 2023-03-17 23:17:58,113 44k INFO Saving model and optimizer state at iteration 273 to ./logs\44k\G_235200.pth 2023-03-17 23:18:02,450 44k INFO Saving model and optimizer state at iteration 273 to ./logs\44k\D_235200.pth 2023-03-17 23:19:37,842 44k INFO ====> Epoch: 273, cost 659.95 s 2023-03-17 23:20:44,986 44k INFO Train Epoch: 274 [9%] 2023-03-17 23:20:44,987 44k INFO Losses: [2.4332082271575928, 2.316800594329834, 10.0436372756958, 17.680395126342773, 0.9568864107131958], step: 235400, lr: 9.664486293227385e-05 2023-03-17 23:23:12,762 44k INFO Train Epoch: 274 [32%] 2023-03-17 23:23:12,762 44k INFO Losses: [2.3250718116760254, 2.416285991668701, 12.016742706298828, 19.859439849853516, 1.14457368850708], step: 235600, lr: 9.664486293227385e-05 2023-03-17 23:25:40,447 44k INFO Train Epoch: 274 [55%] 2023-03-17 23:25:40,447 44k INFO Losses: [2.781931161880493, 2.4109630584716797, 7.53769588470459, 14.249716758728027, 1.198853850364685], step: 235800, lr: 9.664486293227385e-05 2023-03-17 23:28:08,465 44k INFO Train Epoch: 274 [78%] 2023-03-17 23:28:08,465 44k INFO Losses: [2.5192947387695312, 2.0239076614379883, 7.026861190795898, 18.464208602905273, 0.9711569547653198], step: 236000, lr: 9.664486293227385e-05 2023-03-17 23:28:12,507 44k INFO Saving model and optimizer state at iteration 274 to ./logs\44k\G_236000.pth 2023-03-17 23:28:17,251 44k INFO Saving model and optimizer state at iteration 274 to ./logs\44k\D_236000.pth 2023-03-17 23:30:38,492 44k INFO ====> Epoch: 274, cost 660.65 s 2023-03-17 23:30:59,880 44k INFO Train Epoch: 275 [1%] 2023-03-17 23:30:59,881 44k INFO Losses: [2.3901028633117676, 2.134244203567505, 11.59443473815918, 16.712764739990234, 0.8625909090042114], step: 236200, lr: 9.663278232440732e-05 2023-03-17 23:33:27,872 44k INFO Train Epoch: 275 [25%] 2023-03-17 23:33:27,872 44k INFO Losses: [2.744637966156006, 1.7884724140167236, 4.932567119598389, 15.23929214477539, 0.8861264586448669], step: 236400, lr: 9.663278232440732e-05 2023-03-17 23:35:55,469 44k INFO Train Epoch: 275 [48%] 2023-03-17 23:35:55,469 44k INFO Losses: [2.715526580810547, 1.9395854473114014, 3.2276804447174072, 10.929450988769531, 1.0327212810516357], step: 236600, lr: 9.663278232440732e-05 2023-03-17 23:38:23,460 44k INFO Train Epoch: 275 [71%] 2023-03-17 23:38:23,461 44k INFO Losses: [2.205415964126587, 2.4718003273010254, 9.869319915771484, 18.87130355834961, 1.3688536882400513], step: 236800, lr: 9.663278232440732e-05 2023-03-17 23:38:27,404 44k INFO Saving model and optimizer state at iteration 275 to ./logs\44k\G_236800.pth 2023-03-17 23:38:31,476 44k INFO Saving model and optimizer state at iteration 275 to ./logs\44k\D_236800.pth 2023-03-17 23:41:02,061 44k INFO Train Epoch: 275 [94%] 2023-03-17 23:41:02,061 44k INFO Losses: [2.518397331237793, 2.2428958415985107, 11.045549392700195, 15.924239158630371, 1.2538613080978394], step: 237000, lr: 9.663278232440732e-05 2023-03-17 23:41:38,556 44k INFO ====> Epoch: 275, cost 660.06 s 2023-03-17 23:43:41,787 44k INFO Train Epoch: 276 [17%] 2023-03-17 23:43:41,788 44k INFO Losses: [2.8887434005737305, 1.9295215606689453, 4.615327835083008, 11.834853172302246, 1.2174434661865234], step: 237200, lr: 9.662070322661676e-05 2023-03-17 23:46:09,602 44k INFO Train Epoch: 276 [41%] 2023-03-17 23:46:09,602 44k INFO Losses: [2.3358585834503174, 2.4806721210479736, 
13.451333999633789, 18.610034942626953, 1.1342171430587769], step: 237400, lr: 9.662070322661676e-05 2023-03-17 23:48:37,312 44k INFO Train Epoch: 276 [64%] 2023-03-17 23:48:37,312 44k INFO Losses: [2.3656249046325684, 2.301614999771118, 9.84799575805664, 18.969322204589844, 0.8708007335662842], step: 237600, lr: 9.662070322661676e-05 2023-03-17 23:48:41,393 44k INFO Saving model and optimizer state at iteration 276 to ./logs\44k\G_237600.pth 2023-03-17 23:48:45,569 44k INFO Saving model and optimizer state at iteration 276 to ./logs\44k\D_237600.pth 2023-03-17 23:51:16,413 44k INFO Train Epoch: 276 [87%] 2023-03-17 23:51:16,413 44k INFO Losses: [2.7045986652374268, 2.391636610031128, 8.071473121643066, 16.12633514404297, 0.9247919321060181], step: 237800, lr: 9.662070322661676e-05 2023-03-17 23:52:38,545 44k INFO ====> Epoch: 276, cost 659.99 s 2023-03-17 23:53:56,036 44k INFO Train Epoch: 277 [10%] 2023-03-17 23:53:56,036 44k INFO Losses: [2.711296558380127, 2.0566978454589844, 11.676040649414062, 19.56591796875, 1.0245392322540283], step: 238000, lr: 9.660862563871342e-05 2023-03-17 23:56:23,861 44k INFO Train Epoch: 277 [33%] 2023-03-17 23:56:23,861 44k INFO Losses: [2.5565388202667236, 2.130202293395996, 6.38773775100708, 13.073478698730469, 1.0707546472549438], step: 238200, lr: 9.660862563871342e-05 2023-03-17 23:58:51,397 44k INFO Train Epoch: 277 [57%] 2023-03-17 23:58:51,397 44k INFO Losses: [2.4684700965881348, 2.3695857524871826, 5.611832618713379, 15.102468490600586, 0.9677608013153076], step: 238400, lr: 9.660862563871342e-05 2023-03-17 23:58:55,477 44k INFO Saving model and optimizer state at iteration 277 to ./logs\44k\G_238400.pth 2023-03-17 23:58:59,333 44k INFO Saving model and optimizer state at iteration 277 to ./logs\44k\D_238400.pth 2023-03-18 00:01:30,139 44k INFO Train Epoch: 277 [80%] 2023-03-18 00:01:30,140 44k INFO Losses: [2.7031123638153076, 1.8321759700775146, 9.460073471069336, 16.456613540649414, 1.1824390888214111], step: 238600, lr: 9.660862563871342e-05 2023-03-18 00:03:38,084 44k INFO ====> Epoch: 277, cost 659.54 s 2023-03-18 00:04:09,767 44k INFO Train Epoch: 278 [3%] 2023-03-18 00:04:09,767 44k INFO Losses: [2.5658786296844482, 2.0820274353027344, 6.172895908355713, 11.854188919067383, 1.1379474401474], step: 238800, lr: 9.659654956050859e-05 2023-03-18 00:06:37,814 44k INFO Train Epoch: 278 [26%] 2023-03-18 00:06:37,815 44k INFO Losses: [2.6795976161956787, 2.138878345489502, 8.866223335266113, 15.000415802001953, 1.346008539199829], step: 239000, lr: 9.659654956050859e-05 2023-03-18 00:09:05,286 44k INFO Train Epoch: 278 [49%] 2023-03-18 00:09:05,286 44k INFO Losses: [2.4360463619232178, 2.2598087787628174, 13.019302368164062, 20.37201499938965, 0.8275392651557922], step: 239200, lr: 9.659654956050859e-05 2023-03-18 00:09:09,378 44k INFO Saving model and optimizer state at iteration 278 to ./logs\44k\G_239200.pth 2023-03-18 00:09:13,872 44k INFO Saving model and optimizer state at iteration 278 to ./logs\44k\D_239200.pth 2023-03-18 00:11:44,789 44k INFO Train Epoch: 278 [73%] 2023-03-18 00:11:44,790 44k INFO Losses: [2.5164215564727783, 2.141946315765381, 12.625482559204102, 19.034217834472656, 0.8097273111343384], step: 239400, lr: 9.659654956050859e-05 2023-03-18 00:14:12,415 44k INFO Train Epoch: 278 [96%] 2023-03-18 00:14:12,416 44k INFO Losses: [2.3109006881713867, 2.540977716445923, 10.639290809631348, 16.232418060302734, 1.373613715171814], step: 239600, lr: 9.659654956050859e-05 2023-03-18 00:14:38,663 44k INFO ====> Epoch: 278, cost 660.58 
s 2023-03-18 00:16:52,292 44k INFO Train Epoch: 279 [19%] 2023-03-18 00:16:52,292 44k INFO Losses: [2.6631815433502197, 1.9571481943130493, 6.462707996368408, 19.163463592529297, 1.2009185552597046], step: 239800, lr: 9.658447499181352e-05 2023-03-18 00:19:20,001 44k INFO Train Epoch: 279 [42%] 2023-03-18 00:19:20,002 44k INFO Losses: [2.5134682655334473, 2.3561744689941406, 11.316415786743164, 19.552181243896484, 1.350816011428833], step: 240000, lr: 9.658447499181352e-05 2023-03-18 00:19:23,908 44k INFO Saving model and optimizer state at iteration 279 to ./logs\44k\G_240000.pth 2023-03-18 00:19:29,190 44k INFO Saving model and optimizer state at iteration 279 to ./logs\44k\D_240000.pth 2023-03-18 00:21:59,888 44k INFO Train Epoch: 279 [65%] 2023-03-18 00:21:59,888 44k INFO Losses: [2.6072287559509277, 2.507739543914795, 7.393299102783203, 15.980974197387695, 0.822520911693573], step: 240200, lr: 9.658447499181352e-05 2023-03-18 00:24:27,554 44k INFO Train Epoch: 279 [89%] 2023-03-18 00:24:27,554 44k INFO Losses: [2.4952900409698486, 2.0760698318481445, 5.630001544952393, 16.676719665527344, 1.0904755592346191], step: 240400, lr: 9.658447499181352e-05 2023-03-18 00:25:39,363 44k INFO ====> Epoch: 279, cost 660.70 s 2023-03-18 00:27:07,150 44k INFO Train Epoch: 280 [12%] 2023-03-18 00:27:07,151 44k INFO Losses: [2.6302921772003174, 1.8012170791625977, 8.736234664916992, 16.2388858795166, 1.47483491897583], step: 240600, lr: 9.657240193243954e-05 2023-03-18 00:29:34,959 44k INFO Train Epoch: 280 [35%] 2023-03-18 00:29:34,959 44k INFO Losses: [2.333644390106201, 2.4555652141571045, 11.581939697265625, 18.524831771850586, 1.3251746892929077], step: 240800, lr: 9.657240193243954e-05 2023-03-18 00:29:38,992 44k INFO Saving model and optimizer state at iteration 280 to ./logs\44k\G_240800.pth 2023-03-18 00:29:43,529 44k INFO Saving model and optimizer state at iteration 280 to ./logs\44k\D_240800.pth 2023-03-18 00:32:14,047 44k INFO Train Epoch: 280 [58%] 2023-03-18 00:32:14,048 44k INFO Losses: [2.501190662384033, 2.2672111988067627, 9.65142822265625, 15.692045211791992, 1.313480019569397], step: 241000, lr: 9.657240193243954e-05 2023-03-18 00:34:41,938 44k INFO Train Epoch: 280 [81%] 2023-03-18 00:34:41,938 44k INFO Losses: [2.5276191234588623, 2.1927273273468018, 9.042916297912598, 15.768512725830078, 1.1342521905899048], step: 241200, lr: 9.657240193243954e-05 2023-03-18 00:36:39,499 44k INFO ====> Epoch: 280, cost 660.14 s 2023-03-18 00:37:21,472 44k INFO Train Epoch: 281 [5%] 2023-03-18 00:37:21,473 44k INFO Losses: [2.6955928802490234, 2.0569701194763184, 8.776084899902344, 17.495983123779297, 1.1288179159164429], step: 241400, lr: 9.656033038219798e-05 2023-03-18 00:39:49,244 44k INFO Train Epoch: 281 [28%] 2023-03-18 00:39:49,245 44k INFO Losses: [2.710324287414551, 2.12835431098938, 6.007878303527832, 15.473275184631348, 1.102723479270935], step: 241600, lr: 9.656033038219798e-05 2023-03-18 00:39:53,269 44k INFO Saving model and optimizer state at iteration 281 to ./logs\44k\G_241600.pth 2023-03-18 00:39:57,499 44k INFO Saving model and optimizer state at iteration 281 to ./logs\44k\D_241600.pth 2023-03-18 00:42:28,406 44k INFO Train Epoch: 281 [51%] 2023-03-18 00:42:28,407 44k INFO Losses: [2.5468122959136963, 2.2675681114196777, 11.782535552978516, 18.492237091064453, 1.1001876592636108], step: 241800, lr: 9.656033038219798e-05 2023-03-18 00:44:56,185 44k INFO Train Epoch: 281 [74%] 2023-03-18 00:44:56,185 44k INFO Losses: [2.3472518920898438, 2.6510846614837646, 12.308174133300781, 
18.75900650024414, 0.9547584056854248], step: 242000, lr: 9.656033038219798e-05 2023-03-18 00:47:23,872 44k INFO Train Epoch: 281 [97%] 2023-03-18 00:47:23,872 44k INFO Losses: [2.4751765727996826, 2.3560335636138916, 8.500299453735352, 15.856581687927246, 0.9647173881530762], step: 242200, lr: 9.656033038219798e-05 2023-03-18 00:47:39,720 44k INFO ====> Epoch: 281, cost 660.22 s 2023-03-18 00:50:03,598 44k INFO Train Epoch: 282 [21%] 2023-03-18 00:50:03,598 44k INFO Losses: [2.594684362411499, 2.344916343688965, 11.832525253295898, 19.295616149902344, 1.0399483442306519], step: 242400, lr: 9.65482603409002e-05 2023-03-18 00:50:07,682 44k INFO Saving model and optimizer state at iteration 282 to ./logs\44k\G_242400.pth 2023-03-18 00:50:11,670 44k INFO Saving model and optimizer state at iteration 282 to ./logs\44k\D_242400.pth 2023-03-18 00:52:42,198 44k INFO Train Epoch: 282 [44%] 2023-03-18 00:52:42,199 44k INFO Losses: [2.593282699584961, 2.27103590965271, 6.941946983337402, 15.755401611328125, 0.9856503009796143], step: 242600, lr: 9.65482603409002e-05 2023-03-18 00:55:10,038 44k INFO Train Epoch: 282 [67%] 2023-03-18 00:55:10,039 44k INFO Losses: [2.5902724266052246, 2.140986680984497, 9.077738761901855, 16.240509033203125, 0.7107651829719543], step: 242800, lr: 9.65482603409002e-05 2023-03-18 00:57:37,661 44k INFO Train Epoch: 282 [90%] 2023-03-18 00:57:37,661 44k INFO Losses: [2.487589120864868, 2.2057840824127197, 12.891619682312012, 15.629560470581055, 1.2120239734649658], step: 243000, lr: 9.65482603409002e-05 2023-03-18 00:58:39,292 44k INFO ====> Epoch: 282, cost 659.57 s 2023-03-18 01:00:17,355 44k INFO Train Epoch: 283 [13%] 2023-03-18 01:00:17,355 44k INFO Losses: [2.464796543121338, 2.420415163040161, 11.270381927490234, 18.668033599853516, 1.0343326330184937], step: 243200, lr: 9.653619180835758e-05 2023-03-18 01:00:21,380 44k INFO Saving model and optimizer state at iteration 283 to ./logs\44k\G_243200.pth 2023-03-18 01:00:26,359 44k INFO Saving model and optimizer state at iteration 283 to ./logs\44k\D_243200.pth 2023-03-18 01:02:57,329 44k INFO Train Epoch: 283 [37%] 2023-03-18 01:02:57,329 44k INFO Losses: [2.5350053310394287, 2.097182273864746, 7.777634143829346, 14.314918518066406, 1.198075771331787], step: 243400, lr: 9.653619180835758e-05 2023-03-18 01:05:24,873 44k INFO Train Epoch: 283 [60%] 2023-03-18 01:05:24,873 44k INFO Losses: [2.5608832836151123, 2.068300247192383, 12.591880798339844, 18.674291610717773, 0.7246514558792114], step: 243600, lr: 9.653619180835758e-05 2023-03-18 01:07:52,622 44k INFO Train Epoch: 283 [83%] 2023-03-18 01:07:52,623 44k INFO Losses: [2.5357537269592285, 2.222353219985962, 9.766161918640137, 18.10967254638672, 1.0482922792434692], step: 243800, lr: 9.653619180835758e-05 2023-03-18 01:09:39,649 44k INFO ====> Epoch: 283, cost 660.36 s 2023-03-18 01:10:31,853 44k INFO Train Epoch: 284 [6%] 2023-03-18 01:10:31,853 44k INFO Losses: [2.2735791206359863, 2.6900734901428223, 13.247554779052734, 19.78646469116211, 0.9248326420783997], step: 244000, lr: 9.652412478438153e-05 2023-03-18 01:10:35,911 44k INFO Saving model and optimizer state at iteration 284 to ./logs\44k\G_244000.pth 2023-03-18 01:10:40,314 44k INFO Saving model and optimizer state at iteration 284 to ./logs\44k\D_244000.pth 2023-03-18 01:13:11,137 44k INFO Train Epoch: 284 [29%] 2023-03-18 01:13:11,138 44k INFO Losses: [2.412928342819214, 2.2239511013031006, 13.40782356262207, 20.508115768432617, 1.0419044494628906], step: 244200, lr: 9.652412478438153e-05 2023-03-18 
01:15:38,530 44k INFO Train Epoch: 284 [53%] 2023-03-18 01:15:38,530 44k INFO Losses: [2.3683557510375977, 2.190274238586426, 9.218567848205566, 16.530075073242188, 0.7622981071472168], step: 244400, lr: 9.652412478438153e-05 2023-03-18 01:18:06,351 44k INFO Train Epoch: 284 [76%] 2023-03-18 01:18:06,352 44k INFO Losses: [2.347097396850586, 2.327160358428955, 13.299599647521973, 19.277591705322266, 0.9909809231758118], step: 244600, lr: 9.652412478438153e-05 2023-03-18 01:20:33,879 44k INFO Train Epoch: 284 [99%] 2023-03-18 01:20:33,879 44k INFO Losses: [2.47080397605896, 2.0883285999298096, 9.805315971374512, 17.368850708007812, 0.8586620688438416], step: 244800, lr: 9.652412478438153e-05 2023-03-18 01:20:37,995 44k INFO Saving model and optimizer state at iteration 284 to ./logs\44k\G_244800.pth 2023-03-18 01:20:42,218 44k INFO Saving model and optimizer state at iteration 284 to ./logs\44k\D_244800.pth 2023-03-18 01:20:50,882 44k INFO ====> Epoch: 284, cost 671.23 s 2023-03-18 01:23:24,790 44k INFO Train Epoch: 285 [22%] 2023-03-18 01:23:24,791 44k INFO Losses: [2.682035446166992, 2.2256102561950684, 9.34567928314209, 16.893274307250977, 0.8593688011169434], step: 245000, lr: 9.651205926878348e-05 2023-03-18 01:25:52,228 44k INFO Train Epoch: 285 [45%] 2023-03-18 01:25:52,228 44k INFO Losses: [2.62172794342041, 2.087916612625122, 8.71512508392334, 15.830467224121094, 1.0782829523086548], step: 245200, lr: 9.651205926878348e-05 2023-03-18 01:28:19,911 44k INFO Train Epoch: 285 [69%] 2023-03-18 01:28:19,911 44k INFO Losses: [2.7457985877990723, 2.4076898097991943, 7.794620037078857, 16.558557510375977, 1.2062244415283203], step: 245400, lr: 9.651205926878348e-05 2023-03-18 01:30:47,472 44k INFO Train Epoch: 285 [92%] 2023-03-18 01:30:47,473 44k INFO Losses: [2.7839908599853516, 1.8376367092132568, 7.051756381988525, 13.562417030334473, 1.277392864227295], step: 245600, lr: 9.651205926878348e-05 2023-03-18 01:30:51,469 44k INFO Saving model and optimizer state at iteration 285 to ./logs\44k\G_245600.pth 2023-03-18 01:30:56,019 44k INFO Saving model and optimizer state at iteration 285 to ./logs\44k\D_245600.pth 2023-03-18 01:31:50,338 44k INFO ====> Epoch: 285, cost 659.46 s 2023-03-18 01:33:38,542 44k INFO Train Epoch: 286 [15%] 2023-03-18 01:33:38,542 44k INFO Losses: [2.5421786308288574, 1.9668813943862915, 8.201369285583496, 16.224056243896484, 1.1962703466415405], step: 245800, lr: 9.649999526137489e-05 2023-03-18 01:36:06,211 44k INFO Train Epoch: 286 [38%] 2023-03-18 01:36:06,212 44k INFO Losses: [2.4957871437072754, 2.2523553371429443, 9.501924514770508, 19.02039909362793, 0.9241561889648438], step: 246000, lr: 9.649999526137489e-05 2023-03-18 01:38:33,649 44k INFO Train Epoch: 286 [61%] 2023-03-18 01:38:33,649 44k INFO Losses: [2.6361353397369385, 2.337576389312744, 10.045315742492676, 18.641921997070312, 0.9223039150238037], step: 246200, lr: 9.649999526137489e-05 2023-03-18 01:41:01,423 44k INFO Train Epoch: 286 [85%] 2023-03-18 01:41:01,424 44k INFO Losses: [2.799109935760498, 1.8464752435684204, 4.8418707847595215, 7.773642063140869, 1.0345829725265503], step: 246400, lr: 9.649999526137489e-05 2023-03-18 01:41:05,308 44k INFO Saving model and optimizer state at iteration 286 to ./logs\44k\G_246400.pth 2023-03-18 01:41:09,908 44k INFO Saving model and optimizer state at iteration 286 to ./logs\44k\D_246400.pth 2023-03-18 01:42:49,813 44k INFO ====> Epoch: 286, cost 659.47 s 2023-03-18 01:43:52,407 44k INFO Train Epoch: 287 [8%] 2023-03-18 01:43:52,407 44k INFO Losses: 
[2.5730888843536377, 2.1842336654663086, 10.161998748779297, 16.30363655090332, 1.0605595111846924], step: 246600, lr: 9.64879327619672e-05 2023-03-18 01:46:19,931 44k INFO Train Epoch: 287 [31%] 2023-03-18 01:46:19,931 44k INFO Losses: [2.5766825675964355, 2.114689588546753, 6.658520698547363, 16.13082504272461, 1.3613238334655762], step: 246800, lr: 9.64879327619672e-05 2023-03-18 01:48:47,427 44k INFO Train Epoch: 287 [54%] 2023-03-18 01:48:47,428 44k INFO Losses: [2.5305843353271484, 2.1129326820373535, 11.451441764831543, 18.429025650024414, 0.9327443242073059], step: 247000, lr: 9.64879327619672e-05 2023-03-18 01:51:15,092 44k INFO Train Epoch: 287 [77%] 2023-03-18 01:51:15,093 44k INFO Losses: [2.729654312133789, 2.026021957397461, 6.321410179138184, 13.684746742248535, 0.7499884963035583], step: 247200, lr: 9.64879327619672e-05 2023-03-18 01:51:19,177 44k INFO Saving model and optimizer state at iteration 287 to ./logs\44k\G_247200.pth 2023-03-18 01:51:24,187 44k INFO Saving model and optimizer state at iteration 287 to ./logs\44k\D_247200.pth 2023-03-18 01:53:49,639 44k INFO ====> Epoch: 287, cost 659.83 s 2023-03-18 01:54:06,515 44k INFO Train Epoch: 288 [1%] 2023-03-18 01:54:06,516 44k INFO Losses: [2.483198404312134, 2.2362425327301025, 9.688189506530762, 18.83633804321289, 1.1992262601852417], step: 247400, lr: 9.647587177037196e-05 2023-03-18 01:56:34,223 44k INFO Train Epoch: 288 [24%] 2023-03-18 01:56:34,224 44k INFO Losses: [2.619871139526367, 2.1001980304718018, 9.839923858642578, 18.355504989624023, 0.7832295298576355], step: 247600, lr: 9.647587177037196e-05 2023-03-18 01:59:01,713 44k INFO Train Epoch: 288 [47%] 2023-03-18 01:59:01,714 44k INFO Losses: [2.586841106414795, 2.0856847763061523, 11.388495445251465, 17.067516326904297, 1.1251444816589355], step: 247800, lr: 9.647587177037196e-05 2023-03-18 02:01:29,441 44k INFO Train Epoch: 288 [70%] 2023-03-18 02:01:29,442 44k INFO Losses: [2.7178893089294434, 2.329334259033203, 10.698827743530273, 18.882877349853516, 1.0114142894744873], step: 248000, lr: 9.647587177037196e-05 2023-03-18 02:01:33,378 44k INFO Saving model and optimizer state at iteration 288 to ./logs\44k\G_248000.pth 2023-03-18 02:01:37,602 44k INFO Saving model and optimizer state at iteration 288 to ./logs\44k\D_248000.pth 2023-03-18 02:04:08,369 44k INFO Train Epoch: 288 [94%] 2023-03-18 02:04:08,369 44k INFO Losses: [2.6362199783325195, 1.8502901792526245, 8.38296127319336, 19.217512130737305, 1.2744885683059692], step: 248200, lr: 9.647587177037196e-05 2023-03-18 02:04:49,321 44k INFO ====> Epoch: 288, cost 659.68 s 2023-03-18 02:06:47,923 44k INFO Train Epoch: 289 [17%] 2023-03-18 02:06:47,924 44k INFO Losses: [2.578829526901245, 2.218141794204712, 6.940443992614746, 16.342334747314453, 1.0638431310653687], step: 248400, lr: 9.646381228640066e-05 2023-03-18 02:09:15,521 44k INFO Train Epoch: 289 [40%] 2023-03-18 02:09:15,521 44k INFO Losses: [2.558980941772461, 2.131531238555908, 11.034732818603516, 15.906431198120117, 1.3854923248291016], step: 248600, lr: 9.646381228640066e-05 2023-03-18 02:11:42,966 44k INFO Train Epoch: 289 [63%] 2023-03-18 02:11:42,967 44k INFO Losses: [2.5378193855285645, 2.2588088512420654, 11.056863784790039, 19.796785354614258, 1.0995619297027588], step: 248800, lr: 9.646381228640066e-05 2023-03-18 02:11:46,913 44k INFO Saving model and optimizer state at iteration 289 to ./logs\44k\G_248800.pth 2023-03-18 02:11:52,129 44k INFO Saving model and optimizer state at iteration 289 to ./logs\44k\D_248800.pth 2023-03-18 
02:14:22,635 44k INFO Train Epoch: 289 [86%] 2023-03-18 02:14:22,635 44k INFO Losses: [2.2926788330078125, 2.4690651893615723, 10.57485294342041, 13.935493469238281, 0.8592012524604797], step: 249000, lr: 9.646381228640066e-05 2023-03-18 02:15:49,100 44k INFO ====> Epoch: 289, cost 659.78 s 2023-03-18 02:17:01,909 44k INFO Train Epoch: 290 [10%] 2023-03-18 02:17:01,910 44k INFO Losses: [2.2854349613189697, 2.3865134716033936, 10.878771781921387, 18.79351043701172, 1.1048716306686401], step: 249200, lr: 9.645175430986486e-05 2023-03-18 02:19:29,550 44k INFO Train Epoch: 290 [33%] 2023-03-18 02:19:29,550 44k INFO Losses: [2.528729200363159, 2.531832218170166, 8.112225532531738, 13.9092435836792, 1.135794758796692], step: 249400, lr: 9.645175430986486e-05 2023-03-18 02:21:56,937 44k INFO Train Epoch: 290 [56%] 2023-03-18 02:21:56,938 44k INFO Losses: [2.5006980895996094, 2.5194694995880127, 9.711161613464355, 16.961318969726562, 1.1651850938796997], step: 249600, lr: 9.645175430986486e-05 2023-03-18 02:22:00,899 44k INFO Saving model and optimizer state at iteration 290 to ./logs\44k\G_249600.pth 2023-03-18 02:22:05,711 44k INFO Saving model and optimizer state at iteration 290 to ./logs\44k\D_249600.pth 2023-03-18 02:24:36,255 44k INFO Train Epoch: 290 [79%] 2023-03-18 02:24:36,256 44k INFO Losses: [2.556608200073242, 2.08370304107666, 6.6997904777526855, 14.862691879272461, 1.1055376529693604], step: 249800, lr: 9.645175430986486e-05 2023-03-18 02:26:48,553 44k INFO ====> Epoch: 290, cost 659.45 s 2023-03-18 02:27:15,719 44k INFO Train Epoch: 291 [2%] 2023-03-18 02:27:15,719 44k INFO Losses: [2.5615246295928955, 2.012204647064209, 3.3651909828186035, 9.652511596679688, 1.2741183042526245], step: 250000, lr: 9.643969784057613e-05 2023-03-18 02:29:43,598 44k INFO Train Epoch: 291 [26%] 2023-03-18 02:29:43,598 44k INFO Losses: [2.524245500564575, 2.1253068447113037, 8.030020713806152, 19.306278228759766, 0.8423724174499512], step: 250200, lr: 9.643969784057613e-05 2023-03-18 02:32:11,051 44k INFO Train Epoch: 291 [49%] 2023-03-18 02:32:11,052 44k INFO Losses: [2.406970500946045, 2.3213279247283936, 9.058813095092773, 17.175378799438477, 0.4926415979862213], step: 250400, lr: 9.643969784057613e-05 2023-03-18 02:32:15,060 44k INFO Saving model and optimizer state at iteration 291 to ./logs\44k\G_250400.pth 2023-03-18 02:32:19,126 44k INFO Saving model and optimizer state at iteration 291 to ./logs\44k\D_250400.pth 2023-03-18 02:34:49,901 44k INFO Train Epoch: 291 [72%] 2023-03-18 02:34:49,902 44k INFO Losses: [2.6034343242645264, 2.416505813598633, 8.828933715820312, 14.081535339355469, 1.2258107662200928], step: 250600, lr: 9.643969784057613e-05 2023-03-18 02:37:17,450 44k INFO Train Epoch: 291 [95%] 2023-03-18 02:37:17,451 44k INFO Losses: [2.540431022644043, 2.2247745990753174, 7.834095478057861, 15.217015266418457, 1.1483778953552246], step: 250800, lr: 9.643969784057613e-05 2023-03-18 02:37:48,128 44k INFO ====> Epoch: 291, cost 659.58 s 2023-03-18 02:39:57,028 44k INFO Train Epoch: 292 [18%] 2023-03-18 02:39:57,028 44k INFO Losses: [2.607766628265381, 2.1050374507904053, 11.19521427154541, 19.186355590820312, 1.2612848281860352], step: 251000, lr: 9.642764287834605e-05 2023-03-18 02:42:24,567 44k INFO Train Epoch: 292 [42%] 2023-03-18 02:42:24,568 44k INFO Losses: [2.8873112201690674, 1.7542568445205688, 6.698497295379639, 16.63821792602539, 1.206966757774353], step: 251200, lr: 9.642764287834605e-05 2023-03-18 02:42:28,509 44k INFO Saving model and optimizer state at iteration 292 to 
./logs\44k\G_251200.pth 2023-03-18 02:42:33,423 44k INFO Saving model and optimizer state at iteration 292 to ./logs\44k\D_251200.pth 2023-03-18 02:45:03,998 44k INFO Train Epoch: 292 [65%] 2023-03-18 02:45:03,999 44k INFO Losses: [3.1419832706451416, 1.7853084802627563, 4.528061866760254, 10.82382583618164, 1.0370064973831177], step: 251400, lr: 9.642764287834605e-05 2023-03-18 02:47:31,579 44k INFO Train Epoch: 292 [88%] 2023-03-18 02:47:31,580 44k INFO Losses: [2.590625762939453, 2.0610811710357666, 9.13223934173584, 16.6143798828125, 1.2656123638153076], step: 251600, lr: 9.642764287834605e-05 2023-03-18 02:48:47,706 44k INFO ====> Epoch: 292, cost 659.58 s 2023-03-18 02:50:10,853 44k INFO Train Epoch: 293 [11%] 2023-03-18 02:50:10,854 44k INFO Losses: [2.627763271331787, 1.968294620513916, 9.877402305603027, 16.10942840576172, 0.9494482278823853], step: 251800, lr: 9.641558942298625e-05 2023-03-18 02:52:38,620 44k INFO Train Epoch: 293 [34%] 2023-03-18 02:52:38,621 44k INFO Losses: [2.5728139877319336, 2.277111291885376, 9.998357772827148, 18.389272689819336, 0.6428202390670776], step: 252000, lr: 9.641558942298625e-05 2023-03-18 02:52:42,561 44k INFO Saving model and optimizer state at iteration 293 to ./logs\44k\G_252000.pth 2023-03-18 02:52:46,512 44k INFO Saving model and optimizer state at iteration 293 to ./logs\44k\D_252000.pth 2023-03-18 02:55:16,787 44k INFO Train Epoch: 293 [58%] 2023-03-18 02:55:16,787 44k INFO Losses: [2.580615520477295, 2.0792622566223145, 9.52060604095459, 18.380048751831055, 1.138369083404541], step: 252200, lr: 9.641558942298625e-05 2023-03-18 02:57:44,435 44k INFO Train Epoch: 293 [81%] 2023-03-18 02:57:44,435 44k INFO Losses: [2.584914207458496, 2.12386155128479, 9.178543090820312, 16.399328231811523, 0.7559251189231873], step: 252400, lr: 9.641558942298625e-05 2023-03-18 02:59:46,196 44k INFO ====> Epoch: 293, cost 658.49 s 2023-03-18 03:00:23,537 44k INFO Train Epoch: 294 [4%] 2023-03-18 03:00:23,538 44k INFO Losses: [2.5634891986846924, 2.1437385082244873, 9.538727760314941, 18.403701782226562, 1.1897499561309814], step: 252600, lr: 9.640353747430838e-05 2023-03-18 03:02:51,207 44k INFO Train Epoch: 294 [27%] 2023-03-18 03:02:51,207 44k INFO Losses: [2.509675979614258, 2.370244026184082, 6.628093719482422, 12.177701950073242, 1.2096331119537354], step: 252800, lr: 9.640353747430838e-05 2023-03-18 03:02:55,079 44k INFO Saving model and optimizer state at iteration 294 to ./logs\44k\G_252800.pth 2023-03-18 03:02:59,884 44k INFO Saving model and optimizer state at iteration 294 to ./logs\44k\D_252800.pth 2023-03-18 03:05:30,679 44k INFO Train Epoch: 294 [50%] 2023-03-18 03:05:30,679 44k INFO Losses: [2.6262519359588623, 2.2191989421844482, 9.903592109680176, 21.38591766357422, 0.7976070642471313], step: 253000, lr: 9.640353747430838e-05 2023-03-18 03:07:58,341 44k INFO Train Epoch: 294 [74%] 2023-03-18 03:07:58,341 44k INFO Losses: [2.482346534729004, 2.203547716140747, 6.395986080169678, 10.012165069580078, 1.0263210535049438], step: 253200, lr: 9.640353747430838e-05 2023-03-18 03:10:25,907 44k INFO Train Epoch: 294 [97%] 2023-03-18 03:10:25,908 44k INFO Losses: [2.6159253120422363, 2.282864809036255, 7.366194248199463, 17.21733856201172, 0.8453957438468933], step: 253400, lr: 9.640353747430838e-05 2023-03-18 03:10:46,225 44k INFO ====> Epoch: 294, cost 660.03 s 2023-03-18 03:13:05,805 44k INFO Train Epoch: 295 [20%] 2023-03-18 03:13:05,805 44k INFO Losses: [2.6351993083953857, 2.103834867477417, 12.891521453857422, 16.682703018188477, 
0.9389672875404358], step: 253600, lr: 9.639148703212408e-05 2023-03-18 03:13:09,886 44k INFO Saving model and optimizer state at iteration 295 to ./logs\44k\G_253600.pth 2023-03-18 03:13:14,444 44k INFO Saving model and optimizer state at iteration 295 to ./logs\44k\D_253600.pth 2023-03-18 03:15:44,769 44k INFO Train Epoch: 295 [43%] 2023-03-18 03:15:44,770 44k INFO Losses: [2.8577661514282227, 1.8055909872055054, 9.540867805480957, 16.214859008789062, 0.887354850769043], step: 253800, lr: 9.639148703212408e-05 2023-03-18 03:18:12,286 44k INFO Train Epoch: 295 [66%] 2023-03-18 03:18:12,286 44k INFO Losses: [2.316409111022949, 2.4481256008148193, 7.539494037628174, 14.176965713500977, 1.093185305595398], step: 254000, lr: 9.639148703212408e-05 2023-03-18 03:20:39,772 44k INFO Train Epoch: 295 [90%] 2023-03-18 03:20:39,772 44k INFO Losses: [2.520200729370117, 2.2779383659362793, 13.210063934326172, 18.234426498413086, 1.349271535873413], step: 254200, lr: 9.639148703212408e-05 2023-03-18 03:21:45,499 44k INFO ====> Epoch: 295, cost 659.27 s 2023-03-18 03:23:18,869 44k INFO Train Epoch: 296 [13%] 2023-03-18 03:23:18,869 44k INFO Losses: [2.7352867126464844, 1.9454984664916992, 7.726670742034912, 14.776769638061523, 1.0597037076950073], step: 254400, lr: 9.637943809624507e-05 2023-03-18 03:23:22,645 44k INFO Saving model and optimizer state at iteration 296 to ./logs\44k\G_254400.pth 2023-03-18 03:23:26,975 44k INFO Saving model and optimizer state at iteration 296 to ./logs\44k\D_254400.pth 2023-03-18 03:25:57,422 44k INFO Train Epoch: 296 [36%] 2023-03-18 03:25:57,423 44k INFO Losses: [2.421922206878662, 2.3072848320007324, 10.194067001342773, 15.563628196716309, 1.199292540550232], step: 254600, lr: 9.637943809624507e-05 2023-03-18 03:28:24,922 44k INFO Train Epoch: 296 [59%] 2023-03-18 03:28:24,923 44k INFO Losses: [2.605595111846924, 1.8015849590301514, 8.643299102783203, 10.710721969604492, 0.8607521057128906], step: 254800, lr: 9.637943809624507e-05 2023-03-18 03:30:52,475 44k INFO Train Epoch: 296 [82%] 2023-03-18 03:30:52,475 44k INFO Losses: [2.4917397499084473, 2.1493959426879883, 12.44526481628418, 16.96274757385254, 1.3532460927963257], step: 255000, lr: 9.637943809624507e-05 2023-03-18 03:32:44,010 44k INFO ====> Epoch: 296, cost 658.51 s 2023-03-18 03:33:31,782 44k INFO Train Epoch: 297 [6%] 2023-03-18 03:33:31,782 44k INFO Losses: [2.581202268600464, 2.1908066272735596, 8.1314115524292, 18.41191291809082, 1.2978360652923584], step: 255200, lr: 9.636739066648303e-05 2023-03-18 03:33:35,789 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\G_255200.pth 2023-03-18 03:33:39,653 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\D_255200.pth 2023-03-18 03:36:10,251 44k INFO Train Epoch: 297 [29%] 2023-03-18 03:36:10,252 44k INFO Losses: [2.563363790512085, 2.1440353393554688, 10.458234786987305, 16.452533721923828, 0.9943335056304932], step: 255400, lr: 9.636739066648303e-05 2023-03-18 03:38:37,635 44k INFO Train Epoch: 297 [52%] 2023-03-18 03:38:37,636 44k INFO Losses: [2.0324740409851074, 2.2846450805664062, 14.68381404876709, 19.306495666503906, 1.1012556552886963], step: 255600, lr: 9.636739066648303e-05 2023-03-18 03:41:05,369 44k INFO Train Epoch: 297 [75%] 2023-03-18 03:41:05,370 44k INFO Losses: [2.7356162071228027, 1.6936964988708496, 9.039511680603027, 15.378734588623047, 1.0101932287216187], step: 255800, lr: 9.636739066648303e-05 2023-03-18 03:43:32,912 44k INFO Train Epoch: 297 [98%] 2023-03-18 03:43:32,912 44k INFO Losses: 
[2.383648633956909, 2.1693320274353027, 9.475109100341797, 18.268829345703125, 0.9995424151420593], step: 256000, lr: 9.636739066648303e-05 2023-03-18 03:43:36,826 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\G_256000.pth 2023-03-18 03:43:41,690 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\D_256000.pth 2023-03-18 03:43:54,681 44k INFO ====> Epoch: 297, cost 670.67 s 2023-03-18 03:46:23,936 44k INFO Train Epoch: 298 [22%] 2023-03-18 03:46:23,936 44k INFO Losses: [2.7464470863342285, 1.9199779033660889, 8.611289024353027, 17.613431930541992, 1.1567903757095337], step: 256200, lr: 9.635534474264972e-05 2023-03-18 03:48:51,388 44k INFO Train Epoch: 298 [45%] 2023-03-18 03:48:51,388 44k INFO Losses: [2.4801676273345947, 2.2262167930603027, 8.407312393188477, 16.83437156677246, 1.2305282354354858], step: 256400, lr: 9.635534474264972e-05 2023-03-18 03:51:18,962 44k INFO Train Epoch: 298 [68%] 2023-03-18 03:51:18,962 44k INFO Losses: [2.4887373447418213, 2.2393100261688232, 8.097692489624023, 17.87863540649414, 0.9926599860191345], step: 256600, lr: 9.635534474264972e-05 2023-03-18 03:53:46,526 44k INFO Train Epoch: 298 [91%] 2023-03-18 03:53:46,526 44k INFO Losses: [2.437863826751709, 2.3269858360290527, 8.485676765441895, 12.902897834777832, 0.8661848902702332], step: 256800, lr: 9.635534474264972e-05 2023-03-18 03:53:50,588 44k INFO Saving model and optimizer state at iteration 298 to ./logs\44k\G_256800.pth 2023-03-18 03:53:55,201 44k INFO Saving model and optimizer state at iteration 298 to ./logs\44k\D_256800.pth 2023-03-18 03:54:54,030 44k INFO ====> Epoch: 298, cost 659.35 s 2023-03-18 03:56:37,937 44k INFO Train Epoch: 299 [14%] 2023-03-18 03:56:37,938 44k INFO Losses: [2.63643741607666, 2.0705008506774902, 9.217501640319824, 14.064180374145508, 1.047102451324463], step: 257000, lr: 9.634330032455689e-05 2023-03-18 03:59:05,471 44k INFO Train Epoch: 299 [38%] 2023-03-18 03:59:05,471 44k INFO Losses: [2.3958637714385986, 2.566847562789917, 9.297067642211914, 17.510353088378906, 0.9790557622909546], step: 257200, lr: 9.634330032455689e-05 2023-03-18 04:01:33,058 44k INFO Train Epoch: 299 [61%] 2023-03-18 04:01:33,059 44k INFO Losses: [2.4468822479248047, 2.1548850536346436, 9.92530632019043, 18.820446014404297, 0.9366519451141357], step: 257400, lr: 9.634330032455689e-05 2023-03-18 04:04:00,921 44k INFO Train Epoch: 299 [84%] 2023-03-18 04:04:00,922 44k INFO Losses: [2.625422954559326, 1.9816150665283203, 7.158151149749756, 13.381213188171387, 1.2042659521102905], step: 257600, lr: 9.634330032455689e-05 2023-03-18 04:04:04,859 44k INFO Saving model and optimizer state at iteration 299 to ./logs\44k\G_257600.pth 2023-03-18 04:04:09,753 44k INFO Saving model and optimizer state at iteration 299 to ./logs\44k\D_257600.pth 2023-03-18 04:05:53,777 44k INFO ====> Epoch: 299, cost 659.75 s 2023-03-18 04:06:51,763 44k INFO Train Epoch: 300 [7%] 2023-03-18 04:06:51,763 44k INFO Losses: [2.9318947792053223, 1.7443461418151855, 5.7020263671875, 11.413250923156738, 1.0357743501663208], step: 257800, lr: 9.633125741201631e-05 2023-03-18 04:09:19,490 44k INFO Train Epoch: 300 [30%] 2023-03-18 04:09:19,491 44k INFO Losses: [2.512955904006958, 2.3770365715026855, 9.182090759277344, 16.152082443237305, 0.8561323285102844], step: 258000, lr: 9.633125741201631e-05 2023-03-18 04:11:46,806 44k INFO Train Epoch: 300 [54%] 2023-03-18 04:11:46,806 44k INFO Losses: [2.4276418685913086, 2.5865676403045654, 5.9841508865356445, 13.290815353393555, 
0.8625301718711853], step: 258200, lr: 9.633125741201631e-05 2023-03-18 04:14:14,409 44k INFO Train Epoch: 300 [77%] 2023-03-18 04:14:14,409 44k INFO Losses: [2.2719902992248535, 2.281226634979248, 10.645027160644531, 15.979100227355957, 0.8679187297821045], step: 258400, lr: 9.633125741201631e-05 2023-03-18 04:14:18,370 44k INFO Saving model and optimizer state at iteration 300 to ./logs\44k\G_258400.pth 2023-03-18 04:14:22,824 44k INFO Saving model and optimizer state at iteration 300 to ./logs\44k\D_258400.pth 2023-03-18 04:16:52,628 44k INFO ====> Epoch: 300, cost 658.85 s 2023-03-18 04:17:05,196 44k INFO Train Epoch: 301 [0%] 2023-03-18 04:17:05,196 44k INFO Losses: [2.3037421703338623, 2.48063325881958, 9.247903823852539, 16.666213989257812, 0.7230239510536194], step: 258600, lr: 9.631921600483981e-05 2023-03-18 04:19:32,722 44k INFO Train Epoch: 301 [23%] 2023-03-18 04:19:32,722 44k INFO Losses: [2.435718297958374, 2.1918082237243652, 8.697807312011719, 16.742462158203125, 0.9402417540550232], step: 258800, lr: 9.631921600483981e-05 2023-03-18 04:22:00,124 44k INFO Train Epoch: 301 [46%] 2023-03-18 04:22:00,124 44k INFO Losses: [2.4481639862060547, 2.096787452697754, 12.049969673156738, 18.92336654663086, 0.9536243677139282], step: 259000, lr: 9.631921600483981e-05 2023-03-18 04:24:27,834 44k INFO Train Epoch: 301 [70%] 2023-03-18 04:24:27,835 44k INFO Losses: [2.520697832107544, 2.1006808280944824, 9.24006462097168, 17.8380069732666, 1.4777989387512207], step: 259200, lr: 9.631921600483981e-05 2023-03-18 04:24:31,717 44k INFO Saving model and optimizer state at iteration 301 to ./logs\44k\G_259200.pth 2023-03-18 04:24:36,381 44k INFO Saving model and optimizer state at iteration 301 to ./logs\44k\D_259200.pth 2023-03-18 04:27:06,778 44k INFO Train Epoch: 301 [93%] 2023-03-18 04:27:06,779 44k INFO Losses: [2.6797983646392822, 1.827547550201416, 8.088133811950684, 15.853987693786621, 1.0754469633102417], step: 259400, lr: 9.631921600483981e-05 2023-03-18 04:27:52,082 44k INFO ====> Epoch: 301, cost 659.45 s 2023-03-18 04:29:46,169 44k INFO Train Epoch: 302 [16%] 2023-03-18 04:29:46,170 44k INFO Losses: [2.5720713138580322, 2.1545767784118652, 5.81773042678833, 14.121437072753906, 1.4307583570480347], step: 259600, lr: 9.63071761028392e-05 2023-03-18 04:32:13,692 44k INFO Train Epoch: 302 [39%] 2023-03-18 04:32:13,692 44k INFO Losses: [2.3253273963928223, 2.401402235031128, 11.215950012207031, 17.498769760131836, 0.9041968584060669], step: 259800, lr: 9.63071761028392e-05 2023-03-18 04:34:41,245 44k INFO Train Epoch: 302 [62%] 2023-03-18 04:34:41,245 44k INFO Losses: [2.4968008995056152, 2.255476951599121, 7.912073612213135, 16.233627319335938, 1.0909429788589478], step: 260000, lr: 9.63071761028392e-05 2023-03-18 04:34:45,138 44k INFO Saving model and optimizer state at iteration 302 to ./logs\44k\G_260000.pth 2023-03-18 04:34:50,224 44k INFO Saving model and optimizer state at iteration 302 to ./logs\44k\D_260000.pth 2023-03-18 04:37:20,785 44k INFO Train Epoch: 302 [86%] 2023-03-18 04:37:20,785 44k INFO Losses: [2.398594379425049, 2.2674617767333984, 11.565899848937988, 18.910404205322266, 0.9431103467941284], step: 260200, lr: 9.63071761028392e-05 2023-03-18 04:38:51,862 44k INFO ====> Epoch: 302, cost 659.78 s 2023-03-18 04:40:00,149 44k INFO Train Epoch: 303 [9%] 2023-03-18 04:40:00,150 44k INFO Losses: [2.383556365966797, 2.528165817260742, 10.354630470275879, 17.912443161010742, 1.0758883953094482], step: 260400, lr: 9.629513770582634e-05 2023-03-18 04:42:28,064 44k INFO Train 
Epoch: 303 [32%] 2023-03-18 04:42:28,064 44k INFO Losses: [2.62825345993042, 2.0051653385162354, 8.590018272399902, 15.8590726852417, 1.2917323112487793], step: 260600, lr: 9.629513770582634e-05 2023-03-18 04:44:55,629 44k INFO Train Epoch: 303 [55%] 2023-03-18 04:44:55,629 44k INFO Losses: [2.324495553970337, 2.2552356719970703, 10.283111572265625, 19.675804138183594, 1.264137625694275], step: 260800, lr: 9.629513770582634e-05 2023-03-18 04:44:59,504 44k INFO Saving model and optimizer state at iteration 303 to ./logs\44k\G_260800.pth 2023-03-18 04:45:03,615 44k INFO Saving model and optimizer state at iteration 303 to ./logs\44k\D_260800.pth 2023-03-18 04:47:34,210 44k INFO Train Epoch: 303 [78%] 2023-03-18 04:47:34,210 44k INFO Losses: [2.589219570159912, 1.951289415359497, 7.795058727264404, 13.850955963134766, 0.9555387496948242], step: 261000, lr: 9.629513770582634e-05 2023-03-18 04:49:50,815 44k INFO ====> Epoch: 303, cost 658.95 s 2023-03-18 04:50:13,518 44k INFO Train Epoch: 304 [2%] 2023-03-18 04:50:13,518 44k INFO Losses: [2.3898115158081055, 2.333345413208008, 13.48857593536377, 19.455856323242188, 1.1718494892120361], step: 261200, lr: 9.628310081361311e-05 2023-03-18 04:52:41,205 44k INFO Train Epoch: 304 [25%] 2023-03-18 04:52:41,205 44k INFO Losses: [2.2595629692077637, 2.479567527770996, 9.904922485351562, 19.332969665527344, 0.5922414064407349], step: 261400, lr: 9.628310081361311e-05 2023-03-18 04:55:08,483 44k INFO Train Epoch: 304 [48%] 2023-03-18 04:55:08,484 44k INFO Losses: [2.455087661743164, 2.4426543712615967, 9.775345802307129, 17.415014266967773, 1.066431999206543], step: 261600, lr: 9.628310081361311e-05 2023-03-18 04:55:12,569 44k INFO Saving model and optimizer state at iteration 304 to ./logs\44k\G_261600.pth 2023-03-18 04:55:17,608 44k INFO Saving model and optimizer state at iteration 304 to ./logs\44k\D_261600.pth 2023-03-18 04:57:48,278 44k INFO Train Epoch: 304 [71%] 2023-03-18 04:57:48,278 44k INFO Losses: [2.567990303039551, 2.3691225051879883, 8.682943344116211, 13.58768367767334, 1.1397109031677246], step: 261800, lr: 9.628310081361311e-05 2023-03-18 05:00:15,649 44k INFO Train Epoch: 304 [94%] 2023-03-18 05:00:15,650 44k INFO Losses: [2.549468755722046, 1.9933338165283203, 8.007755279541016, 18.286455154418945, 0.8915546536445618], step: 262000, lr: 9.628310081361311e-05 2023-03-18 05:00:50,668 44k INFO ====> Epoch: 304, cost 659.85 s 2023-03-18 05:02:55,056 44k INFO Train Epoch: 305 [18%] 2023-03-18 05:02:55,056 44k INFO Losses: [2.8141932487487793, 2.105635166168213, 5.824202537536621, 14.657718658447266, 1.294953465461731], step: 262200, lr: 9.627106542601141e-05 2023-03-18 05:05:22,540 44k INFO Train Epoch: 305 [41%] 2023-03-18 05:05:22,541 44k INFO Losses: [2.80517840385437, 1.9125975370407104, 10.822961807250977, 15.132400512695312, 1.0695571899414062], step: 262400, lr: 9.627106542601141e-05 2023-03-18 05:05:26,459 44k INFO Saving model and optimizer state at iteration 305 to ./logs\44k\G_262400.pth 2023-03-18 05:05:30,544 44k INFO Saving model and optimizer state at iteration 305 to ./logs\44k\D_262400.pth 2023-03-18 05:08:00,778 44k INFO Train Epoch: 305 [64%] 2023-03-18 05:08:00,778 44k INFO Losses: [2.464210033416748, 2.1384317874908447, 12.269012451171875, 19.365550994873047, 0.9601691961288452], step: 262600, lr: 9.627106542601141e-05 2023-03-18 05:10:28,257 44k INFO Train Epoch: 305 [87%] 2023-03-18 05:10:28,258 44k INFO Losses: [2.6032445430755615, 1.9922550916671753, 8.371760368347168, 13.017853736877441, 1.2953369617462158], step: 
262800, lr: 9.627106542601141e-05 2023-03-18 05:11:48,750 44k INFO ====> Epoch: 305, cost 658.08 s 2023-03-18 05:13:07,397 44k INFO Train Epoch: 306 [10%] 2023-03-18 05:13:07,397 44k INFO Losses: [2.5612990856170654, 2.1093149185180664, 7.2210516929626465, 13.401510238647461, 0.9751430153846741], step: 263000, lr: 9.625903154283315e-05 2023-03-18 05:15:34,912 44k INFO Train Epoch: 306 [34%] 2023-03-18 05:15:34,913 44k INFO Losses: [2.6604926586151123, 2.0158982276916504, 8.526519775390625, 16.20676040649414, 1.0775986909866333], step: 263200, lr: 9.625903154283315e-05 2023-03-18 05:15:38,903 44k INFO Saving model and optimizer state at iteration 306 to ./logs\44k\G_263200.pth 2023-03-18 05:15:43,830 44k INFO Saving model and optimizer state at iteration 306 to ./logs\44k\D_263200.pth 2023-03-18 05:18:14,304 44k INFO Train Epoch: 306 [57%] 2023-03-18 05:18:14,304 44k INFO Losses: [2.5413780212402344, 2.2678699493408203, 6.909824371337891, 14.640081405639648, 1.1060787439346313], step: 263400, lr: 9.625903154283315e-05 2023-03-18 05:20:41,938 44k INFO Train Epoch: 306 [80%] 2023-03-18 05:20:41,938 44k INFO Losses: [2.408759117126465, 2.4366536140441895, 10.817977905273438, 17.720813751220703, 1.1700706481933594], step: 263600, lr: 9.625903154283315e-05 2023-03-18 05:22:48,217 44k INFO ====> Epoch: 306, cost 659.47 s 2023-03-18 05:23:21,184 44k INFO Train Epoch: 307 [3%] 2023-03-18 05:23:21,184 44k INFO Losses: [2.410114288330078, 2.353609085083008, 15.075169563293457, 18.603452682495117, 1.4139490127563477], step: 263800, lr: 9.62469991638903e-05 2023-03-18 05:25:48,831 44k INFO Train Epoch: 307 [26%] 2023-03-18 05:25:48,831 44k INFO Losses: [2.5436339378356934, 2.3836658000946045, 10.794401168823242, 18.68931770324707, 0.9329290390014648], step: 264000, lr: 9.62469991638903e-05 2023-03-18 05:25:52,809 44k INFO Saving model and optimizer state at iteration 307 to ./logs\44k\G_264000.pth 2023-03-18 05:25:56,577 44k INFO Saving model and optimizer state at iteration 307 to ./logs\44k\D_264000.pth 2023-03-18 05:28:27,088 44k INFO Train Epoch: 307 [50%] 2023-03-18 05:28:27,088 44k INFO Losses: [2.6140661239624023, 2.011467218399048, 8.696392059326172, 16.596277236938477, 1.1455514430999756], step: 264200, lr: 9.62469991638903e-05 2023-03-18 05:30:54,782 44k INFO Train Epoch: 307 [73%] 2023-03-18 05:30:54,782 44k INFO Losses: [2.328200340270996, 2.4949705600738525, 7.17250394821167, 12.110856056213379, 0.8289424180984497], step: 264400, lr: 9.62469991638903e-05 2023-03-18 05:33:22,201 44k INFO Train Epoch: 307 [96%] 2023-03-18 05:33:22,201 44k INFO Losses: [2.361063003540039, 2.5898795127868652, 10.843978881835938, 18.11753273010254, 0.9572955369949341], step: 264600, lr: 9.62469991638903e-05 2023-03-18 05:33:46,958 44k INFO ====> Epoch: 307, cost 658.74 s 2023-03-18 05:36:01,662 44k INFO Train Epoch: 308 [19%] 2023-03-18 05:36:01,662 44k INFO Losses: [2.6508774757385254, 2.0403382778167725, 6.139379501342773, 11.335732460021973, 1.1081377267837524], step: 264800, lr: 9.62349682889948e-05 2023-03-18 05:36:05,610 44k INFO Saving model and optimizer state at iteration 308 to ./logs\44k\G_264800.pth 2023-03-18 05:36:10,239 44k INFO Saving model and optimizer state at iteration 308 to ./logs\44k\D_264800.pth 2023-03-18 05:38:40,527 44k INFO Train Epoch: 308 [42%] 2023-03-18 05:38:40,528 44k INFO Losses: [2.491400957107544, 2.3204362392425537, 8.965377807617188, 17.494739532470703, 0.89682936668396], step: 265000, lr: 9.62349682889948e-05 2023-03-18 05:41:08,333 44k INFO Train Epoch: 308 [66%] 
2023-03-18 05:41:08,334 44k INFO Losses: [2.6873865127563477, 2.03324031829834, 8.737339973449707, 16.078998565673828, 0.8429262042045593], step: 265200, lr: 9.62349682889948e-05 2023-03-18 05:43:35,834 44k INFO Train Epoch: 308 [89%] 2023-03-18 05:43:35,834 44k INFO Losses: [2.619088649749756, 2.1350080966949463, 4.727886199951172, 11.733638763427734, 1.1912751197814941], step: 265400, lr: 9.62349682889948e-05 2023-03-18 05:44:46,036 44k INFO ====> Epoch: 308, cost 659.08 s 2023-03-18 05:46:14,925 44k INFO Train Epoch: 309 [12%] 2023-03-18 05:46:14,925 44k INFO Losses: [2.4733059406280518, 2.0249271392822266, 9.610123634338379, 15.727736473083496, 1.0514180660247803], step: 265600, lr: 9.622293891795867e-05 2023-03-18 05:46:18,800 44k INFO Saving model and optimizer state at iteration 309 to ./logs\44k\G_265600.pth 2023-03-18 05:46:23,131 44k INFO Saving model and optimizer state at iteration 309 to ./logs\44k\D_265600.pth 2023-03-18 05:48:53,546 44k INFO Train Epoch: 309 [35%] 2023-03-18 05:48:53,546 44k INFO Losses: [2.4307148456573486, 2.593038558959961, 11.675004959106445, 19.756271362304688, 0.9878902435302734], step: 265800, lr: 9.622293891795867e-05 2023-03-18 05:51:20,662 44k INFO Train Epoch: 309 [58%] 2023-03-18 05:51:20,662 44k INFO Losses: [2.5905256271362305, 2.08648943901062, 5.89362096786499, 15.00790786743164, 1.2289903163909912], step: 266000, lr: 9.622293891795867e-05 2023-03-18 05:53:48,237 44k INFO Train Epoch: 309 [82%] 2023-03-18 05:53:48,237 44k INFO Losses: [2.868011236190796, 1.8234748840332031, 6.967700481414795, 14.38260269165039, 0.8084480166435242], step: 266200, lr: 9.622293891795867e-05 2023-03-18 05:55:43,925 44k INFO ====> Epoch: 309, cost 657.89 s 2023-03-18 05:56:27,289 44k INFO Train Epoch: 310 [5%] 2023-03-18 05:56:27,290 44k INFO Losses: [2.6688621044158936, 2.203305244445801, 9.519079208374023, 16.92506217956543, 0.9505566358566284], step: 266400, lr: 9.621091105059392e-05 2023-03-18 05:56:31,275 44k INFO Saving model and optimizer state at iteration 310 to ./logs\44k\G_266400.pth 2023-03-18 05:56:35,716 44k INFO Saving model and optimizer state at iteration 310 to ./logs\44k\D_266400.pth 2023-03-18 05:59:06,330 44k INFO Train Epoch: 310 [28%] 2023-03-18 05:59:06,330 44k INFO Losses: [2.4906773567199707, 2.29886531829834, 8.372966766357422, 17.933679580688477, 1.1111561059951782], step: 266600, lr: 9.621091105059392e-05 2023-03-18 06:01:33,713 44k INFO Train Epoch: 310 [51%] 2023-03-18 06:01:33,714 44k INFO Losses: [2.400428056716919, 2.447004795074463, 10.142557144165039, 14.96300220489502, 1.1227117776870728], step: 266800, lr: 9.621091105059392e-05 2023-03-18 06:04:01,488 44k INFO Train Epoch: 310 [74%] 2023-03-18 06:04:01,488 44k INFO Losses: [2.2983129024505615, 2.267946720123291, 12.070771217346191, 19.15565299987793, 0.9839089512825012], step: 267000, lr: 9.621091105059392e-05 2023-03-18 06:06:28,943 44k INFO Train Epoch: 310 [98%] 2023-03-18 06:06:28,944 44k INFO Losses: [2.881348133087158, 1.965449333190918, 4.334405422210693, 12.988136291503906, 0.8316200375556946], step: 267200, lr: 9.621091105059392e-05 2023-03-18 06:06:32,923 44k INFO Saving model and optimizer state at iteration 310 to ./logs\44k\G_267200.pth 2023-03-18 06:06:37,616 44k INFO Saving model and optimizer state at iteration 310 to ./logs\44k\D_267200.pth 2023-03-18 06:06:55,090 44k INFO ====> Epoch: 310, cost 671.17 s 2023-03-18 06:09:20,001 44k INFO Train Epoch: 311 [21%] 2023-03-18 06:09:20,001 44k INFO Losses: [2.7332725524902344, 1.8914638757705688, 7.179778099060059, 
13.329080581665039, 1.0031706094741821], step: 267400, lr: 9.619888468671259e-05 2023-03-18 06:11:47,196 44k INFO Train Epoch: 311 [44%] 2023-03-18 06:11:47,196 44k INFO Losses: [2.4890482425689697, 2.263040065765381, 10.911659240722656, 19.80948257446289, 1.0949629545211792], step: 267600, lr: 9.619888468671259e-05 2023-03-18 06:14:14,669 44k INFO Train Epoch: 311 [67%] 2023-03-18 06:14:14,669 44k INFO Losses: [2.338135004043579, 2.3344476222991943, 12.793976783752441, 19.239463806152344, 1.2081527709960938], step: 267800, lr: 9.619888468671259e-05 2023-03-18 06:16:41,942 44k INFO Train Epoch: 311 [90%] 2023-03-18 06:16:41,942 44k INFO Losses: [2.6300852298736572, 2.1958723068237305, 11.170916557312012, 16.45610809326172, 0.8192450404167175], step: 268000, lr: 9.619888468671259e-05 2023-03-18 06:16:45,890 44k INFO Saving model and optimizer state at iteration 311 to ./logs\44k\G_268000.pth 2023-03-18 06:16:50,028 44k INFO Saving model and optimizer state at iteration 311 to ./logs\44k\D_268000.pth 2023-03-18 06:17:53,236 44k INFO ====> Epoch: 311, cost 658.15 s 2023-03-18 06:19:32,606 44k INFO Train Epoch: 312 [14%] 2023-03-18 06:19:32,606 44k INFO Losses: [2.645303726196289, 2.166952610015869, 6.731842041015625, 16.308876037597656, 1.2407306432724], step: 268200, lr: 9.618685982612675e-05 2023-03-18 06:22:00,165 44k INFO Train Epoch: 312 [37%] 2023-03-18 06:22:00,166 44k INFO Losses: [2.652495861053467, 2.221564531326294, 8.026930809020996, 15.609476089477539, 0.9412392377853394], step: 268400, lr: 9.618685982612675e-05 2023-03-18 06:24:27,585 44k INFO Train Epoch: 312 [60%] 2023-03-18 06:24:27,586 44k INFO Losses: [2.5286953449249268, 2.2605764865875244, 10.667304039001465, 18.347240447998047, 0.8617334365844727], step: 268600, lr: 9.618685982612675e-05 2023-03-18 06:26:55,186 44k INFO Train Epoch: 312 [83%] 2023-03-18 06:26:55,186 44k INFO Losses: [2.43603253364563, 2.5680277347564697, 9.5665283203125, 13.634564399719238, 1.0138938426971436], step: 268800, lr: 9.618685982612675e-05 2023-03-18 06:26:59,133 44k INFO Saving model and optimizer state at iteration 312 to ./logs\44k\G_268800.pth 2023-03-18 06:27:03,863 44k INFO Saving model and optimizer state at iteration 312 to ./logs\44k\D_268800.pth 2023-03-18 06:28:52,480 44k INFO ====> Epoch: 312, cost 659.24 s 2023-03-18 06:29:46,046 44k INFO Train Epoch: 313 [6%] 2023-03-18 06:29:46,046 44k INFO Losses: [2.2283554077148438, 2.8478450775146484, 8.581491470336914, 14.233168601989746, 0.827877402305603], step: 269000, lr: 9.617483646864849e-05 2023-03-18 06:32:13,664 44k INFO Train Epoch: 313 [30%] 2023-03-18 06:32:13,665 44k INFO Losses: [2.3599131107330322, 2.323188543319702, 12.186383247375488, 19.346750259399414, 1.0954937934875488], step: 269200, lr: 9.617483646864849e-05 2023-03-18 06:34:40,930 44k INFO Train Epoch: 313 [53%] 2023-03-18 06:34:40,931 44k INFO Losses: [2.42329740524292, 2.408895492553711, 7.697644233703613, 14.874994277954102, 1.1730866432189941], step: 269400, lr: 9.617483646864849e-05 2023-03-18 06:37:08,576 44k INFO Train Epoch: 313 [76%] 2023-03-18 06:37:08,576 44k INFO Losses: [2.5871167182922363, 1.9673845767974854, 10.618978500366211, 14.51581859588623, 1.2738401889801025], step: 269600, lr: 9.617483646864849e-05 2023-03-18 06:37:12,471 44k INFO Saving model and optimizer state at iteration 313 to ./logs\44k\G_269600.pth 2023-03-18 06:37:16,471 44k INFO Saving model and optimizer state at iteration 313 to ./logs\44k\D_269600.pth 2023-03-18 06:39:46,979 44k INFO Train Epoch: 313 [99%] 2023-03-18 06:39:46,980 
44k INFO Losses: [2.4664807319641113, 2.1873373985290527, 7.805105209350586, 12.320427894592285, 1.0532857179641724], step: 269800, lr: 9.617483646864849e-05 2023-03-18 06:39:51,054 44k INFO ====> Epoch: 313, cost 658.57 s 2023-03-18 06:42:26,395 44k INFO Train Epoch: 314 [23%] 2023-03-18 06:42:26,395 44k INFO Losses: [2.3059136867523193, 2.6980810165405273, 10.546470642089844, 15.78343391418457, 0.9706102609634399], step: 270000, lr: 9.61628146140899e-05 2023-03-18 06:44:53,813 44k INFO Train Epoch: 314 [46%] 2023-03-18 06:44:53,813 44k INFO Losses: [2.6099462509155273, 2.2023041248321533, 6.75855827331543, 15.826297760009766, 1.1432722806930542], step: 270200, lr: 9.61628146140899e-05 2023-03-18 06:47:21,371 44k INFO Train Epoch: 314 [69%] 2023-03-18 06:47:21,371 44k INFO Losses: [2.279139995574951, 2.537526845932007, 9.96740436553955, 17.7558650970459, 0.741157591342926], step: 270400, lr: 9.61628146140899e-05 2023-03-18 06:47:25,245 44k INFO Saving model and optimizer state at iteration 314 to ./logs\44k\G_270400.pth 2023-03-18 06:47:30,059 44k INFO Saving model and optimizer state at iteration 314 to ./logs\44k\D_270400.pth 2023-03-18 06:50:00,577 44k INFO Train Epoch: 314 [92%] 2023-03-18 06:50:00,577 44k INFO Losses: [2.51698899269104, 2.0432143211364746, 10.368651390075684, 17.500835418701172, 1.0312360525131226], step: 270600, lr: 9.61628146140899e-05 2023-03-18 06:50:50,319 44k INFO ====> Epoch: 314, cost 659.26 s 2023-03-18 06:52:39,997 44k INFO Train Epoch: 315 [15%] 2023-03-18 06:52:39,997 44k INFO Losses: [2.342859983444214, 2.2450852394104004, 8.548311233520508, 16.369245529174805, 0.7928121089935303], step: 270800, lr: 9.615079426226314e-05 2023-03-18 06:55:07,531 44k INFO Train Epoch: 315 [39%] 2023-03-18 06:55:07,532 44k INFO Losses: [2.4611191749572754, 2.2540085315704346, 9.69759464263916, 19.401519775390625, 0.8974986672401428], step: 271000, lr: 9.615079426226314e-05 2023-03-18 06:57:35,037 44k INFO Train Epoch: 315 [62%] 2023-03-18 06:57:35,037 44k INFO Losses: [2.686811923980713, 2.247985601425171, 8.80577278137207, 15.877537727355957, 0.9192924499511719], step: 271200, lr: 9.615079426226314e-05 2023-03-18 06:57:39,070 44k INFO Saving model and optimizer state at iteration 315 to ./logs\44k\G_271200.pth 2023-03-18 06:57:42,908 44k INFO Saving model and optimizer state at iteration 315 to ./logs\44k\D_271200.pth 2023-03-18 07:00:13,686 44k INFO Train Epoch: 315 [85%] 2023-03-18 07:00:13,686 44k INFO Losses: [2.5415196418762207, 2.095155954360962, 9.795415878295898, 18.812421798706055, 1.3039473295211792], step: 271400, lr: 9.615079426226314e-05 2023-03-18 07:01:48,903 44k INFO ====> Epoch: 315, cost 658.58 s 2023-03-18 07:02:53,056 44k INFO Train Epoch: 316 [8%] 2023-03-18 07:02:53,057 44k INFO Losses: [2.6571664810180664, 2.3366644382476807, 8.901527404785156, 15.901288032531738, 1.1262719631195068], step: 271600, lr: 9.613877541298036e-05 2023-03-18 07:05:20,723 44k INFO Train Epoch: 316 [31%] 2023-03-18 07:05:20,724 44k INFO Losses: [2.5765304565429688, 2.2427427768707275, 7.733003616333008, 16.872150421142578, 0.6651782989501953], step: 271800, lr: 9.613877541298036e-05 2023-03-18 07:07:48,198 44k INFO Train Epoch: 316 [55%] 2023-03-18 07:07:48,199 44k INFO Losses: [2.6479852199554443, 2.388261079788208, 7.606923580169678, 18.029218673706055, 1.1819379329681396], step: 272000, lr: 9.613877541298036e-05 2023-03-18 07:07:52,276 44k INFO Saving model and optimizer state at iteration 316 to ./logs\44k\G_272000.pth 2023-03-18 07:07:56,728 44k INFO Saving model and 
optimizer state at iteration 316 to ./logs\44k\D_272000.pth 2023-03-18 07:10:27,388 44k INFO Train Epoch: 316 [78%] 2023-03-18 07:10:27,388 44k INFO Losses: [2.686203718185425, 1.9353140592575073, 6.962953090667725, 14.296473503112793, 1.3767166137695312], step: 272200, lr: 9.613877541298036e-05 2023-03-18 07:12:48,426 44k INFO ====> Epoch: 316, cost 659.52 s 2023-03-18 07:13:06,827 44k INFO Train Epoch: 317 [1%] 2023-03-18 07:13:06,828 44k INFO Losses: [2.5326976776123047, 2.303579092025757, 6.238705158233643, 12.565064430236816, 1.1378686428070068], step: 272400, lr: 9.612675806605373e-05 2023-03-18 07:15:34,637 44k INFO Train Epoch: 317 [24%] 2023-03-18 07:15:34,637 44k INFO Losses: [2.3748154640197754, 2.3105387687683105, 11.695099830627441, 18.7307071685791, 0.7025766372680664], step: 272600, lr: 9.612675806605373e-05 2023-03-18 07:18:02,027 44k INFO Train Epoch: 317 [47%] 2023-03-18 07:18:02,027 44k INFO Losses: [2.48412823677063, 2.149448871612549, 9.06778621673584, 16.94638442993164, 1.0147366523742676], step: 272800, lr: 9.612675806605373e-05 2023-03-18 07:18:05,987 44k INFO Saving model and optimizer state at iteration 317 to ./logs\44k\G_272800.pth 2023-03-18 07:18:10,630 44k INFO Saving model and optimizer state at iteration 317 to ./logs\44k\D_272800.pth 2023-03-18 07:20:41,364 44k INFO Train Epoch: 317 [71%] 2023-03-18 07:20:41,364 44k INFO Losses: [2.5751044750213623, 2.200544834136963, 11.964326858520508, 17.177167892456055, 0.9208069443702698], step: 273000, lr: 9.612675806605373e-05 2023-03-18 07:23:08,860 44k INFO Train Epoch: 317 [94%] 2023-03-18 07:23:08,860 44k INFO Losses: [2.6879220008850098, 2.0595178604125977, 6.423506736755371, 10.946208953857422, 0.9790401458740234], step: 273200, lr: 9.612675806605373e-05 2023-03-18 07:23:48,261 44k INFO ====> Epoch: 317, cost 659.83 s 2023-03-18 07:25:48,466 44k INFO Train Epoch: 318 [17%] 2023-03-18 07:25:48,466 44k INFO Losses: [2.547783613204956, 2.1393702030181885, 8.648958206176758, 18.528535842895508, 0.8159936666488647], step: 273400, lr: 9.611474222129547e-05 2023-03-18 07:28:16,030 44k INFO Train Epoch: 318 [40%] 2023-03-18 07:28:16,030 44k INFO Losses: [2.7921805381774902, 1.7731854915618896, 7.392936706542969, 13.876019477844238, 1.010467767715454], step: 273600, lr: 9.611474222129547e-05 2023-03-18 07:28:19,933 44k INFO Saving model and optimizer state at iteration 318 to ./logs\44k\G_273600.pth 2023-03-18 07:28:24,016 44k INFO Saving model and optimizer state at iteration 318 to ./logs\44k\D_273600.pth 2023-03-18 07:30:54,771 44k INFO Train Epoch: 318 [63%] 2023-03-18 07:30:54,771 44k INFO Losses: [2.3806257247924805, 2.3219380378723145, 13.784828186035156, 18.572214126586914, 1.102874517440796], step: 273800, lr: 9.611474222129547e-05 2023-03-18 07:33:22,445 44k INFO Train Epoch: 318 [87%] 2023-03-18 07:33:22,445 44k INFO Losses: [2.6248550415039062, 2.1272361278533936, 9.721010208129883, 16.037927627563477, 0.7361060976982117], step: 274000, lr: 9.611474222129547e-05 2023-03-18 07:34:47,427 44k INFO ====> Epoch: 318, cost 659.17 s 2023-03-18 07:36:01,926 44k INFO Train Epoch: 319 [10%] 2023-03-18 07:36:01,926 44k INFO Losses: [2.546029806137085, 2.1940701007843018, 10.493453025817871, 17.49419403076172, 0.7899893522262573], step: 274200, lr: 9.61027278785178e-05 2023-03-18 07:38:29,710 44k INFO Train Epoch: 319 [33%] 2023-03-18 07:38:29,710 44k INFO Losses: [2.2725448608398438, 2.3715670108795166, 13.042584419250488, 20.809213638305664, 0.836977481842041], step: 274400, lr: 9.61027278785178e-05 2023-03-18 
2023-03-18 07:36:01,926 44k INFO Train Epoch: 319 [10%]
2023-03-18 07:36:01,926 44k INFO Losses: [2.546029806137085, 2.1940701007843018, 10.493453025817871, 17.49419403076172, 0.7899893522262573], step: 274200, lr: 9.61027278785178e-05
2023-03-18 07:38:29,710 44k INFO Train Epoch: 319 [33%]
2023-03-18 07:38:29,710 44k INFO Losses: [2.2725448608398438, 2.3715670108795166, 13.042584419250488, 20.809213638305664, 0.836977481842041], step: 274400, lr: 9.61027278785178e-05
2023-03-18 07:38:33,624 44k INFO Saving model and optimizer state at iteration 319 to ./logs\44k\G_274400.pth
2023-03-18 07:38:37,669 44k INFO Saving model and optimizer state at iteration 319 to ./logs\44k\D_274400.pth
2023-03-18 07:41:08,333 44k INFO Train Epoch: 319 [56%]
2023-03-18 07:41:08,334 44k INFO Losses: [2.2444846630096436, 2.764660358428955, 10.590850830078125, 16.10074234008789, 0.9250320196151733], step: 274600, lr: 9.61027278785178e-05
2023-03-18 07:43:36,037 44k INFO Train Epoch: 319 [79%]
2023-03-18 07:43:36,037 44k INFO Losses: [2.546415090560913, 2.2246737480163574, 5.931961536407471, 10.070152282714844, 1.1409000158309937], step: 274800, lr: 9.61027278785178e-05
2023-03-18 07:45:46,885 44k INFO ====> Epoch: 319, cost 659.46 s
2023-03-18 07:46:15,600 44k INFO Train Epoch: 320 [3%]
2023-03-18 07:46:15,600 44k INFO Losses: [2.728675365447998, 2.2572073936462402, 8.931364059448242, 18.13987922668457, 1.1328171491622925], step: 275000, lr: 9.609071503753299e-05
2023-03-18 07:48:43,604 44k INFO Train Epoch: 320 [26%]
2023-03-18 07:48:43,604 44k INFO Losses: [2.3988640308380127, 2.228996515274048, 11.08401870727539, 17.023134231567383, 0.5926072001457214], step: 275200, lr: 9.609071503753299e-05
2023-03-18 07:48:47,532 44k INFO Saving model and optimizer state at iteration 320 to ./logs\44k\G_275200.pth
2023-03-18 07:48:52,185 44k INFO Saving model and optimizer state at iteration 320 to ./logs\44k\D_275200.pth
2023-03-18 07:51:22,695 44k INFO Train Epoch: 320 [49%]
2023-03-18 07:51:22,696 44k INFO Losses: [2.4196665287017822, 2.5298593044281006, 8.654524803161621, 18.809873580932617, 1.2748481035232544], step: 275400, lr: 9.609071503753299e-05
2023-03-18 07:53:50,479 44k INFO Train Epoch: 320 [72%]
2023-03-18 07:53:50,480 44k INFO Losses: [2.3491835594177246, 2.3058905601501465, 11.824094772338867, 20.391620635986328, 1.3039836883544922], step: 275600, lr: 9.609071503753299e-05
2023-03-18 07:56:18,055 44k INFO Train Epoch: 320 [95%]
2023-03-18 07:56:18,055 44k INFO Losses: [2.5563530921936035, 1.9614613056182861, 9.803213119506836, 14.67662239074707, 1.078718662261963], step: 275800, lr: 9.609071503753299e-05
2023-03-18 07:56:47,277 44k INFO ====> Epoch: 320, cost 660.39 s
2023-03-18 07:58:57,895 44k INFO Train Epoch: 321 [19%]
2023-03-18 07:58:57,895 44k INFO Losses: [2.508171558380127, 2.2600724697113037, 10.408652305603027, 17.738929748535156, 0.9465076327323914], step: 276000, lr: 9.60787036981533e-05
2023-03-18 07:59:01,897 44k INFO Saving model and optimizer state at iteration 321 to ./logs\44k\G_276000.pth
2023-03-18 07:59:05,861 44k INFO Saving model and optimizer state at iteration 321 to ./logs\44k\D_276000.pth
2023-03-18 08:01:36,471 44k INFO Train Epoch: 321 [42%]
2023-03-18 08:01:36,471 44k INFO Losses: [2.5952138900756836, 2.1093602180480957, 3.809008836746216, 9.643705368041992, 0.5957464575767517], step: 276200, lr: 9.60787036981533e-05
2023-03-18 08:04:04,250 44k INFO Train Epoch: 321 [65%]
2023-03-18 08:04:04,251 44k INFO Losses: [2.662329912185669, 2.21785306930542, 7.61326265335083, 15.802631378173828, 1.0650838613510132], step: 276400, lr: 9.60787036981533e-05
2023-03-18 08:06:32,066 44k INFO Train Epoch: 321 [88%]
2023-03-18 08:06:32,067 44k INFO Losses: [2.831681966781616, 1.682658314704895, 8.098605155944824, 15.449668884277344, 0.9008838534355164], step: 276600, lr: 9.60787036981533e-05
2023-03-18 08:07:46,746 44k INFO ====> Epoch: 321, cost 659.47 s
2023-03-18 08:09:11,460 44k INFO Train Epoch: 322 [11%]
2023-03-18 08:09:11,461 44k INFO Losses: [2.6923325061798096, 2.2296533584594727, 10.024149894714355, 13.717074394226074, 1.0104705095291138], step: 276800, lr: 9.606669386019102e-05
2023-03-18 08:09:15,426 44k INFO Saving model and optimizer state at iteration 322 to ./logs\44k\G_276800.pth
2023-03-18 08:09:20,215 44k INFO Saving model and optimizer state at iteration 322 to ./logs\44k\D_276800.pth
2023-03-18 08:11:50,774 44k INFO Train Epoch: 322 [35%]
2023-03-18 08:11:50,776 44k INFO Losses: [2.5267746448516846, 2.5543129444122314, 10.339533805847168, 16.17167091369629, 1.1873149871826172], step: 277000, lr: 9.606669386019102e-05
2023-03-18 08:14:18,306 44k INFO Train Epoch: 322 [58%]
2023-03-18 08:14:18,306 44k INFO Losses: [2.518968105316162, 2.320237159729004, 9.355047225952148, 13.542987823486328, 1.0677167177200317], step: 277200, lr: 9.606669386019102e-05
2023-03-18 08:16:46,149 44k INFO Train Epoch: 322 [81%]
2023-03-18 08:16:46,150 44k INFO Losses: [2.576047658920288, 2.251216173171997, 9.94331169128418, 17.77480697631836, 1.2582606077194214], step: 277400, lr: 9.606669386019102e-05
2023-03-18 08:18:46,597 44k INFO ====> Epoch: 322, cost 659.85 s
2023-03-18 08:19:25,624 44k INFO Train Epoch: 323 [4%]
2023-03-18 08:19:25,624 44k INFO Losses: [2.4835031032562256, 2.1503186225891113, 9.337934494018555, 16.776960372924805, 0.8337710499763489], step: 277600, lr: 9.60546855234585e-05
2023-03-18 08:19:29,507 44k INFO Saving model and optimizer state at iteration 323 to ./logs\44k\G_277600.pth
2023-03-18 08:19:33,965 44k INFO Saving model and optimizer state at iteration 323 to ./logs\44k\D_277600.pth
2023-03-18 08:22:04,758 44k INFO Train Epoch: 323 [27%]
2023-03-18 08:22:04,758 44k INFO Losses: [2.580428123474121, 2.1646764278411865, 8.061330795288086, 11.798736572265625, 1.2926079034805298], step: 277800, lr: 9.60546855234585e-05
2023-03-18 08:24:32,491 44k INFO Train Epoch: 323 [51%]
2023-03-18 08:24:32,492 44k INFO Losses: [2.5208816528320312, 1.9807114601135254, 9.226201057434082, 15.461124420166016, 1.0128182172775269], step: 278000, lr: 9.60546855234585e-05
2023-03-18 08:27:00,322 44k INFO Train Epoch: 323 [74%]
2023-03-18 08:27:00,323 44k INFO Losses: [2.6355042457580566, 2.1355929374694824, 6.368625640869141, 13.729961395263672, 0.8463397026062012], step: 278200, lr: 9.60546855234585e-05
2023-03-18 08:29:28,291 44k INFO Train Epoch: 323 [97%]
2023-03-18 08:29:28,291 44k INFO Losses: [2.4853315353393555, 2.52097225189209, 9.56402587890625, 16.75349235534668, 1.0618528127670288], step: 278400, lr: 9.60546855234585e-05
2023-03-18 08:29:32,218 44k INFO Saving model and optimizer state at iteration 323 to ./logs\44k\G_278400.pth
2023-03-18 08:29:36,907 44k INFO Saving model and optimizer state at iteration 323 to ./logs\44k\D_278400.pth
2023-03-18 08:29:58,718 44k INFO ====> Epoch: 323, cost 672.12 s
2023-03-18 08:32:19,657 44k INFO Train Epoch: 324 [20%]
2023-03-18 08:32:19,657 44k INFO Losses: [2.233400344848633, 2.7160592079162598, 7.283876895904541, 13.404086112976074, 1.1131935119628906], step: 278600, lr: 9.604267868776807e-05
2023-03-18 08:34:47,558 44k INFO Train Epoch: 324 [43%]
2023-03-18 08:34:47,558 44k INFO Losses: [2.547974109649658, 2.2565085887908936, 11.914340019226074, 18.46012306213379, 1.2587941884994507], step: 278800, lr: 9.604267868776807e-05
2023-03-18 08:37:15,731 44k INFO Train Epoch: 324 [67%]
2023-03-18 08:37:15,732 44k INFO Losses: [2.5863428115844727, 2.4018666744232178, 9.759736061096191, 17.818029403686523, 1.2515673637390137], step: 279000, lr: 9.604267868776807e-05
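Because every "Losses:" line above has the same shape, the loss curves can be recovered from this log with a single regular expression. A sketch, where the file path is illustrative and the mapping of the five values to discriminator, generator, feature-matching, mel, and KL terms is the usual VITS-style ordering, assumed rather than stated in the log itself:

import re

# Sketch: extract (step, losses, lr) from lines shaped like the ones above.
# The five loss values are assumed (not stated in the log) to follow the
# usual VITS-style ordering: [disc, gen, feature_matching, mel, kl].
LOSS_RE = re.compile(r"INFO Losses: \[([^\]]+)\], step: (\d+), lr: ([0-9.e+-]+)")

def parse_losses(path):
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = LOSS_RE.search(line)
            if m:
                losses = [float(v) for v in m.group(1).split(",")]
                records.append((int(m.group(2)), losses, float(m.group(3))))
    return records

# Illustrative filename; this run writes under ./logs/44k
# records = parse_losses("./logs/44k/train.log")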
2023-03-18 08:39:43,484 44k INFO Train Epoch: 324 [90%]
2023-03-18 08:39:43,485 44k INFO Losses: [2.3557634353637695, 2.7979183197021484, 6.7381463050842285, 13.946563720703125, 0.9034103155136108], step: 279200, lr: 9.604267868776807e-05
2023-03-18 08:39:47,556 44k INFO Saving model and optimizer state at iteration 324 to ./logs\44k\G_279200.pth
2023-03-18 08:39:51,890 44k INFO Saving model and optimizer state at iteration 324 to ./logs\44k\D_279200.pth
2023-03-18 08:40:59,238 44k INFO ====> Epoch: 324, cost 660.52 s
2023-03-18 08:42:34,297 44k INFO Train Epoch: 325 [13%]
2023-03-18 08:42:34,297 44k INFO Losses: [2.491992235183716, 1.9747084379196167, 12.676562309265137, 19.408235549926758, 0.9166529178619385], step: 279400, lr: 9.603067335293209e-05
2023-03-18 08:45:02,151 44k INFO Train Epoch: 325 [36%]
2023-03-18 08:45:02,152 44k INFO Losses: [2.7492189407348633, 1.9642871618270874, 5.577601909637451, 12.19131088256836, 1.0945619344711304], step: 279600, lr: 9.603067335293209e-05
2023-03-18 08:47:29,751 44k INFO Train Epoch: 325 [59%]
2023-03-18 08:47:29,752 44k INFO Losses: [2.570835590362549, 2.3239474296569824, 8.030308723449707, 17.991483688354492, 0.8474574089050293], step: 279800, lr: 9.603067335293209e-05
2023-03-18 08:49:57,570 44k INFO Train Epoch: 325 [83%]
2023-03-18 08:49:57,571 44k INFO Losses: [2.745425224304199, 1.9224275350570679, 7.8224663734436035, 11.380912780761719, 1.0132094621658325], step: 280000, lr: 9.603067335293209e-05
2023-03-18 08:50:01,491 44k INFO Saving model and optimizer state at iteration 325 to ./logs\44k\G_280000.pth
2023-03-18 08:50:05,881 44k INFO Saving model and optimizer state at iteration 325 to ./logs\44k\D_280000.pth
2023-03-18 08:51:59,170 44k INFO ====> Epoch: 325, cost 659.93 s
2023-03-18 08:52:48,459 44k INFO Train Epoch: 326 [6%]
2023-03-18 08:52:48,459 44k INFO Losses: [2.4319353103637695, 2.4637067317962646, 10.53923225402832, 14.371603012084961, 0.6440317034721375], step: 280200, lr: 9.601866951876297e-05
2023-03-18 08:55:16,265 44k INFO Train Epoch: 326 [29%]
2023-03-18 08:55:16,266 44k INFO Losses: [2.3411877155303955, 2.148983955383301, 13.519930839538574, 19.19933319091797, 1.3507078886032104], step: 280400, lr: 9.601866951876297e-05
2023-03-18 08:57:43,845 44k INFO Train Epoch: 326 [52%]
2023-03-18 08:57:43,846 44k INFO Losses: [2.325678825378418, 2.4451324939727783, 12.85615348815918, 19.431127548217773, 0.7815988659858704], step: 280600, lr: 9.601866951876297e-05
2023-03-18 09:00:11,826 44k INFO Train Epoch: 326 [75%]
2023-03-18 09:00:11,827 44k INFO Losses: [2.6149098873138428, 2.305278778076172, 7.439381122589111, 15.5255765914917, 1.2067276239395142], step: 280800, lr: 9.601866951876297e-05
2023-03-18 09:00:15,733 44k INFO Saving model and optimizer state at iteration 326 to ./logs\44k\G_280800.pth
2023-03-18 09:00:20,392 44k INFO Saving model and optimizer state at iteration 326 to ./logs\44k\D_280800.pth
2023-03-18 09:02:51,178 44k INFO Train Epoch: 326 [99%]
2023-03-18 09:02:51,179 44k INFO Losses: [2.357630729675293, 2.2054483890533447, 11.547953605651855, 19.205400466918945, 0.8895686268806458], step: 281000, lr: 9.601866951876297e-05
2023-03-18 09:02:59,631 44k INFO ====> Epoch: 326, cost 660.46 s
2023-03-18 09:05:30,831 44k INFO Train Epoch: 327 [22%]
2023-03-18 09:05:30,832 44k INFO Losses: [2.6111323833465576, 2.25978422164917, 5.9079999923706055, 10.335874557495117, 1.3635724782943726], step: 281200, lr: 9.600666718507311e-05
2023-03-18 09:07:58,345 44k INFO Train Epoch: 327 [45%]
2023-03-18 09:07:58,345 44k INFO Losses: [2.544679641723633, 2.0901336669921875, 9.185465812683105, 17.728313446044922, 1.0348471403121948], step: 281400, lr: 9.600666718507311e-05
2023-03-18 09:10:26,155 44k INFO Train Epoch: 327 [68%]
2023-03-18 09:10:26,155 44k INFO Losses: [2.499547004699707, 2.2974140644073486, 10.334474563598633, 16.34430694580078, 1.0515824556350708], step: 281600, lr: 9.600666718507311e-05
2023-03-18 09:10:30,173 44k INFO Saving model and optimizer state at iteration 327 to ./logs\44k\G_281600.pth
2023-03-18 09:10:34,245 44k INFO Saving model and optimizer state at iteration 327 to ./logs\44k\D_281600.pth
2023-03-18 09:13:04,974 44k INFO Train Epoch: 327 [91%]
2023-03-18 09:13:04,974 44k INFO Losses: [2.636843681335449, 1.9041472673416138, 11.17850112915039, 19.206520080566406, 1.111440896987915], step: 281800, lr: 9.600666718507311e-05
2023-03-18 09:13:59,306 44k INFO ====> Epoch: 327, cost 659.68 s
2023-03-18 09:15:44,829 44k INFO Train Epoch: 328 [15%]
2023-03-18 09:15:44,829 44k INFO Losses: [2.5560760498046875, 2.008676528930664, 8.671327590942383, 17.28314971923828, 0.7732677459716797], step: 282000, lr: 9.599466635167497e-05
2023-03-18 09:18:12,705 44k INFO Train Epoch: 328 [38%]
2023-03-18 09:18:12,705 44k INFO Losses: [2.65374755859375, 2.1730566024780273, 9.563102722167969, 19.89597511291504, 0.9160358905792236], step: 282200, lr: 9.599466635167497e-05
2023-03-18 09:20:40,295 44k INFO Train Epoch: 328 [61%]
2023-03-18 09:20:40,296 44k INFO Losses: [2.1960153579711914, 2.7633628845214844, 8.144352912902832, 15.724239349365234, 1.0174338817596436], step: 282400, lr: 9.599466635167497e-05
2023-03-18 09:20:44,277 44k INFO Saving model and optimizer state at iteration 328 to ./logs\44k\G_282400.pth
2023-03-18 09:20:48,867 44k INFO Saving model and optimizer state at iteration 328 to ./logs\44k\D_282400.pth
2023-03-18 09:23:19,534 44k INFO Train Epoch: 328 [84%]
2023-03-18 09:23:19,535 44k INFO Losses: [2.511577844619751, 2.0625271797180176, 9.669984817504883, 15.665867805480957, 0.4099865257740021], step: 282600, lr: 9.599466635167497e-05
2023-03-18 09:24:59,352 44k INFO ====> Epoch: 328, cost 660.05 s
2023-03-18 09:25:59,099 44k INFO Train Epoch: 329 [7%]
2023-03-18 09:25:59,099 44k INFO Losses: [2.743377923965454, 1.9238405227661133, 5.792948246002197, 12.03875732421875, 1.1506903171539307], step: 282800, lr: 9.5982667018381e-05
2023-03-18 09:28:26,981 44k INFO Train Epoch: 329 [31%]
2023-03-18 09:28:26,982 44k INFO Losses: [2.334460496902466, 2.545567512512207, 11.853559494018555, 18.159034729003906, 1.0861660242080688], step: 283000, lr: 9.5982667018381e-05
2023-03-18 09:30:54,557 44k INFO Train Epoch: 329 [54%]
2023-03-18 09:30:54,557 44k INFO Losses: [2.891441822052002, 1.9233959913253784, 1.7911899089813232, 5.5877532958984375, 0.6825445294380188], step: 283200, lr: 9.5982667018381e-05
2023-03-18 09:30:58,533 44k INFO Saving model and optimizer state at iteration 329 to ./logs\44k\G_283200.pth
2023-03-18 09:31:03,121 44k INFO Saving model and optimizer state at iteration 329 to ./logs\44k\D_283200.pth
2023-03-18 09:33:33,869 44k INFO Train Epoch: 329 [77%]
2023-03-18 09:33:33,870 44k INFO Losses: [2.5869953632354736, 1.9339605569839478, 9.789183616638184, 15.530755043029785, 1.2580692768096924], step: 283400, lr: 9.5982667018381e-05
2023-03-18 09:35:59,585 44k INFO ====> Epoch: 329, cost 660.23 s
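A new G_/D_ checkpoint pair lands every 800 steps in this stretch (G_274400.pth through G_283200.pth), roughly one pair every ten minutes at the epoch pace shown, and the log reports no pruning, so checkpoints accumulate. If disk space matters, a small retention helper can cap the count; this is a hypothetical utility sketch, not part of the training script:

import re
from pathlib import Path

# Hypothetical retention helper (not part of the training script): keep only
# the newest `keep` checkpoint pairs named G_<step>.pth / D_<step>.pth, as in
# the saves above. keep=0 prunes nothing.
STEP_RE = re.compile(r"^G_(\d+)\.pth$")

def prune_checkpoints(log_dir, keep=4):
    steps = sorted(
        int(m.group(1))
        for p in Path(log_dir).iterdir()
        if (m := STEP_RE.match(p.name))
    )
    for step in steps[:-keep] if keep else []:
        for prefix in ("G", "D"):
            (Path(log_dir) / f"{prefix}_{step}.pth").unlink(missing_ok=True)

# e.g. prune_checkpoints("./logs/44k", keep=4)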
2023-03-18 09:36:13,731 44k INFO Train Epoch: 330 [0%]
2023-03-18 09:36:13,731 44k INFO Losses: [2.795435905456543, 2.1647047996520996, 8.469358444213867, 17.582021713256836, 1.060906171798706], step: 283600, lr: 9.59706691850037e-05
2023-03-18 09:38:41,613 44k INFO Train Epoch: 330 [23%]
2023-03-18 09:38:41,613 44k INFO Losses: [2.8169407844543457, 2.0460147857666016, 5.327569961547852, 13.87583065032959, 0.9841699600219727], step: 283800, lr: 9.59706691850037e-05
2023-03-18 09:41:09,232 44k INFO Train Epoch: 330 [47%]
2023-03-18 09:41:09,232 44k INFO Losses: [2.6672468185424805, 2.6168313026428223, 6.545163631439209, 13.061455726623535, 1.134773850440979], step: 284000, lr: 9.59706691850037e-05
2023-03-18 09:41:13,147 44k INFO Saving model and optimizer state at iteration 330 to ./logs\44k\G_284000.pth
2023-03-18 09:41:17,345 44k INFO Saving model and optimizer state at iteration 330 to ./logs\44k\D_284000.pth
2023-03-18 09:43:48,243 44k INFO Train Epoch: 330 [70%]
2023-03-18 09:43:48,244 44k INFO Losses: [2.5500359535217285, 2.2078211307525635, 9.243863105773926, 14.356284141540527, 0.7846471667289734], step: 284200, lr: 9.59706691850037e-05
2023-03-18 09:46:16,069 44k INFO Train Epoch: 330 [93%]
2023-03-18 09:46:16,069 44k INFO Losses: [2.8939805030822754, 1.8296144008636475, 8.628681182861328, 18.028385162353516, 1.133097767829895], step: 284400, lr: 9.59706691850037e-05
2023-03-18 09:46:59,995 44k INFO ====> Epoch: 330, cost 660.41 s
2023-03-18 09:48:55,893 44k INFO Train Epoch: 331 [16%]
2023-03-18 09:48:55,893 44k INFO Losses: [2.469813346862793, 2.19755220413208, 9.580041885375977, 18.14786148071289, 0.9665874242782593], step: 284600, lr: 9.595867285135558e-05
2023-03-18 09:51:23,656 44k INFO Train Epoch: 331 [39%]
2023-03-18 09:51:23,657 44k INFO Losses: [2.6195995807647705, 2.2376623153686523, 11.163531303405762, 14.558220863342285, 0.8257871866226196], step: 284800, lr: 9.595867285135558e-05
2023-03-18 09:51:27,586 44k INFO Saving model and optimizer state at iteration 331 to ./logs\44k\G_284800.pth
2023-03-18 09:51:32,234 44k INFO Saving model and optimizer state at iteration 331 to ./logs\44k\D_284800.pth
2023-03-18 09:54:02,929 44k INFO Train Epoch: 331 [63%]
2023-03-18 09:54:02,930 44k INFO Losses: [2.7047035694122314, 1.9615497589111328, 8.73831844329834, 15.725759506225586, 0.8883121013641357], step: 285000, lr: 9.595867285135558e-05
2023-03-18 09:56:30,806 44k INFO Train Epoch: 331 [86%]
2023-03-18 09:56:30,806 44k INFO Losses: [2.3768222332000732, 2.426593542098999, 12.883336067199707, 19.810001373291016, 0.9701721668243408], step: 285200, lr: 9.595867285135558e-05
2023-03-18 09:58:00,263 44k INFO ====> Epoch: 331, cost 660.27 s
2023-03-18 09:59:10,416 44k INFO Train Epoch: 332 [9%]
2023-03-18 09:59:10,416 44k INFO Losses: [2.429840564727783, 2.116179943084717, 10.086014747619629, 15.455928802490234, 1.133942723274231], step: 285400, lr: 9.594667801724916e-05
2023-03-18 10:01:38,174 44k INFO Train Epoch: 332 [32%]
2023-03-18 10:01:38,174 44k INFO Losses: [2.673816680908203, 2.0057425498962402, 9.199610710144043, 16.146955490112305, 1.0834568738937378], step: 285600, lr: 9.594667801724916e-05
2023-03-18 10:01:42,256 44k INFO Saving model and optimizer state at iteration 332 to ./logs\44k\G_285600.pth
2023-03-18 10:01:46,717 44k INFO Saving model and optimizer state at iteration 332 to ./logs\44k\D_285600.pth
2023-03-18 10:04:17,228 44k INFO Train Epoch: 332 [55%]
2023-03-18 10:04:17,228 44k INFO Losses: [2.6043355464935303, 2.144092082977295, 9.083649635314941, 14.594070434570312, 1.0941365957260132], step: 285800, lr: 9.594667801724916e-05
2023-03-18 10:06:45,096 44k INFO Train Epoch: 332 [79%]
2023-03-18 10:06:45,097 44k INFO Losses: [2.528440475463867, 2.071992874145508, 8.555462837219238, 17.601764678955078, 1.3826420307159424], step: 286000, lr: 9.594667801724916e-05
2023-03-18 10:09:00,447 44k INFO ====> Epoch: 332, cost 660.18 s
2023-03-18 10:09:24,792 44k INFO Train Epoch: 333 [2%]
2023-03-18 10:09:24,792 44k INFO Losses: [2.575242280960083, 2.0271596908569336, 9.235373497009277, 19.232513427734375, 1.4660594463348389], step: 286200, lr: 9.5934684682497e-05
2023-03-18 10:11:52,829 44k INFO Train Epoch: 333 [25%]
2023-03-18 10:11:52,829 44k INFO Losses: [2.6623640060424805, 1.8205416202545166, 7.241111755371094, 15.63725471496582, 1.0331815481185913], step: 286400, lr: 9.5934684682497e-05
2023-03-18 10:11:56,845 44k INFO Saving model and optimizer state at iteration 333 to ./logs\44k\G_286400.pth
2023-03-18 10:12:00,753 44k INFO Saving model and optimizer state at iteration 333 to ./logs\44k\D_286400.pth
2023-03-18 10:14:31,563 44k INFO Train Epoch: 333 [48%]
2023-03-18 10:14:31,564 44k INFO Losses: [2.66127347946167, 2.2915024757385254, 10.032845497131348, 17.104602813720703, 0.7384946346282959], step: 286600, lr: 9.5934684682497e-05
2023-03-18 10:16:59,522 44k INFO Train Epoch: 333 [71%]
2023-03-18 10:16:59,523 44k INFO Losses: [2.8297908306121826, 2.1160888671875, 7.368399620056152, 16.16876792907715, 1.071194052696228], step: 286800, lr: 9.5934684682497e-05
2023-03-18 10:19:27,185 44k INFO Train Epoch: 333 [95%]
2023-03-18 10:19:27,186 44k INFO Losses: [2.676826000213623, 2.308121919631958, 11.661932945251465, 20.03898048400879, 1.3049817085266113], step: 287000, lr: 9.5934684682497e-05
2023-03-18 10:20:00,837 44k INFO ====> Epoch: 333, cost 660.39 s
2023-03-18 10:22:12,355 44k INFO Train Epoch: 334 [18%]
2023-03-18 10:22:12,355 44k INFO Losses: [2.597534418106079, 2.4248087406158447, 10.906956672668457, 16.769493103027344, 1.1287471055984497], step: 287200, lr: 9.592269284691169e-05
2023-03-18 10:22:16,307 44k INFO Saving model and optimizer state at iteration 334 to ./logs\44k\G_287200.pth
2023-03-18 10:22:21,302 44k INFO Saving model and optimizer state at iteration 334 to ./logs\44k\D_287200.pth
2023-03-18 10:24:52,317 44k INFO Train Epoch: 334 [41%]
2023-03-18 10:24:52,317 44k INFO Losses: [2.4362218379974365, 2.3185970783233643, 8.125731468200684, 17.761075973510742, 1.1669495105743408], step: 287400, lr: 9.592269284691169e-05
2023-03-18 10:27:20,519 44k INFO Train Epoch: 334 [64%]
2023-03-18 10:27:20,519 44k INFO Losses: [2.1441380977630615, 2.417452812194824, 13.056865692138672, 19.756023406982422, 1.2793214321136475], step: 287600, lr: 9.592269284691169e-05
2023-03-18 10:29:48,787 44k INFO Train Epoch: 334 [87%]
2023-03-18 10:29:48,787 44k INFO Losses: [2.657801866531372, 1.8146065473556519, 9.568751335144043, 15.181068420410156, 0.7323160171508789], step: 287800, lr: 9.592269284691169e-05
2023-03-18 10:31:08,114 44k INFO ====> Epoch: 334, cost 667.28 s
2023-03-18 10:32:28,552 44k INFO Train Epoch: 335 [11%]
2023-03-18 10:32:28,552 44k INFO Losses: [2.4776241779327393, 2.304107904434204, 9.993059158325195, 19.595151901245117, 1.1850224733352661], step: 288000, lr: 9.591070251030582e-05
2023-03-18 10:32:32,549 44k INFO Saving model and optimizer state at iteration 335 to ./logs\44k\G_288000.pth
2023-03-18 10:32:36,698 44k INFO Saving model and optimizer state at iteration 335 to ./logs\44k\D_288000.pth
2023-03-18 10:35:07,790 44k INFO Train Epoch: 335 [34%]
2023-03-18 10:35:07,790 44k INFO Losses: [2.507491111755371, 2.1465582847595215, 14.215641021728516, 18.69512367248535, 1.1640136241912842], step: 288200, lr: 9.591070251030582e-05
2023-03-18 10:37:35,851 44k INFO Train Epoch: 335 [57%]
2023-03-18 10:37:35,852 44k INFO Losses: [2.6106228828430176, 2.0855088233947754, 7.01536750793457, 13.849371910095215, 1.1104609966278076], step: 288400, lr: 9.591070251030582e-05
2023-03-18 10:40:03,929 44k INFO Train Epoch: 335 [80%]
2023-03-18 10:40:03,929 44k INFO Losses: [2.3944921493530273, 2.3999311923980713, 11.135828018188477, 16.261816024780273, 1.1839202642440796], step: 288600, lr: 9.591070251030582e-05
2023-03-18 10:42:15,419 44k INFO ====> Epoch: 335, cost 667.30 s
2023-03-18 10:42:51,193 44k INFO Train Epoch: 336 [3%]
2023-03-18 10:42:51,194 44k INFO Losses: [2.558197259902954, 2.0467774868011475, 11.335229873657227, 18.046064376831055, 1.3004295825958252], step: 288800, lr: 9.589871367249203e-05
2023-03-18 10:42:55,348 44k INFO Saving model and optimizer state at iteration 336 to ./logs\44k\G_288800.pth
2023-03-18 10:42:59,237 44k INFO Saving model and optimizer state at iteration 336 to ./logs\44k\D_288800.pth
2023-03-18 10:45:47,464 44k INFO Train Epoch: 336 [27%]
2023-03-18 10:45:47,464 44k INFO Losses: [2.5691261291503906, 2.3118207454681396, 7.085758209228516, 14.263884544372559, 0.5161329507827759], step: 289000, lr: 9.589871367249203e-05
2023-03-18 10:48:37,057 44k INFO Train Epoch: 336 [50%]
2023-03-18 10:48:37,057 44k INFO Losses: [2.3967058658599854, 2.283064365386963, 6.349727630615234, 13.807805061340332, 1.2078748941421509], step: 289200, lr: 9.589871367249203e-05
2023-03-18 10:51:15,918 44k INFO Train Epoch: 336 [73%]
2023-03-18 10:51:15,918 44k INFO Losses: [2.3982129096984863, 2.112335681915283, 11.002860069274902, 16.44732093811035, 0.5331320762634277], step: 289400, lr: 9.589871367249203e-05
2023-03-18 10:53:54,958 44k INFO Train Epoch: 336 [96%]
2023-03-18 10:53:54,959 44k INFO Losses: [2.263151168823242, 2.5944318771362305, 9.672632217407227, 14.247118949890137, 0.6979907155036926], step: 289600, lr: 9.589871367249203e-05
2023-03-18 10:53:59,010 44k INFO Saving model and optimizer state at iteration 336 to ./logs\44k\G_289600.pth
2023-03-18 10:54:02,933 44k INFO Saving model and optimizer state at iteration 336 to ./logs\44k\D_289600.pth
2023-03-18 10:54:37,710 44k INFO ====> Epoch: 336, cost 742.29 s
2023-03-18 10:56:57,900 44k INFO Train Epoch: 337 [19%]
2023-03-18 10:56:57,901 44k INFO Losses: [2.621030807495117, 2.16670298576355, 11.464676856994629, 19.042078018188477, 1.0215955972671509], step: 289800, lr: 9.588672633328296e-05
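Epoch cost in this stretch is steady at roughly 659-660 s, with occasional slower epochs (667.28 s, 667.30 s, and 742.29 s for epoch 336), and each epoch advances the global step by roughly 870 (200 logged steps span about 23% of an epoch). Those two figures give a quick wall-clock estimate for reaching any target step; a back-of-envelope sketch, with both constants read off this log rather than measured:

# Back-of-envelope sketch: estimate wall-clock time to a target global step.
# Both constants are read off the log above, not measured programmatically.
SECONDS_PER_EPOCH = 660.0  # typical "====> Epoch: N, cost ..." value
STEPS_PER_EPOCH = 870      # ~200 logged steps per ~23% of an epoch

def hours_to_step(current_step, target_step):
    epochs_left = (target_step - current_step) / STEPS_PER_EPOCH
    return epochs_left * SECONDS_PER_EPOCH / 3600.0

# From step 289800 (epoch 337) to step 400000: about 23 hours.
print(round(hours_to_step(289_800, 400_000), 1))  # ~23.2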