import pandas as pd
import numpy as np
import random
import sys
import pickle
import warnings
from os import path

import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score

warnings.filterwarnings('ignore')
def plot_train_test(model_fit_history):
    """Plot training/validation loss and AUC from a pickled Keras history."""
    if path.exists(model_fit_history):
        history = pickle.load(open(model_fit_history, "rb"))
        plt.figure(figsize=(15, 8))
        plt.subplot(211)
        plt.title('Loss')
        plt.plot(history['loss'], label='train')
        plt.plot(history['val_loss'], label='test')
        plt.legend()
        plt.subplot(212)
        plt.title('AUC')
        plt.plot(history['auc'], label='train')
        plt.plot(history['val_auc'], label='test')
        plt.legend()
        plt.show()
def model_summary(stored_model):
    """Load a saved Keras model, print its summary, and evaluate it on the
    global train/test splits (returns loss, accuracy, AUC per compile metrics)."""
    if path.exists(stored_model):
        model = keras.models.load_model(stored_model)
        model.summary()
        # evaluate the model
        print('Model Evaluation : ')
        _, train_acc, train_auc = model.evaluate(X_train, y_train)
        _, test_acc, test_auc = model.evaluate(X_test, y_test)
        print(' ')
        print('Train: %.3f, Test: %.3f' % (train_auc, test_auc))
def print_logs(filename):
    """Print one summary line per epoch from a saved training log."""
    count = 0
    with open(filename, 'r') as log_file:
        for line in log_file:
            # Epoch summary lines report progress over all 2,160,000 train samples
            if line.strip().find('2160000/2160000') != -1:
                count += 1
                print("Epoch {}: {}".format(count, line.strip()))
This case study is about using neural networks for a classification problem on the Higgs Boson dataset, which contains observations of kinematic properties measured by particle detectors in an accelerator [1]. The Higgs boson is an unusual kind of subatomic particle proposed by the British physicist Peter Higgs [2]. The mainstream media often refer to it as the "God particle," after the publication of Leon Lederman's book The God Particle. These particles are produced by quantum excitation of the Higgs field [2]. In 1964, Peter Higgs proposed a mechanism to explain why some particles have mass; one of the ways a particle can acquire mass is by interacting with the Higgs field, which exists everywhere, much like a gravitational or magnetic field (P. Onyisi, 2013). In 2012, the ATLAS and CMS experiments found a subatomic particle with properties consistent with those predicted by the Higgs mechanism.
The dataset for this case study contains 11 million observations produced using Monte Carlo simulations. It includes observations of signal processes that produce Higgs bosons and background processes that do not, and it is used to distinguish between the two. So this is a signal-vs-background classification problem.
Deep learning is an area of machine learning that attempts to model high-level abstractions present in raw data, to capture the highly varying functions underlying the data, and to make well-generalized predictions on unseen data. This is accomplished through non-linear transformations of the data via deep architectures such as neural networks. Deep learning aims at fulfilling the objective of true artificial intelligence and has recently been of great interest to researchers in machine learning. Tech giants like Google, Microsoft, Facebook and Baidu are investing hundreds of millions of dollars in bleeding-edge deep learning research and its applications [2].
As described in the paper (Baldi, 2014), collisions of high-energy particles are a great source for finding exotic particles; these are basically classification problems and require machine learning. Classical machine learning has limitations in learning non-linear patterns in the data and often requires features to be crafted manually, a process that is quite time-consuming. Deep learning, however, has been found more effective at learning such non-linear structure and at classifying signal vs. background processes than classical machine learning methods, and it does so without manually crafted features.
This field deals with fast, high-energy collisions of particles and with how particles form and decay. The formation and decay of the Higgs boson can be observed and recorded. In the paper "Searching for Exotic Particles in High-Energy Physics with Deep Learning," published in 2014, the authors used deep neural networks to classify the "exotic particles" produced in particle collisions. The Higgs Boson data simulates measurements from the Large Hadron Collider's detectors. In the paper, the authors focused on improving past research that used other machine learning techniques. The deep learning method they used was built with the standard libraries of 2014, and it outperformed the previous methods. The model was implemented using Pylearn2, which is no longer supported.
Objective
Build a neural network model similar to the one implemented in the paper (Baldi, 2014) using TensorFlow to effectively classify signal and background processes (to find the Higgs boson particle), and recommend ways of improving the model published in the paper.
The dataset was obtained from the UCI website [1]. It contains data produced using Monte Carlo simulations and has 11 million observations. Of the 28 features, the first 21 (after the target variable) are low-level kinematic properties measured by the particle detectors, and the remaining 7 are functions of those 21. These are the high-level features described on the UCI website: they were derived by physicists specifically to aid classification, especially with deep learning, which eliminates the need to craft features manually for modeling. The data contains no missing values, as indicated on the website.
Out of 11 million observations, 2.7 million are randomly sampled from the HIGGS.csv file.
# Count the number of records in the file (excluding the header row index),
# then draw a random sample of 2.7 million records from the dataset
random.seed(10)
filename = "data/HIGGS.csv"
with open(filename) as f:
    n = sum(1 for line in f) - 1
s = 2700000
# Indices of rows to skip so that exactly s rows are read
skip = sorted(random.sample(range(1, n + 1), n - s))
higgs_ds = pd.read_csv(filename, header=None, skiprows=skip)
# To load the full 11M-row dataset instead:
# higgs_ds = pd.read_csv(filename, header=None)
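As an aside, a memory-friendlier approximate alternative is to pass a callable to skiprows (a sketch; this assumes a pandas version that accepts a callable there, and the resulting sample size varies slightly around the target):

# Hypothetical variant: keep each row independently with probability p
p = 2700000 / 11000000
higgs_sample = pd.read_csv(filename, header=None,
                           skiprows=lambda i: random.random() > p)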
features = ['target','lepton_pT','lepton_eta','lepton_phi','missing_energy_magnitude','missing_energy_phi',
'jet_1pt','jet_1eta','jet_1phi','jet_1b-tag','jet_2pt','jet_2eta','jet_2phi','jet_2b-tag',
'jet_3pt','jet_3eta','jet_3phi','jet_3b-tag','jet_4pt','jet_4eta','jet_4phi','jet_4b-tag',
'm_jj','m_jjj','m_lv','m_jlv','m_bb','m_wbb','m_wwbb']
higgs_ds.rename(columns=dict(zip(higgs_ds.columns, features)),inplace=True)
higgs_ds
target | lepton_pT | lepton_eta | lepton_phi | missing_energy_magnitude | missing_energy_phi | jet_1pt | jet_1eta | jet_1phi | jet_1b-tag | ... | jet_4eta | jet_4phi | jet_4b-tag | m_jj | m_jjj | m_lv | m_jlv | m_bb | m_wbb | m_wwbb | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1.0 | 0.869293 | -0.635082 | 0.225690 | 0.327470 | -0.689993 | 0.754202 | -0.248573 | -1.092064 | 0.000000 | ... | -0.010455 | -0.045767 | 3.101961 | 1.353760 | 0.979563 | 0.978076 | 0.920005 | 0.721657 | 0.988751 | 0.876678 |
1 | 1.0 | 0.798835 | 1.470639 | -1.635975 | 0.453773 | 0.425629 | 1.104875 | 1.282322 | 1.381664 | 0.000000 | ... | 1.128848 | 0.900461 | 0.000000 | 0.909753 | 1.108330 | 0.985692 | 0.951331 | 0.803252 | 0.865924 | 0.780118 |
2 | 1.0 | 0.945974 | 1.111244 | 1.218337 | 0.907639 | 0.821537 | 1.153243 | -0.365420 | -1.566055 | 0.000000 | ... | -0.451018 | 0.063653 | 3.101961 | 0.829024 | 0.980648 | 0.994360 | 0.908248 | 0.775879 | 0.783311 | 0.725122 |
3 | 1.0 | 1.102447 | 0.426544 | 1.717157 | 0.934302 | 0.775743 | 1.279386 | -0.249563 | -0.926306 | 2.173076 | ... | 1.207966 | -1.150600 | 0.000000 | 0.708635 | 0.521908 | 1.054313 | 1.272654 | 0.834634 | 0.934980 | 0.865305 |
4 | 1.0 | 1.014419 | 0.012607 | -0.484635 | 0.695256 | 1.701171 | 0.597096 | 0.076222 | 0.142635 | 2.173076 | ... | 1.294579 | 0.263977 | 0.000000 | 1.575766 | 1.067265 | 1.071992 | 0.805769 | 1.130206 | 0.838251 | 0.752052 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
2699996 | 0.0 | 0.859594 | 0.750876 | -0.213308 | 0.713165 | 0.905792 | 1.503366 | -0.151531 | -1.068226 | 2.173076 | ... | -1.056481 | -1.586760 | 0.000000 | 0.973016 | 1.059963 | 0.989977 | 0.727738 | 1.057913 | 0.875585 | 0.757173 |
2699997 | 0.0 | 1.379889 | -0.928246 | 1.453043 | 1.249777 | 0.701728 | 1.018581 | -1.090268 | -0.545448 | 2.173076 | ... | -0.398550 | -0.401466 | 0.000000 | 0.827399 | 0.908412 | 1.139007 | 1.326875 | 1.772200 | 1.337061 | 1.038975 |
2699998 | 1.0 | 0.922366 | -0.263026 | -0.533463 | 0.706617 | 1.134827 | 1.180817 | -0.020820 | 0.747459 | 0.000000 | ... | -2.092513 | -0.123455 | 0.000000 | 0.457268 | 0.918168 | 1.115962 | 0.911163 | 0.800597 | 1.015639 | 0.853639 |
2699999 | 1.0 | 1.595473 | 1.246626 | -1.321368 | 0.865705 | 1.532427 | 0.456021 | 1.729906 | -0.394657 | 0.000000 | ... | -2.134987 | 0.522566 | 0.000000 | 0.901468 | 0.786123 | 0.980619 | 1.144889 | 0.692346 | 0.788754 | 0.725130 |
2700000 | 1.0 | 0.700559 | 0.774251 | 1.520182 | 0.847112 | 0.211230 | 1.095531 | 0.052457 | 0.024553 | 2.173076 | ... | 1.585235 | 1.713962 | 0.000000 | 0.337374 | 0.845208 | 0.987610 | 0.883422 | 1.888438 | 1.153766 | 0.931279 |
2700001 rows × 29 columns
higgs_ds.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2700001 entries, 0 to 2700000
Data columns (total 29 columns):
 #   Column                    Dtype
---  ------                    -----
 0   target                    float64
 1   lepton_pT                 float64
 2   lepton_eta                float64
 3   lepton_phi                float64
 4   missing_energy_magnitude  float64
 5   missing_energy_phi        float64
 6   jet_1pt                   float64
 7   jet_1eta                  float64
 8   jet_1phi                  float64
 9   jet_1b-tag                float64
 10  jet_2pt                   float64
 11  jet_2eta                  float64
 12  jet_2phi                  float64
 13  jet_2b-tag                float64
 14  jet_3pt                   float64
 15  jet_3eta                  float64
 16  jet_3phi                  float64
 17  jet_3b-tag                float64
 18  jet_4pt                   float64
 19  jet_4eta                  float64
 20  jet_4phi                  float64
 21  jet_4b-tag                float64
 22  m_jj                      float64
 23  m_jjj                     float64
 24  m_lv                      float64
 25  m_jlv                     float64
 26  m_bb                      float64
 27  m_wbb                     float64
 28  m_wwbb                    float64
dtypes: float64(29)
memory usage: 597.4 MB
higgs_ds.describe()
target | lepton_pT | lepton_eta | lepton_phi | missing_energy_magnitude | missing_energy_phi | jet_1pt | jet_1eta | jet_1phi | jet_1b-tag | ... | jet_4eta | jet_4phi | jet_4b-tag | m_jj | m_jjj | m_lv | m_jlv | m_bb | m_wbb | m_wwbb | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | ... | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 | 2.700001e+06 |
mean | 5.297354e-01 | 9.912009e-01 | 1.954370e-04 | -7.047962e-04 | 9.991829e-01 | 1.148560e-03 | 9.909048e-01 | -4.237584e-05 | 1.133937e-04 | 1.000003e+00 | ... | 5.290199e-04 | 3.115851e-04 | 9.986060e-01 | 1.034535e+00 | 1.024836e+00 | 1.050533e+00 | 1.009803e+00 | 9.730135e-01 | 1.033091e+00 | 9.598680e-01 |
std | 4.991151e-01 | 5.652594e-01 | 1.008588e+00 | 1.006442e+00 | 6.009023e-01 | 1.006664e+00 | 4.749358e-01 | 1.009384e+00 | 1.005909e+00 | 1.027669e+00 | ... | 1.007802e+00 | 1.006712e+00 | 1.399785e+00 | 6.767933e-01 | 3.816140e-01 | 1.646220e-01 | 3.978606e-01 | 5.250037e-01 | 3.655881e-01 | 3.135694e-01 |
min | 0.000000e+00 | 2.746966e-01 | -2.434976e+00 | -1.742508e+00 | 2.370088e-04 | -1.743944e+00 | 1.375940e-01 | -2.969725e+00 | -1.741237e+00 | 0.000000e+00 | ... | -2.497265e+00 | -1.742691e+00 | 0.000000e+00 | 7.900884e-02 | 2.477852e-01 | 8.304866e-02 | 1.636103e-01 | 5.163348e-02 | 3.191644e-01 | 3.475562e-01 |
25% | 0.000000e+00 | 5.903873e-01 | -7.383225e-01 | -8.724857e-01 | 5.767537e-01 | -8.708085e-01 | 6.790843e-01 | -6.882352e-01 | -8.680962e-01 | 0.000000e+00 | ... | -7.133574e-01 | -8.720338e-01 | 0.000000e+00 | 7.907076e-01 | 8.461264e-01 | 9.857475e-01 | 7.674400e-01 | 6.740847e-01 | 8.194916e-01 | 7.703549e-01 |
50% | 1.000000e+00 | 8.531884e-01 | 9.198132e-04 | -2.410638e-04 | 8.920108e-01 | 1.531822e-03 | 8.950025e-01 | -2.543566e-05 | 1.269625e-03 | 1.086538e+00 | ... | 1.204956e-03 | 2.906335e-04 | 0.000000e+00 | 8.949236e-01 | 9.506630e-01 | 9.897699e-01 | 9.164546e-01 | 8.735536e-01 | 9.473060e-01 | 8.718703e-01 |
75% | 1.000000e+00 | 1.235677e+00 | 7.382142e-01 | 8.704391e-01 | 1.294226e+00 | 8.734688e-01 | 1.170740e+00 | 6.881843e-01 | 8.677583e-01 | 2.173076e+00 | ... | 7.141017e-01 | 8.727152e-01 | 3.101961e+00 | 1.024506e+00 | 1.083515e+00 | 1.020404e+00 | 1.142138e+00 | 1.139039e+00 | 1.140598e+00 | 1.059480e+00 |
max | 1.000000e+00 | 1.142379e+01 | 2.434868e+00 | 1.743236e+00 | 1.284386e+01 | 1.743257e+00 | 8.848616e+00 | 2.969674e+00 | 1.741454e+00 | 2.173076e+00 | ... | 2.498009e+00 | 1.743372e+00 | 3.101961e+00 | 3.355602e+01 | 1.673047e+01 | 7.553898e+00 | 1.289145e+01 | 1.373569e+01 | 1.097622e+01 | 7.458594e+00 |
8 rows × 29 columns
The distributions of the high-level features are broadly similar. Skewness exists in features like lepton_phi, jet_2pt, jet_3pt and so on, and all of the high-level features show some skewness. The two plots below suggest that dimensionality reduction could help extract less redundant features.
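To put numbers on the skewness claim, a quick sketch (sample skewness per feature; positive values indicate a right-skewed distribution):

# Rank features by sample skewness, most right-skewed first
higgs_ds.iloc[:, 1:].skew().sort_values(ascending=False).head(10)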
plt.figure(figsize=(20, 4))
# Boxplot of every feature on one axis; melt() stacks columns into variable/value pairs
ax = sns.boxplot(x="variable", y="value", data=higgs_ds.melt())
loc, labels = plt.xticks()
ax.set_xticklabels(labels, rotation=45)
The heatmap shows little evidence of correlated features, so all features (21 low-level and 7 high-level) shall be used for modeling.
plt.figure(figsize=(20,8))
sns.heatmap(higgs_ds.iloc[:,1:].corr())
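To quantify the heatmap reading, a short sketch that lists the largest absolute pairwise correlations (assuming higgs_ds as loaded above):

corr = higgs_ds.iloc[:, 1:].corr().abs()
# Keep only the upper triangle so each feature pair appears once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
print(upper.unstack().dropna().sort_values(ascending=False).head(10))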
The full HIGGS data set is nearly balanced, with 52.99% positive examples, which is why we did not perform undersampling or oversampling on the data.
label | target | count | %
---|---|---|---
signal | 1 | 5829123 | 52.99% |
background | 0 | 5170877 | 47.01% |
The paper mentions that different numbers of examples were used for different stages of the original study.
As the paper was published in 2014, the original study may have been constrained by computational cost. In this case study, we examined the full dataset (11 million rows) and used a 2.7-million-row random sample to build and validate models.
The randomly sampled data has nearly the same proportions of signal (52.97%) and background (47.03%) observations, so it too is nearly balanced.
higgs_ds.groupby(['target']).size().reset_index(name='counts')
target | counts | |
---|---|---|
0 | 0.0 | 1269715 |
1 | 1.0 | 1430286 |
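The proportions can be checked directly (a one-liner, assuming higgs_ds as loaded above):

# Class proportions of the sampled data
higgs_ds['target'].value_counts(normalize=True)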
The paper released by Baldi et al. (2014) used Pylearn2 to build its neural network model. Pylearn2 supported only Python 2.7 and is no longer maintained. This case study attempts to build a similar model in TensorFlow to evaluate and compare performance. The following steps were used to build NN sequential models in TensorFlow with different hyperparameters.
The authors of the paper built their neural network in Pylearn2 with the following configuration:
First we attempted to build an exact replica of the model with the same hyperparameters and activation functions, and then we tried different activation functions (ReLU and ELU) with different dropout rates; each model was trained for 200 epochs with a batch size of 1000. We chose a batch size of 1000 because it sped up training without hurting accuracy. We used AUC as the metric for the goodness of the NN models so that we could make an apples-to-apples comparison against the model in [2]. Finally, we visualized the AUC trends for both train and test data and the learning-rate decay curve over the course of training.
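Since AUC is our comparison metric, it can also be computed outside Keras with scikit-learn (a sketch, assuming a fitted model and the held-out split created below):

from sklearn.metrics import roc_auc_score

# AUC from predicted probabilities on the held-out set
y_prob = model.predict(X_test).ravel()
print('Test AUC: %.3f' % roc_auc_score(y_test, y_prob))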
We used TensorFlow with Keras package to build our model. To replicate the original model in the paper, we used the same hyperparameters from the paper as listed below.
X_train, X_test, y_train, y_test = train_test_split(
np.array(higgs_ds.iloc[:,1:]), np.array(higgs_ds.iloc[:,0]), test_size=0.2, random_state=42)
print(' ')
print('Shape of the dataframe')
print('Train set',X_train.shape)
print('Test set',X_test.shape)
Shape of the dataframe
Train set (2160000, 28)
Test set (540001, 28)
Normalize the features with MinMaxScaler so that every feature value lies between 0 and 1. This puts all features on a similar scale, which helps gradient-based training.
scaler = MinMaxScaler(feature_range=(0, 1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
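For reference, with feature_range=(0, 1) MinMaxScaler applies, per feature,

x_scaled = (x - x_min) / (x_max - x_min)

where x_min and x_max come from the training set only (fit_transform on train, transform on test), so no test-set statistics leak into training.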
The code used to build this model with the above configuration is in the Appendix, under the same heading as this section.
model_summary('data/model.replica')
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= h0 (Dense) (None, 300) 8700 _________________________________________________________________ h1 (Dense) (None, 300) 90300 _________________________________________________________________ h2 (Dense) (None, 300) 90300 _________________________________________________________________ h3 (Dense) (None, 300) 90300 _________________________________________________________________ dropout (Dropout) (None, 300) 0 _________________________________________________________________ y (Dense) (None, 1) 301 ================================================================= Total params: 279,901 Trainable params: 279,901 Non-trainable params: 0 _________________________________________________________________ Model Evaluation : 2160000/2160000 [==============================] - 106s 49us/sample - loss: 0.4727 - acc: 0.7691 - auc_16: 0.8536 540001/540001 [==============================] - 27s 51us/sample - loss: 0.4850 - acc: 0.7614 - auc_16: 0.8451 Train: 0.854, Test: 0.845
We were able to replicate the model with the same configuration as in the paper; the learning curve and loss are plotted below. The train and test curves are almost the same across all 200 epochs, and there is no sign of over- or underfitting, which indicates the model has learned well for the given sample of data. AUC is slightly lower than in the paper, but it could likely be improved by increasing the training set size.
plot_train_test('data/model.replica_history')
Here we used the same configuration as the replica model except for the activation function: ReLU instead of tanh.
The code used to build this model with the above configuration is in the Appendix, under the same heading as this section.
model_summary('data/model.p')
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= h0 (Dense) (None, 300) 8700 _________________________________________________________________ h1 (Dense) (None, 300) 90300 _________________________________________________________________ h2 (Dense) (None, 300) 90300 _________________________________________________________________ h3 (Dense) (None, 300) 90300 _________________________________________________________________ dropout (Dropout) (None, 300) 0 _________________________________________________________________ y (Dense) (None, 1) 301 ================================================================= Total params: 279,901 Trainable params: 279,901 Non-trainable params: 0 _________________________________________________________________ Model Evaluation : 2160000/2160000 [==============================] - 117s 54us/sample - loss: 0.3956 - acc: 0.8114 - auc_18: 0.9000 540001/540001 [==============================] - 27s 49us/sample - loss: 0.5201 - acc: 0.7566 - auc_18: 0.8355 Train: 0.900, Test: 0.835
Here we used the same configuration as the replica except for the activation function; for this model we used ReLU, and the plot shows that it overfits. The loss on the test set increases as the number of epochs grows, which is not a good sign. That is why we tried two more variations with a higher dropout rate on every hidden layer, as regularization to control the overfitting.
plot_train_test('data/model.p_hist')
As the previous plot shows, the model with a single dropout layer at rate 0.5 overfits, i.e. regularizing only one hidden layer does not handle the overfitting. So here we applied a dropout rate of 0.4 after each hidden layer.
The configuration below was used to build this model.
The code used to build this model with the above configuration is in the Appendix, under the same heading as this section.
model_summary('data/model_keras_lr005.p')
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= h0 (Dense) (None, 300) 8700 _________________________________________________________________ dropout (Dropout) (None, 300) 0 _________________________________________________________________ h1 (Dense) (None, 300) 90300 _________________________________________________________________ dropout_1 (Dropout) (None, 300) 0 _________________________________________________________________ h2 (Dense) (None, 300) 90300 _________________________________________________________________ dropout_2 (Dropout) (None, 300) 0 _________________________________________________________________ h3 (Dense) (None, 300) 90300 _________________________________________________________________ dropout_3 (Dropout) (None, 300) 0 _________________________________________________________________ y (Dense) (None, 1) 301 ================================================================= Total params: 279,901 Trainable params: 279,901 Non-trainable params: 0 _________________________________________________________________ Model Evaluation : 2160000/2160000 [==============================] - 122s 56us/sample - loss: 0.5542 - acc: 0.7123 - auc_20: 0.8027 540001/540001 [==============================] - 30s 56us/sample - loss: 0.5561 - acc: 0.7108 - auc_20: 0.8000 Train: 0.803, Test: 0.800
This model's learning looks much better than the previous one's; it appears well tuned and addresses the overfitting issue. AUC is still lower (0.80) than the original model's (with the tanh activation function), likely due to the training set size, so it needs to be trained with more data. Validation loss decreases overall, but the plot indicates it is not stable and fluctuates throughout training.
plot_train_test('data/model_keras_lr005.p_history')
Here we tried a different activation function (ELU, the Exponential Linear Unit) with the same configuration as the previous model.
The configuration below was used to build this model.
The code used to build this model with the above configuration is in the Appendix, under the same heading as this section.
model_summary('data/model_keras_elu.p')
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= h0 (Dense) (None, 300) 8700 _________________________________________________________________ dropout (Dropout) (None, 300) 0 _________________________________________________________________ h1 (Dense) (None, 300) 90300 _________________________________________________________________ dropout_1 (Dropout) (None, 300) 0 _________________________________________________________________ h2 (Dense) (None, 300) 90300 _________________________________________________________________ dropout_2 (Dropout) (None, 300) 0 _________________________________________________________________ h3 (Dense) (None, 300) 90300 _________________________________________________________________ dropout_3 (Dropout) (None, 300) 0 _________________________________________________________________ y (Dense) (None, 1) 301 ================================================================= Total params: 279,901 Trainable params: 279,901 Non-trainable params: 0 _________________________________________________________________ Model Evaluation : 2160000/2160000 [==============================] - 135s 63us/sample - loss: 0.5479 - acc: 0.7173 - auc_22: 0.7926 540001/540001 [==============================] - 33s 61us/sample - loss: 0.5493 - acc: 0.7156 - auc_22: 0.7912 Train: 0.793, Test: 0.791
The AUC plot is almost the same as the previous model's, and here the validation loss is relatively smooth. There are no signs of overfitting, as there is not much difference between the train and test sets. The ELU activation function appears to handle overfitting much better than ReLU. Again, AUC is lower (0.79) due to the training set size, so the model could be tested with more training data to improve AUC.
plot_train_test('data/model_keras_elu.p_history')
The activation function determines the output of a node given its input. Mathematically it can be represented as:
y = f(x), where x is the input and y is the output
Here, the input to a node in a deep network is typically a weighted sum of the outputs from the prior layer of nodes.
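For a node with incoming values x_1, ..., x_n, this means y = f(w_1*x_1 + ... + w_n*x_n + b), where the w_i are the learned weights and b is the bias term.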
When training a neural network, gradient descent involves computing an error gradient that is backpropagated from the output layer toward the input layer to update the weights. Hence the properties of the derivative of the activation function become critical.
Multiple aspects must be considered in the choice of activation function.
The paper uses the tanh activation function. tanh can lead to the vanishing-gradient problem, where the neural network stops training. Both the sigmoid and tanh functions are prone to this because their output saturates for large positive or negative inputs; in the saturated region, the derivative of the activation function becomes very small.
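Concretely, d/dx tanh(x) = 1 - tanh(x)^2, which goes to 0 as |x| grows: at x = 3, tanh(x) ≈ 0.995, so the derivative is ≈ 0.01, and gradients backpropagated through such saturated units nearly vanish.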
The proposal is to use the following two functions for better training of the neural network: ReLU and ELU. Using these activation functions is recommended to build a model with better accuracy in detecting the Higgs boson.
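For reference, a minimal NumPy sketch of the two proposed activations (ELU shown with alpha = 1, the Keras default):

def relu(x):
    # max(0, x): derivative is 1 for x > 0 and 0 for x < 0, so no saturation on the positive side
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # Smooth for x < 0, saturating at -alpha instead of cutting hard to zero
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))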
For real data, the chance that a collision produces a Higgs boson is very low; in that case AUC may not be an ideal metric.
It is important to consider the implications of false positives (FP) and false negatives (FN).
False Positive:
• A false positive is when a Higgs boson is classified as detected, but it is actually absent.
• While critical, this may still be acceptable with a very low probability, as a positive result will additionally be peer reviewed and verified.
False Negative:
• A false negative is when a Higgs boson is classified as absent when it is actually present.
• This is a very big miss, and is more critical than a false positive.
• FN should be ~0.
Since the cost of an FN is much larger than that of an FP, AUC may not be the ideal metric; a metric that directly penalizes FN may be better. The F1 measure is recommended for highly imbalanced datasets.
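A hedged sketch of the suggested metrics, using scikit-learn and a 0.5 decision threshold (the threshold choice is an illustrative assumption):

from sklearn.metrics import f1_score

# Threshold the sigmoid output at 0.5 and score the hard predictions
y_pred = (model.predict(X_test).ravel() > 0.5).astype(int)
print('Precision: %.3f' % precision_score(y_test, y_pred))
print('Recall   : %.3f' % recall_score(y_test, y_pred))
print('F1       : %.3f' % f1_score(y_test, y_pred))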
Dropout is a technique to prevent overfitting during the training phase, presented by Srivastava et al. (2014). In modern neural network training, dropout is a standard approach to prevent overfitting in deep network designs [4].
In dropout, nodes are removed from the network at random in every weight-update iteration. Effectively, each iteration trains a sample of the full network (a "thinned" network), so a collection of different thinned networks with extensive weight sharing is trained. Conceptually, dropout breaks up situations where network layers co-adapt to correct mistakes from prior layers, in turn making the model more robust.
In the paper, dropout is applied during training to the "top hidden layer with 50% probability".
A priori, it typically cannot be said which dropout rate is optimal, i.e., which gives the best performance.
The proposal is to evaluate different dropout rates over different training/validation set sizes to determine the best model performance, as sketched below.
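A minimal sketch of that proposal, reusing the appendix architecture (the rate grid and the shortened epoch budget are illustrative assumptions):

# Sweep dropout rates and compare validation AUC
for rate in [0.2, 0.3, 0.4, 0.5]:
    m = tf.keras.Sequential()
    m.add(tf.keras.Input(shape=(28,)))
    for name in ['h0', 'h1', 'h2', 'h3']:
        m.add(layers.Dense(300, activation='relu', name=name))
        m.add(layers.Dropout(rate))
    m.add(layers.Dense(1, activation='sigmoid', name='y'))
    m.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9),
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC(name='auc')])
    hist = m.fit(X_train, y_train, epochs=10, batch_size=1000,
                 validation_data=(X_test, y_test), verbose=0)
    print('dropout=%.1f  val_auc=%.3f' % (rate, hist.history['val_auc'][-1]))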
In this case study, we were able to successfully replicate the neural network designed to detect the Higgs boson. The replicated model matched the performance metric (AUC) mentioned in the paper (~0.88) when all the features were included in modeling.
This was a deep neural network with 5 layers: 4 hidden layers of 300 nodes each, plus the output layer. The paper uses the tanh activation function and trains on a subset of the simulated dataset. Weights were initialized from a normal distribution with zero mean and standard deviation 0.1 in the first layer, 0.001 in the output layer, and 0.05 in all other hidden layers. Gradients were computed on mini-batches of size 100, and dropout was applied to the top hidden layer at a 50% rate.
As part of the case study, we were able to run the model on the same dataset size and replicate the results. Increasing the batch size to 1000 made training faster; this differs from the paper, which used a batch size of 100. Additionally, modified models were developed using ReLU and ELU activation functions and different dropout schemes, and compared against the AUC performance metric mentioned in the paper.
In our observations, we do not see a marked jump in accuracy or reduction in loss when the different techniques are deployed. We recommend further tuning of the model with different dropout schemes, more training data, different learning rates, momentum, decay rates, etc.
This section contains the code that was used to build the different models. The following models were tried.
This is the code for the model replicated from the paper, with exactly the same configuration.
warnings.filterwarnings('ignore')
print(' Store Model : ', sys.argv[1])
store_model = sys.argv[1]
if path.exists(store_model):
    model = keras.models.load_model(store_model)
else:
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(28,)))
    # Four tanh hidden layers of 300 nodes, with the paper's weight initialization
    model.add(layers.Dense(300, activation='tanh', name="h0",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.1)))
    model.add(layers.Dense(300, activation='tanh', name="h1",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(layers.Dense(300, activation='tanh', name="h2",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(layers.Dense(300, activation='tanh', name="h3",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    # Dropout on the top hidden layer only, as in the paper
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(layers.Dense(1, activation='sigmoid', name="y",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=0.001)))
warnings.filterwarnings('ignore')
print(' Store history : ', sys.argv[2])
model_fit_history = sys.argv[2]
if not path.exists(store_model):
    # Replica model settings:
    #   initial_learning_rate=.000001, decay_steps=10000, decay_rate=1.0000002
    #   momentum=0.9; the paper used batch_size=100, we train with 1000
    lr_schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=.000001,
        decay_steps=10000,
        decay_rate=1.0000002)
    # Note: lr_schedule is defined but not passed to the optimizer below,
    # which uses a fixed learning rate of 0.05 instead.
    # opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
    opt = tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9)
    model.compile(optimizer=opt,
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'AUC'])
    history = model.fit(X_train, y_train, epochs=200,
                        validation_data=(X_test, y_test), batch_size=1000)
    model.save(store_model)
    pickle.dump(history.history, open(model_fit_history, "wb"))
warnings.filterwarnings('ignore')
store_model = "data/model.p"
if path.exists(store_model):
    model = keras.models.load_model(store_model)
else:
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(28,)))
    # Same architecture as the replica, but with ReLU activations
    model.add(layers.Dense(300, activation='relu', name="h0",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.1)))
    model.add(layers.Dense(300, activation='relu', name="h1",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(layers.Dense(300, activation='relu', name="h2",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(layers.Dense(300, activation='relu', name="h3",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(layers.Dense(1, activation='sigmoid', name="y",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=0.001)))
warnings.filterwarnings('ignore')
model_fit_history = "data/model.p_history"
if not path.exists(store_model):
    lr_schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.05,
        decay_steps=10000,
        decay_rate=1.0000002)
    # Note: lr_schedule is defined but not passed to the optimizer below.
    # opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
    opt = tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9)
    model.compile(optimizer=opt,
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'AUC'])
    history = model.fit(X_train, y_train, epochs=200,
                        validation_data=(X_test, y_test), batch_size=1000)
    # Save the trained model so model_summary('data/model.p') can reload it
    model.save(store_model)
    pickle.dump(history.history, open(model_fit_history, "wb"))
warnings.filterwarnings('ignore')
print(' Store Model : ', sys.argv[1])
store_model = sys.argv[1]
if path.exists(store_model):
    model = keras.models.load_model(store_model)
else:
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(28,)))
    # ReLU hidden layers, now with dropout (rate 0.4) after every hidden layer
    model.add(layers.Dense(300, activation='relu', name="h0",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.1)))
    model.add(tf.keras.layers.Dropout(0.4))
    model.add(layers.Dense(300, activation='relu', name="h1",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.4))
    model.add(layers.Dense(300, activation='relu', name="h2",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.4))
    model.add(layers.Dense(300, activation='relu', name="h3",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.4))
    model.add(layers.Dense(1, activation='sigmoid', name="y",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=0.001)))
warnings.filterwarnings('ignore')
print(' Store history : ', sys.argv[2])
model_fit_history = sys.argv[2]
if not path.exists(store_model):
    # New model settings:
    #   initial_learning_rate=0.05, decay_steps=10000, decay_rate=0.96
    #   momentum=0.9; batch_size=1000 (the paper used 100)
    lr_schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.05,
        decay_steps=10000,
        decay_rate=0.96)
    # Note: lr_schedule is defined but not passed to the optimizer below.
    opt = tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9)
    model.compile(optimizer=opt,
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'AUC'])
    history = model.fit(X_train, y_train, epochs=200,
                        validation_data=(X_test, y_test), batch_size=1000)
    model.save(store_model)
    pickle.dump(history.history, open(model_fit_history, "wb"))
warnings.filterwarnings('ignore')
print(' Store Model : ', sys.argv[1])
store_model = sys.argv[1]
if path.exists(store_model):
    model = keras.models.load_model(store_model)
else:
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(28,)))
    # ELU hidden layers with dropout (rate 0.5) after every hidden layer
    model.add(layers.Dense(300, activation='elu', name="h0",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.1)))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(layers.Dense(300, activation='elu', name="h1",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(layers.Dense(300, activation='elu', name="h2",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(layers.Dense(300, activation='elu', name="h3",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=.05)))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(layers.Dense(1, activation='sigmoid', name="y",
                           kernel_initializer=tf.keras.initializers.RandomNormal(mean=0., stddev=0.001)))
warnings.filterwarnings('ignore')
print(' Store history : ', sys.argv[2])
model_fit_history = sys.argv[2]
if not path.exists(store_model):
    # New model settings:
    #   initial_learning_rate=0.05, decay_steps=10000, decay_rate=0.96
    #   momentum=0.9; batch_size=1000
    lr_schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.05,
        decay_steps=10000,
        decay_rate=0.96)
    # Here the decay schedule is actually passed to the optimizer
    opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)
    model.compile(optimizer=opt,
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'AUC'])
    history = model.fit(X_train, y_train, epochs=200,
                        validation_data=(X_test, y_test), batch_size=1000)
    model.save(store_model)
    pickle.dump(history.history, open(model_fit_history, "wb"))
print_logs('replica_model.log')
Epoch 1: 2160000/2160000 [==============================] - 51s 24us/sample - loss: 0.6580 - acc: 0.6012 - auc: 0.6389 - val_loss: 0.6435 - val_acc: 0.6262 - val_auc: 0.6695 Epoch 2: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.6441 - acc: 0.6261 - auc: 0.6684 - val_loss: 0.6388 - val_acc: 0.6357 - val_auc: 0.6787 Epoch 3: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.6405 - acc: 0.6314 - auc: 0.6743 - val_loss: 0.6350 - val_acc: 0.6396 - val_auc: 0.6839 Epoch 4: 2160000/2160000 [==============================] - 46s 21us/sample - loss: 0.6293 - acc: 0.6427 - auc: 0.6942 - val_loss: 0.6269 - val_acc: 0.6517 - val_auc: 0.7130 Epoch 5: 2160000/2160000 [==============================] - 43s 20us/sample - loss: 0.6130 - acc: 0.6619 - auc: 0.7202 - val_loss: 0.6157 - val_acc: 0.6593 - val_auc: 0.7184 Epoch 6: 2160000/2160000 [==============================] - 43s 20us/sample - loss: 0.6043 - acc: 0.6704 - auc: 0.7322 - val_loss: 0.5993 - val_acc: 0.6768 - val_auc: 0.7411 Epoch 7: 2160000/2160000 [==============================] - 42s 20us/sample - loss: 0.5991 - acc: 0.6758 - auc: 0.7388 - val_loss: 0.5954 - val_acc: 0.6830 - val_auc: 0.7481 Epoch 8: 2160000/2160000 [==============================] - 44s 20us/sample - loss: 0.5938 - acc: 0.6810 - auc: 0.7451 - val_loss: 0.5874 - val_acc: 0.6870 - val_auc: 0.7535 Epoch 9: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.5902 - acc: 0.6840 - auc: 0.7492 - val_loss: 0.5830 - val_acc: 0.6904 - val_auc: 0.7584 Epoch 10: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5854 - acc: 0.6872 - auc: 0.7545 - val_loss: 0.5829 - val_acc: 0.6889 - val_auc: 0.7578 Epoch 11: 2160000/2160000 [==============================] - 46s 21us/sample - loss: 0.5809 - acc: 0.6912 - auc: 0.7592 - val_loss: 0.5864 - val_acc: 0.6855 - val_auc: 0.7540 Epoch 12: 2160000/2160000 [==============================] - 49s 23us/sample - loss: 0.5768 - acc: 0.6951 - auc: 0.7637 - val_loss: 0.5825 - val_acc: 0.6899 - val_auc: 0.7615 Epoch 13: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5724 - acc: 0.6986 - auc: 0.7682 - val_loss: 0.5801 - val_acc: 0.6907 - val_auc: 0.7631 Epoch 14: 2160000/2160000 [==============================] - 44s 20us/sample - loss: 0.5686 - acc: 0.7013 - auc: 0.7721 - val_loss: 0.5736 - val_acc: 0.6963 - val_auc: 0.7690 Epoch 15: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5669 - acc: 0.7026 - auc: 0.7739 - val_loss: 0.5602 - val_acc: 0.7073 - val_auc: 0.7815 Epoch 16: 2160000/2160000 [==============================] - 46s 21us/sample - loss: 0.5634 - acc: 0.7052 - auc: 0.7774 - val_loss: 0.5592 - val_acc: 0.7073 - val_auc: 0.7823 Epoch 17: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5617 - acc: 0.7068 - auc: 0.7792 - val_loss: 0.5552 - val_acc: 0.7114 - val_auc: 0.7863 Epoch 18: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5596 - acc: 0.7083 - auc: 0.7813 - val_loss: 0.5559 - val_acc: 0.7107 - val_auc: 0.7849 Epoch 19: 2160000/2160000 [==============================] - 46s 21us/sample - loss: 0.5577 - acc: 0.7101 - auc: 0.7832 - val_loss: 0.5614 - val_acc: 0.7070 - val_auc: 0.7821 Epoch 20: 2160000/2160000 [==============================] - 50s 23us/sample - loss: 0.5552 - acc: 0.7122 - auc: 0.7857 - val_loss: 0.5515 - val_acc: 0.7156 - val_auc: 0.7916 Epoch 21: 2160000/2160000 [==============================] - 
48s 22us/sample - loss: 0.5534 - acc: 0.7137 - auc: 0.7874 - val_loss: 0.5503 - val_acc: 0.7153 - val_auc: 0.7910 Epoch 22: 2160000/2160000 [==============================] - 54s 25us/sample - loss: 0.5521 - acc: 0.7145 - auc: 0.7886 - val_loss: 0.5480 - val_acc: 0.7168 - val_auc: 0.7935 Epoch 23: 2160000/2160000 [==============================] - 54s 25us/sample - loss: 0.5500 - acc: 0.7162 - auc: 0.7906 - val_loss: 0.5466 - val_acc: 0.7182 - val_auc: 0.7940 Epoch 24: 2160000/2160000 [==============================] - 53s 25us/sample - loss: 0.5488 - acc: 0.7169 - auc: 0.7917 - val_loss: 0.5538 - val_acc: 0.7114 - val_auc: 0.7877 Epoch 25: 2160000/2160000 [==============================] - 60s 28us/sample - loss: 0.5477 - acc: 0.7177 - auc: 0.7927 - val_loss: 0.5436 - val_acc: 0.7200 - val_auc: 0.7963 Epoch 26: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.5461 - acc: 0.7190 - auc: 0.7942 - val_loss: 0.5480 - val_acc: 0.7173 - val_auc: 0.7926 Epoch 27: 2160000/2160000 [==============================] - 48s 22us/sample - loss: 0.5454 - acc: 0.7196 - auc: 0.7948 - val_loss: 0.5436 - val_acc: 0.7195 - val_auc: 0.7963 Epoch 28: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.5435 - acc: 0.7210 - auc: 0.7967 - val_loss: 0.5395 - val_acc: 0.7232 - val_auc: 0.8004 Epoch 29: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.5422 - acc: 0.7221 - auc: 0.7978 - val_loss: 0.5409 - val_acc: 0.7223 - val_auc: 0.7991 Epoch 30: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.5419 - acc: 0.7224 - auc: 0.7981 - val_loss: 0.5385 - val_acc: 0.7243 - val_auc: 0.8011 Epoch 31: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5398 - acc: 0.7241 - auc: 0.8001 - val_loss: 0.5367 - val_acc: 0.7254 - val_auc: 0.8038 Epoch 32: 2160000/2160000 [==============================] - 50s 23us/sample - loss: 0.5388 - acc: 0.7248 - auc: 0.8011 - val_loss: 0.5379 - val_acc: 0.7253 - val_auc: 0.8032 Epoch 33: 2160000/2160000 [==============================] - 47s 22us/sample - loss: 0.5373 - acc: 0.7260 - auc: 0.8024 - val_loss: 0.5369 - val_acc: 0.7256 - val_auc: 0.8038 Epoch 34: 2160000/2160000 [==============================] - 51s 24us/sample - loss: 0.5355 - acc: 0.7273 - auc: 0.8040 - val_loss: 0.5361 - val_acc: 0.7263 - val_auc: 0.8042 Epoch 35: 2160000/2160000 [==============================] - 48s 22us/sample - loss: 0.5351 - acc: 0.7275 - auc: 0.8043 - val_loss: 0.5343 - val_acc: 0.7279 - val_auc: 0.8049 Epoch 36: 2160000/2160000 [==============================] - 54s 25us/sample - loss: 0.5338 - acc: 0.7286 - auc: 0.8056 - val_loss: 0.5369 - val_acc: 0.7264 - val_auc: 0.8030 Epoch 37: 2160000/2160000 [==============================] - 56s 26us/sample - loss: 0.5330 - acc: 0.7290 - auc: 0.8062 - val_loss: 0.5344 - val_acc: 0.7277 - val_auc: 0.8055 Epoch 38: 2160000/2160000 [==============================] - 48s 22us/sample - loss: 0.5318 - acc: 0.7300 - auc: 0.8073 - val_loss: 0.5304 - val_acc: 0.7311 - val_auc: 0.8091 Epoch 39: 2160000/2160000 [==============================] - 50s 23us/sample - loss: 0.5312 - acc: 0.7306 - auc: 0.8078 - val_loss: 0.5300 - val_acc: 0.7312 - val_auc: 0.8091 Epoch 40: 2160000/2160000 [==============================] - 49s 23us/sample - loss: 0.5301 - acc: 0.7310 - auc: 0.8087 - val_loss: 0.5282 - val_acc: 0.7311 - val_auc: 0.8100 Epoch 41: 2160000/2160000 [==============================] - 44s 20us/sample - loss: 0.5291 - acc: 0.7320 - auc: 
0.8096 - val_loss: 0.5321 - val_acc: 0.7291 - val_auc: 0.8084 Epoch 42: 2160000/2160000 [==============================] - 44s 20us/sample - loss: 0.5288 - acc: 0.7319 - auc: 0.8099 - val_loss: 0.5338 - val_acc: 0.7275 - val_auc: 0.8059 Epoch 43: 2160000/2160000 [==============================] - 51s 23us/sample - loss: 0.5285 - acc: 0.7326 - auc: 0.8101 - val_loss: 0.5281 - val_acc: 0.7322 - val_auc: 0.8106 Epoch 44: 2160000/2160000 [==============================] - 48s 22us/sample - loss: 0.5272 - acc: 0.7333 - auc: 0.8113 - val_loss: 0.5323 - val_acc: 0.7302 - val_auc: 0.8081 Epoch 45: 2160000/2160000 [==============================] - 59s 27us/sample - loss: 0.5267 - acc: 0.7339 - auc: 0.8117 - val_loss: 0.5256 - val_acc: 0.7346 - val_auc: 0.8127 Epoch 46: 2160000/2160000 [==============================] - 53s 25us/sample - loss: 0.5257 - acc: 0.7344 - auc: 0.8125 - val_loss: 0.5237 - val_acc: 0.7354 - val_auc: 0.8145 Epoch 47: 2160000/2160000 [==============================] - 46s 21us/sample - loss: 0.5252 - acc: 0.7349 - auc: 0.8130 - val_loss: 0.5330 - val_acc: 0.7281 - val_auc: 0.8065 Epoch 48: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5236 - acc: 0.7358 - auc: 0.8143 - val_loss: 0.5253 - val_acc: 0.7335 - val_auc: 0.8129 Epoch 49: 2160000/2160000 [==============================] - 52s 24us/sample - loss: 0.5229 - acc: 0.7365 - auc: 0.8150 - val_loss: 0.5262 - val_acc: 0.7326 - val_auc: 0.8142 Epoch 50: 2160000/2160000 [==============================] - 53s 24us/sample - loss: 0.5215 - acc: 0.7372 - auc: 0.8161 - val_loss: 0.5208 - val_acc: 0.7369 - val_auc: 0.8168 Epoch 51: 2160000/2160000 [==============================] - 49s 23us/sample - loss: 0.5213 - acc: 0.7378 - auc: 0.8163 - val_loss: 0.5227 - val_acc: 0.7355 - val_auc: 0.8150 Epoch 52: 2160000/2160000 [==============================] - 48s 22us/sample - loss: 0.5199 - acc: 0.7384 - auc: 0.8175 - val_loss: 0.5209 - val_acc: 0.7375 - val_auc: 0.8176 Epoch 53: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5188 - acc: 0.7395 - auc: 0.8184 - val_loss: 0.5224 - val_acc: 0.7363 - val_auc: 0.8159 Epoch 54: 2160000/2160000 [==============================] - 50s 23us/sample - loss: 0.5179 - acc: 0.7401 - auc: 0.8191 - val_loss: 0.5231 - val_acc: 0.7348 - val_auc: 0.8177 Epoch 55: 2160000/2160000 [==============================] - 48s 22us/sample - loss: 0.5171 - acc: 0.7404 - auc: 0.8197 - val_loss: 0.5179 - val_acc: 0.7384 - val_auc: 0.8193 Epoch 56: 2160000/2160000 [==============================] - 51s 24us/sample - loss: 0.5159 - acc: 0.7411 - auc: 0.8207 - val_loss: 0.5168 - val_acc: 0.7392 - val_auc: 0.8196 Epoch 57: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.5153 - acc: 0.7419 - auc: 0.8212 - val_loss: 0.5171 - val_acc: 0.7393 - val_auc: 0.8206 Epoch 58: 2160000/2160000 [==============================] - 56s 26us/sample - loss: 0.5144 - acc: 0.7422 - auc: 0.8219 - val_loss: 0.5148 - val_acc: 0.7414 - val_auc: 0.8223 Epoch 59: 2160000/2160000 [==============================] - 54s 25us/sample - loss: 0.5138 - acc: 0.7428 - auc: 0.8224 - val_loss: 0.5166 - val_acc: 0.7399 - val_auc: 0.8223 Epoch 60: 2160000/2160000 [==============================] - 60s 28us/sample - loss: 0.5128 - acc: 0.7434 - auc: 0.8232 - val_loss: 0.5182 - val_acc: 0.7387 - val_auc: 0.8189 Epoch 61: 2160000/2160000 [==============================] - 77s 36us/sample - loss: 0.5121 - acc: 0.7440 - auc: 0.8237 - val_loss: 0.5163 - val_acc: 0.7398 - val_auc: 
0.8229 Epoch 62: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5119 - acc: 0.7441 - auc: 0.8239 - val_loss: 0.5150 - val_acc: 0.7410 - val_auc: 0.8235 Epoch 63: 2160000/2160000 [==============================] - 59s 27us/sample - loss: 0.5110 - acc: 0.7445 - auc: 0.8246 - val_loss: 0.5149 - val_acc: 0.7417 - val_auc: 0.8215 Epoch 64: 2160000/2160000 [==============================] - 79s 36us/sample - loss: 0.5100 - acc: 0.7452 - auc: 0.8254 - val_loss: 0.5142 - val_acc: 0.7417 - val_auc: 0.8223 Epoch 65: 2160000/2160000 [==============================] - 77s 36us/sample - loss: 0.5097 - acc: 0.7453 - auc: 0.8256 - val_loss: 0.5155 - val_acc: 0.7406 - val_auc: 0.8210 Epoch 66: 2160000/2160000 [==============================] - 82s 38us/sample - loss: 0.5092 - acc: 0.7459 - auc: 0.8260 - val_loss: 0.5110 - val_acc: 0.7438 - val_auc: 0.8247 Epoch 67: 2160000/2160000 [==============================] - 79s 37us/sample - loss: 0.5091 - acc: 0.7463 - auc: 0.8261 - val_loss: 0.5081 - val_acc: 0.7458 - val_auc: 0.8266 Epoch 68: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5082 - acc: 0.7465 - auc: 0.8268 - val_loss: 0.5108 - val_acc: 0.7437 - val_auc: 0.8256 Epoch 69: 2160000/2160000 [==============================] - 66s 30us/sample - loss: 0.5076 - acc: 0.7470 - auc: 0.8273 - val_loss: 0.5083 - val_acc: 0.7453 - val_auc: 0.8271 Epoch 70: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5071 - acc: 0.7470 - auc: 0.8276 - val_loss: 0.5063 - val_acc: 0.7472 - val_auc: 0.8286 Epoch 71: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5070 - acc: 0.7473 - auc: 0.8277 - val_loss: 0.5092 - val_acc: 0.7453 - val_auc: 0.8267 Epoch 72: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5064 - acc: 0.7475 - auc: 0.8282 - val_loss: 0.5111 - val_acc: 0.7442 - val_auc: 0.8255 Epoch 73: 2160000/2160000 [==============================] - 78s 36us/sample - loss: 0.5061 - acc: 0.7479 - auc: 0.8284 - val_loss: 0.5083 - val_acc: 0.7462 - val_auc: 0.8284 Epoch 74: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5057 - acc: 0.7482 - auc: 0.8288 - val_loss: 0.5082 - val_acc: 0.7460 - val_auc: 0.8273 Epoch 75: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5050 - acc: 0.7486 - auc: 0.8294 - val_loss: 0.5070 - val_acc: 0.7465 - val_auc: 0.8285 Epoch 76: 2160000/2160000 [==============================] - 80s 37us/sample - loss: 0.5051 - acc: 0.7485 - auc: 0.8293 - val_loss: 0.5042 - val_acc: 0.7481 - val_auc: 0.8304 Epoch 77: 2160000/2160000 [==============================] - 89s 41us/sample - loss: 0.5043 - acc: 0.7491 - auc: 0.8298 - val_loss: 0.5069 - val_acc: 0.7463 - val_auc: 0.8286 Epoch 78: 2160000/2160000 [==============================] - 89s 41us/sample - loss: 0.5041 - acc: 0.7493 - auc: 0.8301 - val_loss: 0.5052 - val_acc: 0.7470 - val_auc: 0.8293 Epoch 79: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5039 - acc: 0.7492 - auc: 0.8302 - val_loss: 0.5063 - val_acc: 0.7479 - val_auc: 0.8282 Epoch 80: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5035 - acc: 0.7497 - auc: 0.8305 - val_loss: 0.5056 - val_acc: 0.7473 - val_auc: 0.8303 Epoch 81: 2160000/2160000 [==============================] - 78s 36us/sample - loss: 0.5031 - acc: 0.7499 - auc: 0.8308 - val_loss: 0.5042 - val_acc: 0.7486 - val_auc: 0.8300 Epoch 82: 2160000/2160000 
[==============================] - 79s 37us/sample - loss: 0.5022 - acc: 0.7503 - auc: 0.8315 - val_loss: 0.5118 - val_acc: 0.7439 - val_auc: 0.8238
(Selected epochs shown below; the intermediate epochs, which continue the same gradual improvement, are omitted for readability.)
Epoch 100: 2160000/2160000 [==============================] - 50s 23us/sample - loss: 0.4965 - acc: 0.7542 - auc: 0.8358 - val_loss: 0.4967 - val_acc: 0.7534 - val_auc: 0.8356
Epoch 125: 2160000/2160000 [==============================] - 45s 21us/sample - loss: 0.4896 - acc: 0.7587 - auc: 0.8410 - val_loss: 0.4898 - val_acc: 0.7578 - val_auc: 0.8407
Epoch 150: 2160000/2160000 [==============================] - 50s 23us/sample - loss: 0.4847 - acc: 0.7618 - auc: 0.8446 - val_loss: 0.4943 - val_acc: 0.7560 - val_auc: 0.8384
Epoch 175: 2160000/2160000 [==============================] - 43s 20us/sample - loss: 0.4804 - acc: 0.7645 - auc: 0.8476 - val_loss: 0.4871 - val_acc: 0.7601 - val_auc: 0.8427
Epoch 200: 2160000/2160000 [==============================] - 42s 20us/sample - loss: 0.4771 - acc: 0.7666 - auc: 0.8500 - val_loss: 0.4850 - val_acc: 0.7614 - val_auc: 0.8451
Both training and validation metrics improve steadily to the end of the run: by epoch 200 the model reaches a training AUC of 0.8500 against a validation AUC of 0.8451, and the small, stable gap between the two suggests little overfitting.
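Reading trends out of raw log dumps like the one above is tedious, so it can help to parse the printed lines into a table first. Below is a minimal sketch, not part of the original notebook: the parse_log_metrics helper and its regular expression are illustrative, and they assume each completed-epoch line follows the exact loss/acc/auc/val_loss/val_acc/val_auc format shown above.
import re
import pandas as pd
def parse_log_metrics(filename):
    # Illustrative helper (an assumption, not from the original notebook):
    # pull per-epoch metrics out of a Keras training log like the one above.
    metric_re = re.compile(
        r"loss: (?P<loss>[\d.]+) - acc: (?P<acc>[\d.]+) - auc: (?P<auc>[\d.]+)"
        r" - val_loss: (?P<val_loss>[\d.]+) - val_acc: (?P<val_acc>[\d.]+)"
        r" - val_auc: (?P<val_auc>[\d.]+)")
    rows = []
    with open(filename, 'r') as log_file:
        for line in log_file:
            # Same filter used by print_logs: keep only completed-epoch lines.
            if line.strip().find('2160000/2160000') != -1:
                match = metric_re.search(line)
                if match:
                    rows.append({k: float(v) for k, v in match.groupdict().items()})
    frame = pd.DataFrame(rows)
    frame.index = frame.index + 1  # epochs are numbered from 1
    return frame
# e.g. parse_log_metrics('keras_model.log')[['auc', 'val_auc']].plot()
With the metrics in a DataFrame, train/validation curves can be plotted or compared across runs without rereading the raw text.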
print_logs('keras_model.log')
(Selected epochs shown below; intermediate epochs are omitted for readability.)
Epoch 1: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.6819 - acc: 0.5553 - auc: 0.5714 - val_loss: 0.6479 - val_acc: 0.6248 - val_auc: 0.6675
Epoch 25: 2160000/2160000 [==============================] - 64s 30us/sample - loss: 0.5964 - acc: 0.6769 - auc: 0.7409 - val_loss: 0.6032 - val_acc: 0.6704 - val_auc: 0.7587
Epoch 50: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5726 - acc: 0.6985 - auc: 0.7682 - val_loss: 0.5898 - val_acc: 0.6815 - val_auc: 0.7802
Epoch 100: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5568 - acc: 0.7115 - auc: 0.7844 - val_loss: 0.5655 - val_acc: 0.7034 - val_auc: 0.7948
Epoch 150: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5486 - acc: 0.7177 - auc: 0.7922 - val_loss: 0.5612 - val_acc: 0.7043 - val_auc: 0.8029
Epoch 200: 2160000/2160000 [==============================] - 62s 29us/sample - loss: 0.5430 - acc: 0.7217 - auc: 0.7974 - val_loss: 0.5584 - val_acc: 0.7077 - val_auc: 0.8030
This run converges more slowly and plateaus lower: validation AUC climbs quickly over the first 50 epochs and then flattens near 0.80, well below the roughly 0.845 reached by the previous run.
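Since validation AUC in this run flattens out near 0.80 from roughly epoch 100 onward, much of the 200-epoch budget adds little. One way to avoid that, sketched below as an assumption rather than something the original runs used, is Keras's EarlyStopping callback, which halts training once the monitored metric stops improving.
from tensorflow.keras.callbacks import EarlyStopping
# Illustrative sketch (not used in the runs above): stop once val_auc
# fails to improve for 10 consecutive epochs and keep the best weights.
early_stop = EarlyStopping(monitor='val_auc', mode='max', patience=10,
                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_test, y_test),
#           epochs=200, callbacks=[early_stop])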
print_logs('keras_model-kera_lr005.log')
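The filename in the cell above suggests this run changes the optimizer's learning rate, plausibly to 0.005, though that value is an inference from the name rather than something shown in this section. For reference, a custom learning rate would be passed at compile time roughly as in the sketch below; the run's log follows it.
from tensorflow import keras
# Illustrative only: 0.005 is inferred from the log filename and may
# not match the value actually used in the run whose log is printed below.
optimizer = keras.optimizers.Adam(learning_rate=0.005)
# model.compile(optimizer=optimizer, loss='binary_crossentropy',
#               metrics=['accuracy', keras.metrics.AUC(name='auc')])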
(Selected epochs shown below; intermediate epochs are omitted for readability. Through epoch 101 this run tracks keras_model.log closely, with validation AUC just under 0.80.)
Epoch 1: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.6833 - acc: 0.5522 - auc: 0.5657 - val_loss: 0.6479 - val_acc: 0.6227 - val_auc: 0.6637
Epoch 25: 2160000/2160000 [==============================] - 61s 28us/sample - loss: 0.5999 - acc: 0.6743 - auc: 0.7368 - val_loss: 0.6139 - val_acc: 0.6546 - val_auc: 0.7598
Epoch 50: 2160000/2160000 [==============================] - 70s 33us/sample - loss: 0.5750 - acc: 0.6969 - auc: 0.7657 - val_loss: 0.5764 - val_acc: 0.6954 - val_auc: 0.7759
Epoch 75: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5646 - acc: 0.7055 - auc: 0.7766 - val_loss: 0.5807 - val_acc: 0.6879 - val_auc: 0.7875
Epoch 101: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5567 - acc: 0.7115 - auc: 0.7844 - val_loss: 0.5613 - val_acc: 0.7062 - val_auc: 0.7966
Epoch 102: 2160000/2160000 [==============================] - 64s 29us/sample
- loss: 0.5567 - acc: 0.7115 - auc: 0.7844 - val_loss: 0.5671 - val_acc: 0.7002 - val_auc: 0.7935 Epoch 103: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5564 - acc: 0.7118 - auc: 0.7847 - val_loss: 0.5663 - val_acc: 0.6999 - val_auc: 0.7926 Epoch 104: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5570 - acc: 0.7114 - auc: 0.7842 - val_loss: 0.5586 - val_acc: 0.7107 - val_auc: 0.7970 Epoch 105: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5561 - acc: 0.7117 - auc: 0.7849 - val_loss: 0.5675 - val_acc: 0.7015 - val_auc: 0.7925 Epoch 106: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5567 - acc: 0.7114 - auc: 0.7845 - val_loss: 0.5618 - val_acc: 0.7076 - val_auc: 0.7944 Epoch 107: 2160000/2160000 [==============================] - 62s 29us/sample - loss: 0.5555 - acc: 0.7124 - auc: 0.7856 - val_loss: 0.5616 - val_acc: 0.7058 - val_auc: 0.7944 Epoch 108: 2160000/2160000 [==============================] - 63s 29us/sample - loss: 0.5557 - acc: 0.7120 - auc: 0.7854 - val_loss: 0.5635 - val_acc: 0.7035 - val_auc: 0.7962 Epoch 109: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5556 - acc: 0.7123 - auc: 0.7855 - val_loss: 0.5683 - val_acc: 0.7000 - val_auc: 0.7963 Epoch 110: 2160000/2160000 [==============================] - 64s 30us/sample - loss: 0.5557 - acc: 0.7122 - auc: 0.7854 - val_loss: 0.5700 - val_acc: 0.6956 - val_auc: 0.7964 Epoch 111: 2160000/2160000 [==============================] - 80s 37us/sample - loss: 0.5551 - acc: 0.7126 - auc: 0.7858 - val_loss: 0.5648 - val_acc: 0.7043 - val_auc: 0.7925 Epoch 112: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5548 - acc: 0.7128 - auc: 0.7862 - val_loss: 0.5652 - val_acc: 0.7026 - val_auc: 0.7973 Epoch 113: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5550 - acc: 0.7130 - auc: 0.7861 - val_loss: 0.5687 - val_acc: 0.6989 - val_auc: 0.7942 Epoch 114: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5554 - acc: 0.7124 - auc: 0.7856 - val_loss: 0.5674 - val_acc: 0.6999 - val_auc: 0.7950 Epoch 115: 2160000/2160000 [==============================] - 77s 36us/sample - loss: 0.5545 - acc: 0.7133 - auc: 0.7866 - val_loss: 0.5630 - val_acc: 0.7038 - val_auc: 0.7954 Epoch 116: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5538 - acc: 0.7139 - auc: 0.7873 - val_loss: 0.5627 - val_acc: 0.7074 - val_auc: 0.7946 Epoch 117: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5537 - acc: 0.7137 - auc: 0.7874 - val_loss: 0.5617 - val_acc: 0.7070 - val_auc: 0.7951 Epoch 118: 2160000/2160000 [==============================] - 68s 31us/sample - loss: 0.5549 - acc: 0.7127 - auc: 0.7861 - val_loss: 0.5668 - val_acc: 0.7030 - val_auc: 0.7921 Epoch 119: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5541 - acc: 0.7135 - auc: 0.7869 - val_loss: 0.5668 - val_acc: 0.7014 - val_auc: 0.7972 Epoch 120: 2160000/2160000 [==============================] - 70s 32us/sample - loss: 0.5538 - acc: 0.7135 - auc: 0.7871 - val_loss: 0.5761 - val_acc: 0.6890 - val_auc: 0.7955 Epoch 121: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5534 - acc: 0.7139 - auc: 0.7875 - val_loss: 0.5682 - val_acc: 0.7000 - val_auc: 0.7971 Epoch 122: 2160000/2160000 [==============================] - 70s 32us/sample - loss: 0.5533 - acc: 0.7144 - 
auc: 0.7878 - val_loss: 0.5679 - val_acc: 0.6989 - val_auc: 0.7981 Epoch 123: 2160000/2160000 [==============================] - 84s 39us/sample - loss: 0.5530 - acc: 0.7144 - auc: 0.7881 - val_loss: 0.5732 - val_acc: 0.6970 - val_auc: 0.7919 Epoch 124: 2160000/2160000 [==============================] - 82s 38us/sample - loss: 0.5528 - acc: 0.7142 - auc: 0.7882 - val_loss: 0.5804 - val_acc: 0.6910 - val_auc: 0.7930 Epoch 125: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5529 - acc: 0.7146 - auc: 0.7881 - val_loss: 0.5640 - val_acc: 0.7020 - val_auc: 0.7969 Epoch 126: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5532 - acc: 0.7143 - auc: 0.7878 - val_loss: 0.5677 - val_acc: 0.7000 - val_auc: 0.7953 Epoch 127: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5524 - acc: 0.7149 - auc: 0.7885 - val_loss: 0.5634 - val_acc: 0.7024 - val_auc: 0.7979 Epoch 128: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5523 - acc: 0.7146 - auc: 0.7886 - val_loss: 0.5779 - val_acc: 0.6918 - val_auc: 0.7957 Epoch 129: 2160000/2160000 [==============================] - 70s 32us/sample - loss: 0.5523 - acc: 0.7150 - auc: 0.7887 - val_loss: 0.5691 - val_acc: 0.6977 - val_auc: 0.7971 Epoch 130: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5518 - acc: 0.7153 - auc: 0.7892 - val_loss: 0.5633 - val_acc: 0.7034 - val_auc: 0.7971 Epoch 131: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5523 - acc: 0.7150 - auc: 0.7886 - val_loss: 0.5683 - val_acc: 0.6967 - val_auc: 0.7966 Epoch 132: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5516 - acc: 0.7155 - auc: 0.7893 - val_loss: 0.5626 - val_acc: 0.7060 - val_auc: 0.7984 Epoch 133: 2160000/2160000 [==============================] - 68s 31us/sample - loss: 0.5514 - acc: 0.7155 - auc: 0.7894 - val_loss: 0.5631 - val_acc: 0.7043 - val_auc: 0.8009 Epoch 134: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5515 - acc: 0.7156 - auc: 0.7893 - val_loss: 0.5627 - val_acc: 0.7048 - val_auc: 0.7985 Epoch 135: 2160000/2160000 [==============================] - 68s 32us/sample - loss: 0.5515 - acc: 0.7153 - auc: 0.7894 - val_loss: 0.5643 - val_acc: 0.7046 - val_auc: 0.7964 Epoch 136: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5518 - acc: 0.7153 - auc: 0.7892 - val_loss: 0.5646 - val_acc: 0.7032 - val_auc: 0.7967 Epoch 137: 2160000/2160000 [==============================] - 68s 31us/sample - loss: 0.5517 - acc: 0.7152 - auc: 0.7892 - val_loss: 0.5581 - val_acc: 0.7092 - val_auc: 0.8001 Epoch 138: 2160000/2160000 [==============================] - 77s 36us/sample - loss: 0.5507 - acc: 0.7158 - auc: 0.7902 - val_loss: 0.5632 - val_acc: 0.7051 - val_auc: 0.7963 Epoch 139: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5516 - acc: 0.7153 - auc: 0.7893 - val_loss: 0.5644 - val_acc: 0.7055 - val_auc: 0.7948 Epoch 140: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5509 - acc: 0.7161 - auc: 0.7900 - val_loss: 0.5619 - val_acc: 0.7066 - val_auc: 0.7990 Epoch 141: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5506 - acc: 0.7162 - auc: 0.7903 - val_loss: 0.5720 - val_acc: 0.6926 - val_auc: 0.7989 Epoch 142: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5500 - acc: 0.7167 - auc: 0.7909 - val_loss: 0.5670 - 
val_acc: 0.7019 - val_auc: 0.7987 Epoch 143: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5502 - acc: 0.7167 - auc: 0.7906 - val_loss: 0.5530 - val_acc: 0.7125 - val_auc: 0.8017 Epoch 144: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5498 - acc: 0.7166 - auc: 0.7910 - val_loss: 0.5754 - val_acc: 0.6940 - val_auc: 0.7963 Epoch 145: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5503 - acc: 0.7163 - auc: 0.7905 - val_loss: 0.5638 - val_acc: 0.7026 - val_auc: 0.8008 Epoch 146: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5499 - acc: 0.7168 - auc: 0.7909 - val_loss: 0.5698 - val_acc: 0.6993 - val_auc: 0.7931 Epoch 147: 2160000/2160000 [==============================] - 78s 36us/sample - loss: 0.5496 - acc: 0.7167 - auc: 0.7912 - val_loss: 0.5644 - val_acc: 0.7019 - val_auc: 0.7978 Epoch 148: 2160000/2160000 [==============================] - 71s 33us/sample - loss: 0.5496 - acc: 0.7169 - auc: 0.7912 - val_loss: 0.5647 - val_acc: 0.7008 - val_auc: 0.8038 Epoch 149: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5496 - acc: 0.7170 - auc: 0.7912 - val_loss: 0.5636 - val_acc: 0.7062 - val_auc: 0.7961 Epoch 150: 2160000/2160000 [==============================] - 70s 33us/sample - loss: 0.5494 - acc: 0.7169 - auc: 0.7914 - val_loss: 0.5548 - val_acc: 0.7117 - val_auc: 0.7991 Epoch 151: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5493 - acc: 0.7169 - auc: 0.7915 - val_loss: 0.5627 - val_acc: 0.7028 - val_auc: 0.8009 Epoch 152: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5486 - acc: 0.7173 - auc: 0.7920 - val_loss: 0.5666 - val_acc: 0.7028 - val_auc: 0.8022 Epoch 153: 2160000/2160000 [==============================] - 83s 39us/sample - loss: 0.5487 - acc: 0.7177 - auc: 0.7921 - val_loss: 0.5583 - val_acc: 0.7089 - val_auc: 0.8017 Epoch 154: 2160000/2160000 [==============================] - 70s 32us/sample - loss: 0.5488 - acc: 0.7174 - auc: 0.7921 - val_loss: 0.5589 - val_acc: 0.7082 - val_auc: 0.8003 Epoch 155: 2160000/2160000 [==============================] - 79s 37us/sample - loss: 0.5478 - acc: 0.7184 - auc: 0.7929 - val_loss: 0.5587 - val_acc: 0.7072 - val_auc: 0.8022 Epoch 156: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5479 - acc: 0.7184 - auc: 0.7928 - val_loss: 0.5680 - val_acc: 0.7020 - val_auc: 0.8001 Epoch 157: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5488 - acc: 0.7176 - auc: 0.7920 - val_loss: 0.5609 - val_acc: 0.7056 - val_auc: 0.8010 Epoch 158: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5484 - acc: 0.7179 - auc: 0.7923 - val_loss: 0.5727 - val_acc: 0.6974 - val_auc: 0.7966 Epoch 159: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5482 - acc: 0.7181 - auc: 0.7926 - val_loss: 0.5739 - val_acc: 0.6967 - val_auc: 0.7947 Epoch 160: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5495 - acc: 0.7168 - auc: 0.7913 - val_loss: 0.5664 - val_acc: 0.7038 - val_auc: 0.7957 Epoch 161: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5484 - acc: 0.7177 - auc: 0.7923 - val_loss: 0.5640 - val_acc: 0.7041 - val_auc: 0.7957 Epoch 162: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5486 - acc: 0.7173 - auc: 0.7921 - val_loss: 0.5661 - val_acc: 0.6998 - val_auc: 0.8022 
Epoch 163: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5476 - acc: 0.7182 - auc: 0.7930 - val_loss: 0.5641 - val_acc: 0.7019 - val_auc: 0.7990 Epoch 164: 2160000/2160000 [==============================] - 89s 41us/sample - loss: 0.5476 - acc: 0.7187 - auc: 0.7931 - val_loss: 0.5731 - val_acc: 0.6949 - val_auc: 0.8001 Epoch 165: 2160000/2160000 [==============================] - 69s 32us/sample - loss: 0.5474 - acc: 0.7185 - auc: 0.7933 - val_loss: 0.5689 - val_acc: 0.6980 - val_auc: 0.8018 Epoch 166: 2160000/2160000 [==============================] - 64s 30us/sample - loss: 0.5474 - acc: 0.7186 - auc: 0.7933 - val_loss: 0.5568 - val_acc: 0.7097 - val_auc: 0.8013 Epoch 167: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5476 - acc: 0.7186 - auc: 0.7931 - val_loss: 0.5541 - val_acc: 0.7129 - val_auc: 0.8026 Epoch 168: 2160000/2160000 [==============================] - 64s 30us/sample - loss: 0.5471 - acc: 0.7189 - auc: 0.7936 - val_loss: 0.5566 - val_acc: 0.7098 - val_auc: 0.8000 Epoch 169: 2160000/2160000 [==============================] - 64s 30us/sample - loss: 0.5474 - acc: 0.7184 - auc: 0.7933 - val_loss: 0.5614 - val_acc: 0.7055 - val_auc: 0.8002 Epoch 170: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5466 - acc: 0.7194 - auc: 0.7940 - val_loss: 0.5573 - val_acc: 0.7116 - val_auc: 0.7998 Epoch 171: 2160000/2160000 [==============================] - 64s 30us/sample - loss: 0.5470 - acc: 0.7187 - auc: 0.7936 - val_loss: 0.5584 - val_acc: 0.7080 - val_auc: 0.8007 Epoch 172: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5460 - acc: 0.7195 - auc: 0.7945 - val_loss: 0.5747 - val_acc: 0.6916 - val_auc: 0.8005 Epoch 173: 2160000/2160000 [==============================] - 66s 30us/sample - loss: 0.5464 - acc: 0.7193 - auc: 0.7942 - val_loss: 0.5640 - val_acc: 0.7045 - val_auc: 0.8004 Epoch 174: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5465 - acc: 0.7190 - auc: 0.7940 - val_loss: 0.5533 - val_acc: 0.7137 - val_auc: 0.7999 Epoch 175: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5466 - acc: 0.7191 - auc: 0.7940 - val_loss: 0.5628 - val_acc: 0.7072 - val_auc: 0.7984 Epoch 176: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5462 - acc: 0.7197 - auc: 0.7945 - val_loss: 0.5654 - val_acc: 0.7037 - val_auc: 0.7990 Epoch 177: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5467 - acc: 0.7188 - auc: 0.7939 - val_loss: 0.5566 - val_acc: 0.7106 - val_auc: 0.8007 Epoch 178: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5465 - acc: 0.7189 - auc: 0.7941 - val_loss: 0.5738 - val_acc: 0.6931 - val_auc: 0.8014 Epoch 179: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5463 - acc: 0.7191 - auc: 0.7942 - val_loss: 0.5642 - val_acc: 0.7061 - val_auc: 0.8020 Epoch 180: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5460 - acc: 0.7193 - auc: 0.7946 - val_loss: 0.5600 - val_acc: 0.7066 - val_auc: 0.8057 Epoch 181: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5458 - acc: 0.7197 - auc: 0.7947 - val_loss: 0.5569 - val_acc: 0.7104 - val_auc: 0.8009 Epoch 182: 2160000/2160000 [==============================] - 67s 31us/sample - loss: 0.5455 - acc: 0.7199 - auc: 0.7950 - val_loss: 0.5654 - val_acc: 0.7023 - val_auc: 0.8018 Epoch 183: 2160000/2160000 
[==============================] - 74s 34us/sample - loss: 0.5461 - acc: 0.7191 - auc: 0.7944 - val_loss: 0.5610 - val_acc: 0.7076 - val_auc: 0.7999 Epoch 184: 2160000/2160000 [==============================] - 80s 37us/sample - loss: 0.5460 - acc: 0.7196 - auc: 0.7946 - val_loss: 0.5610 - val_acc: 0.7042 - val_auc: 0.8014 Epoch 185: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5466 - acc: 0.7188 - auc: 0.7939 - val_loss: 0.5661 - val_acc: 0.7036 - val_auc: 0.7989 Epoch 186: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5467 - acc: 0.7190 - auc: 0.7939 - val_loss: 0.5585 - val_acc: 0.7090 - val_auc: 0.8012 Epoch 187: 2160000/2160000 [==============================] - 76s 35us/sample - loss: 0.5457 - acc: 0.7197 - auc: 0.7948 - val_loss: 0.5594 - val_acc: 0.7086 - val_auc: 0.7999 Epoch 188: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5454 - acc: 0.7202 - auc: 0.7952 - val_loss: 0.5540 - val_acc: 0.7148 - val_auc: 0.8042 Epoch 189: 2160000/2160000 [==============================] - 79s 37us/sample - loss: 0.5451 - acc: 0.7200 - auc: 0.7954 - val_loss: 0.5689 - val_acc: 0.6991 - val_auc: 0.8004 Epoch 190: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5452 - acc: 0.7202 - auc: 0.7953 - val_loss: 0.5685 - val_acc: 0.7016 - val_auc: 0.8004 Epoch 191: 2160000/2160000 [==============================] - 77s 36us/sample - loss: 0.5448 - acc: 0.7205 - auc: 0.7957 - val_loss: 0.5622 - val_acc: 0.7047 - val_auc: 0.8040 Epoch 192: 2160000/2160000 [==============================] - 72s 33us/sample - loss: 0.5441 - acc: 0.7212 - auc: 0.7964 - val_loss: 0.5568 - val_acc: 0.7105 - val_auc: 0.8040 Epoch 193: 2160000/2160000 [==============================] - 76s 35us/sample - loss: 0.5451 - acc: 0.7200 - auc: 0.7954 - val_loss: 0.5576 - val_acc: 0.7095 - val_auc: 0.8026 Epoch 194: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5446 - acc: 0.7205 - auc: 0.7958 - val_loss: 0.5536 - val_acc: 0.7133 - val_auc: 0.8064 Epoch 195: 2160000/2160000 [==============================] - 78s 36us/sample - loss: 0.5438 - acc: 0.7210 - auc: 0.7966 - val_loss: 0.5514 - val_acc: 0.7150 - val_auc: 0.8015 Epoch 196: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5448 - acc: 0.7199 - auc: 0.7956 - val_loss: 0.5652 - val_acc: 0.7044 - val_auc: 0.8007 Epoch 197: 2160000/2160000 [==============================] - 82s 38us/sample - loss: 0.5457 - acc: 0.7197 - auc: 0.7949 - val_loss: 0.5620 - val_acc: 0.7045 - val_auc: 0.8019 Epoch 198: 2160000/2160000 [==============================] - 78s 36us/sample - loss: 0.5458 - acc: 0.7195 - auc: 0.7948 - val_loss: 0.5614 - val_acc: 0.7066 - val_auc: 0.8012 Epoch 199: 2160000/2160000 [==============================] - 74s 34us/sample - loss: 0.5445 - acc: 0.7205 - auc: 0.7960 - val_loss: 0.5696 - val_acc: 0.7012 - val_auc: 0.8019 Epoch 200: 2160000/2160000 [==============================] - 77s 36us/sample - loss: 0.5450 - acc: 0.7200 - auc: 0.7954 - val_loss: 0.5561 - val_acc: 0.7108 - val_auc: 0.8000
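Over the portion of the run shown above, training AUC climbs steadily (0.7586 at epoch 42 to 0.7954 at epoch 200) while validation AUC flattens out around 0.80, peaking at 0.8064 at epoch 194 rather than at the final epoch (0.8000). With logs this long it is easier to locate the best epoch programmatically than by eye. The helper below is a minimal sketch in the spirit of print_logs: it assumes the same log format shown above (one '2160000/2160000' summary line per epoch), and the filename passed to it is a placeholder for whichever log file was printed above.
import re
def best_epoch(log_file):
    # Scan a training log for the epoch with the highest validation AUC.
    # Assumes one per-epoch summary line containing '2160000/2160000',
    # formatted as in the output above.
    best_no, best_val_auc = 0, 0.0
    epoch = 0
    with open(log_file) as f:
        for line in f:
            if line.strip().find('2160000/2160000') == -1:
                continue
            epoch += 1
            match = re.search(r'val_auc: ([0-9.]+)', line)
            if match and float(match.group(1)) > best_val_auc:
                best_no, best_val_auc = epoch, float(match.group(1))
    return best_no, best_val_auc
# 'keras_model-keras_relu.log' is a hypothetical name; substitute the log printed above.
print(best_epoch('keras_model-keras_relu.log'))
The next cell prints the same style of log for the model trained with ELU activations.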
print_logs('keras_model-keras_elu.log')
Epoch 1: 2160000/2160000 [==============================] - 97s 45us/sample - loss: 0.6784 - acc: 0.5618 - auc: 0.5835 - val_loss: 0.6537 - val_acc: 0.6108 - val_auc: 0.6584
Epoch 2: 2160000/2160000 [==============================] - 97s 45us/sample - loss: 0.6528 - acc: 0.6127 - auc: 0.6513 - val_loss: 0.6424 - val_acc: 0.6287 - val_auc: 0.6705
Epoch 3: 2160000/2160000 [==============================] - 85s 39us/sample - loss: 0.6474 - acc: 0.6219 - auc: 0.6616 - val_loss: 0.6426 - val_acc: 0.6292 - val_auc: 0.6726
...
Epoch 20: 2160000/2160000 [==============================] - 86s 40us/sample - loss: 0.6099 - acc: 0.6670 - auc: 0.7262 - val_loss: 0.6027 - val_acc: 0.6683 - val_auc: 0.7367
...
Epoch 40: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5892 - acc: 0.6846 - auc: 0.7505 - val_loss: 0.5753 - val_acc: 0.6951 - val_auc: 0.7670
...
Epoch 60: 2160000/2160000 [==============================] - 65s 30us/sample - loss: 0.5808 - acc: 0.6916 - auc: 0.7598 - val_loss: 0.5660 - val_acc: 0.7020 - val_auc: 0.7747
...
Epoch 80: 2160000/2160000 [==============================] - 81s 38us/sample - loss: 0.5757 - acc: 0.6959 - auc: 0.7651 - val_loss: 0.5652 - val_acc: 0.7023 - val_auc: 0.7778
...
Epoch 100: 2160000/2160000 [==============================] - 73s 34us/sample - loss: 0.5722 - acc: 0.6990 - auc: 0.7688 - val_loss: 0.5600 - val_acc: 0.7075 - val_auc: 0.7809
...
Epoch 120: 2160000/2160000 [==============================] - 84s 39us/sample - loss: 0.5696 - acc: 0.7008 - auc: 0.7715 - val_loss: 0.5562 - val_acc: 0.7107 - val_auc: 0.7845
...
Epoch 140: 2160000/2160000 [==============================] - 70s 32us/sample - loss: 0.5671 - acc: 0.7029 - auc: 0.7740 - val_loss: 0.5546 - val_acc: 0.7121 - val_auc: 0.7860
...
Epoch 160: 2160000/2160000 [==============================] - 83s 38us/sample - loss: 0.5650 - acc: 0.7044 - auc: 0.7760 - val_loss: 0.5527 - val_acc: 0.7140 - val_auc: 0.7883
...
Epoch 180: 2160000/2160000 [==============================] - 78s 36us/sample - loss: 0.5632 - acc: 0.7062 - auc: 0.7779 - val_loss: 0.5504 - val_acc: 0.7155 - val_auc: 0.7902
...
Epoch 198: 2160000/2160000 [==============================] - 75s 35us/sample - loss: 0.5617 - acc: 0.7070 - auc: 0.7794 - val_loss: 0.5518 - val_acc: 0.7148 - val_auc: 0.7892
Epoch 199: 2160000/2160000 [==============================] - 92s 43us/sample - loss: 0.5616 - acc: 0.7072 - auc: 0.7795 - val_loss: 0.5512 - val_acc: 0.7149 - val_auc: 0.7906
Epoch 200: 2160000/2160000 [==============================] - 87s 40us/sample - loss: 0.5615 - acc: 0.7076 - auc: 0.7797 - val_loss: 0.5493 - val_acc: 0.7156 - val_auc: 0.7912
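The ELU run follows a similar trajectory: validation AUC improves from 0.6584 after epoch 1 to 0.7912 after epoch 200, slightly below the previous run's final 0.8000. To compare runs side by side, the final-epoch metrics can be pulled out of each log with the same kind of parsing. The sketch below is only a rough helper, assuming the log format shown above; the filename is the actual ELU log from the cell above.
import re
METRIC_NAMES = ('loss', 'acc', 'auc', 'val_loss', 'val_acc', 'val_auc')
def final_metrics(log_file):
    # Return the metrics reported on the last logged epoch of a training log.
    last_line = None
    with open(log_file) as f:
        for line in f:
            if line.strip().find('2160000/2160000') != -1:
                last_line = line
    metrics = {}
    if last_line:
        for name in METRIC_NAMES:
            # \b keeps 'loss' from matching inside 'val_loss', etc.
            match = re.search(r'\b' + name + r': ([0-9.]+)', last_line)
            if match:
                metrics[name] = float(match.group(1))
    return metrics
# For the log above this should give {'loss': 0.5615, 'acc': 0.7076, 'auc': 0.7797,
# 'val_loss': 0.5493, 'val_acc': 0.7156, 'val_auc': 0.7912}.
print(final_metrics('keras_model-keras_elu.log'))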