CS231n 2018 Assignment 1 - Features

GitHub: https://github.com/Wangxb06/CS231n




Image features exercise



Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.

We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.

All of your work for this exercise will be done in this notebook.

In [1]:
from __future__ import print_function

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

Load data


Similar to previous exercises, we will load CIFAR-10 data from disk.

In [2]:
from cs231n.features import color_histogram_hsv, hog_feature

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'

    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
    
    # Subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    
    return X_train, y_train, X_val, y_val, X_test, y_test

# Clean up variables to prevent loading the data multiple times (which may cause memory issues)
try:
    del X_train, y_train
    del X_test, y_test
    print('Clear previously loaded data.')
except NameError:
    pass

X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()

Extract Features


For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.

Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a worthwhile experiment to try on your own.
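
For reference, here is a minimal sketch of a hue-channel color histogram. It is only illustrative: the actual color_histogram_hsv in cs231n/features.py may choose its bin edges and normalization differently, and the helper name hue_histogram is made up for this example.

import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_histogram(im, nbin=10):
    # Illustrative hue histogram (not the course implementation).
    # im: (H, W, 3) RGB image with values in [0, 255].
    # Returns an (nbin,) histogram over the hue channel that sums to 1.
    hsv = rgb_to_hsv(im.astype(np.float64) / 255.0)   # hue lands in [0, 1]
    hist, _ = np.histogram(hsv[:, :, 0], bins=nbin, range=(0.0, 1.0), density=True)
    return hist / nbin  # rescale the densities so the bins sum to 1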

The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions, evaluates each feature function on each image, and stores the results in a matrix where each row is the concatenation of all feature vectors for a single image.
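
Concretely, extract_features can be thought of as the sketch below: allocate an (N, F) matrix and fill row i with the concatenated feature vectors of image i. This is an assumed reconstruction for illustration, not the exact code in cs231n/features.py.

import numpy as np

def extract_features_sketch(imgs, feature_fns, verbose=False):
    # imgs: (N, H, W, C) images; feature_fns: functions mapping one image to a 1-D vector.
    # Returns an (N, F) matrix with one image's concatenated features per row.
    num_images = imgs.shape[0]
    first = np.concatenate([np.asarray(fn(imgs[0])).ravel() for fn in feature_fns])
    feats = np.zeros((num_images, first.size))
    feats[0] = first
    for i in range(1, num_images):
        feats[i] = np.concatenate([np.asarray(fn(imgs[i])).ravel() for fn in feature_fns])
        if verbose and i % 1000 == 0:
            print('Done extracting features for %d / %d images' % (i, num_images))
    return feats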

In [3]:
from cs231n.features import *

num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)

# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat

# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat

# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])

Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images

Train SVM on features


Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
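
As a reminder of what LinearSVM is assumed to optimize internally, the sketch below shows the vectorized multiclass hinge loss with L2 regularization. Your own earlier implementation may differ in details such as the regularization scaling, so treat this as illustrative rather than the course code.

import numpy as np

def svm_loss_sketch(W, X, y, reg):
    # W: (D, C) weights, X: (N, D) features with the bias folded in, y: (N,) labels.
    num_train = X.shape[0]
    scores = X.dot(W)                                          # (N, C) class scores
    correct = scores[np.arange(num_train), y][:, np.newaxis]   # (N, 1) correct-class scores
    margins = np.maximum(0, scores - correct + 1.0)            # hinge margins with delta = 1
    margins[np.arange(num_train), y] = 0
    loss = margins.sum() / num_train + reg * np.sum(W * W)

    # Each positive margin pushes W[:, j] up and W[:, y_i] down.
    indicator = (margins > 0).astype(np.float64)
    indicator[np.arange(num_train), y] = -indicator.sum(axis=1)
    dW = X.T.dot(indicator) / num_train + 2 * reg * W
    return loss, dW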

In [19]:
# Use the validation set to tune the learning rate and regularization strength

from cs231n.classifiers.linear_classifier import LinearSVM

learning_rates = [5e-7,1e-6,2e-6]
regularization_strengths = [5e-4, 1e-3, 2e-3]

results = {}
best_val = -1
best_svm = None

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful    #
# you should be able to get accuracy of near 0.44 on the validation set.       #
################################################################################
itertimes = 5001
for learningrate in learning_rates:
    for reg in regularization_strengths:
        clf = LinearSVM()
        clf.train(X_train_feats, y_train,
                  learning_rate=learningrate, reg=reg,
                  num_iters=itertimes, batch_size=1000, verbose=True)

        # Evaluate on the training and validation sets
        y_train_pred = clf.predict(X_train_feats)
        acc_train = np.mean(y_train == y_train_pred, dtype=np.float32)
        y_val_pred = clf.predict(X_val_feats)
        acc_val = np.mean(y_val == y_val_pred, dtype=np.float32)
        results[(learningrate, reg)] = (acc_train, acc_val)
        if acc_val > best_val:
            best_val = acc_val
            best_svm = clf
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))
    
print('best validation accuracy achieved during cross-validation: %f' % best_val)

iteration 0 / 5001: loss 9.000274
iteration 100 / 5001: loss 8.996877
iteration 200 / 5001: loss 8.993689
iteration 300 / 5001: loss 8.989137
iteration 400 / 5001: loss 8.985364
iteration 500 / 5001: loss 8.982795
iteration 600 / 5001: loss 8.979144
iteration 700 / 5001: loss 8.975264
iteration 800 / 5001: loss 8.972430
iteration 900 / 5001: loss 8.970207
iteration 1000 / 5001: loss 8.966603
iteration 1100 / 5001: loss 8.962413
iteration 1200 / 5001: loss 8.958702
iteration 1300 / 5001: loss 8.952866
iteration 1400 / 5001: loss 8.950742
iteration 1500 / 5001: loss 8.945848
iteration 1600 / 5001: loss 8.947361
iteration 1700 / 5001: loss 8.943924
iteration 1800 / 5001: loss 8.934685
iteration 1900 / 5001: loss 8.932142
iteration 2000 / 5001: loss 8.933735
iteration 2100 / 5001: loss 8.926261
iteration 2200 / 5001: loss 8.921317
iteration 2300 / 5001: loss 8.918068
iteration 2400 / 5001: loss 8.911522
iteration 2500 / 5001: loss 8.909517
iteration 2600 / 5001: loss 8.913387
iteration 2700 / 5001: loss 8.903629
iteration 2800 / 5001: loss 8.898118
iteration 2900 / 5001: loss 8.901233
iteration 3000 / 5001: loss 8.889439
iteration 3100 / 5001: loss 8.893594
iteration 3200 / 5001: loss 8.890071
iteration 3300 / 5001: loss 8.883126
iteration 3400 / 5001: loss 8.882011
iteration 3500 / 5001: loss 8.875187
iteration 3600 / 5001: loss 8.871379
iteration 3700 / 5001: loss 8.865259
iteration 3800 / 5001: loss 8.867150
iteration 3900 / 5001: loss 8.858230
iteration 4000 / 5001: loss 8.861323
iteration 4100 / 5001: loss 8.854613
iteration 4200 / 5001: loss 8.857092
iteration 4300 / 5001: loss 8.847428
iteration 4400 / 5001: loss 8.843335
iteration 4500 / 5001: loss 8.845503
iteration 4600 / 5001: loss 8.834610
iteration 4700 / 5001: loss 8.833037
iteration 4800 / 5001: loss 8.831865
iteration 4900 / 5001: loss 8.824899
iteration 5000 / 5001: loss 8.830666
iteration 0 / 5001: loss 9.000245
iteration 100 / 5001: loss 8.997517
iteration 200 / 5001: loss 8.992762
iteration 300 / 5001: loss 8.991019
iteration 400 / 5001: loss 8.987413
iteration 500 / 5001: loss 8.982581
iteration 600 / 5001: loss 8.978623
iteration 700 / 5001: loss 8.975379
iteration 800 / 5001: loss 8.971627
iteration 900 / 5001: loss 8.968334
iteration 1000 / 5001: loss 8.962524
iteration 1100 / 5001: loss 8.960125
iteration 1200 / 5001: loss 8.956618
iteration 1300 / 5001: loss 8.953325
iteration 1400 / 5001: loss 8.950805
iteration 1500 / 5001: loss 8.946713
iteration 1600 / 5001: loss 8.944601
iteration 1700 / 5001: loss 8.942360
iteration 1800 / 5001: loss 8.935672
iteration 1900 / 5001: loss 8.933616
iteration 2000 / 5001: loss 8.930223
iteration 2100 / 5001: loss 8.925726
iteration 2200 / 5001: loss 8.922452
iteration 2300 / 5001: loss 8.917473
iteration 2400 / 5001: loss 8.917600
iteration 2500 / 5001: loss 8.913358
iteration 2600 / 5001: loss 8.908366
iteration 2700 / 5001: loss 8.903424
iteration 2800 / 5001: loss 8.901028
iteration 2900 / 5001: loss 8.899409
iteration 3000 / 5001: loss 8.892284
iteration 3100 / 5001: loss 8.894368
iteration 3200 / 5001: loss 8.891326
iteration 3300 / 5001: loss 8.885285
iteration 3400 / 5001: loss 8.881798
iteration 3500 / 5001: loss 8.877224
iteration 3600 / 5001: loss 8.872492
iteration 3700 / 5001: loss 8.879904
iteration 3800 / 5001: loss 8.864758
iteration 3900 / 5001: loss 8.858150
iteration 4000 / 5001: loss 8.855230
iteration 4100 / 5001: loss 8.858029
iteration 4200 / 5001: loss 8.855068
iteration 4300 / 5001: loss 8.856583
iteration 4400 / 5001: loss 8.841206
iteration 4500 / 5001: loss 8.844847
iteration 4600 / 5001: loss 8.843058
iteration 4700 / 5001: loss 8.830601
iteration 4800 / 5001: loss 8.829603
iteration 4900 / 5001: loss 8.819032
iteration 5000 / 5001: loss 8.824079
iteration 0 / 5001: loss 9.000019
iteration 100 / 5001: loss 8.996678
iteration 200 / 5001: loss 8.992257
iteration 300 / 5001: loss 8.989583
iteration 400 / 5001: loss 8.984823
iteration 500 / 5001: loss 8.983093
iteration 600 / 5001: loss 8.978602
iteration 700 / 5001: loss 8.975349
iteration 800 / 5001: loss 8.971849
iteration 900 / 5001: loss 8.967648
iteration 1000 / 5001: loss 8.962560
iteration 1100 / 5001: loss 8.960976
iteration 1200 / 5001: loss 8.956371
iteration 1300 / 5001: loss 8.953732
iteration 1400 / 5001: loss 8.950938
iteration 1500 / 5001: loss 8.947326
iteration 1600 / 5001: loss 8.945254
iteration 1700 / 5001: loss 8.940975
iteration 1800 / 5001: loss 8.936473
iteration 1900 / 5001: loss 8.933457
iteration 2000 / 5001: loss 8.926123
iteration 2100 / 5001: loss 8.926939
iteration 2200 / 5001: loss 8.923747
iteration 2300 / 5001: loss 8.914440
iteration 2400 / 5001: loss 8.917879
iteration 2500 / 5001: loss 8.913645
iteration 2600 / 5001: loss 8.911817
iteration 2700 / 5001: loss 8.907947
iteration 2800 / 5001: loss 8.896826
iteration 2900 / 5001: loss 8.895845
iteration 3000 / 5001: loss 8.892090
iteration 3100 / 5001: loss 8.889492
iteration 3200 / 5001: loss 8.887246
iteration 3300 / 5001: loss 8.885487
iteration 3400 / 5001: loss 8.879017
iteration 3500 / 5001: loss 8.872160
iteration 3600 / 5001: loss 8.874747
iteration 3700 / 5001: loss 8.872730
iteration 3800 / 5001: loss 8.867771
iteration 3900 / 5001: loss 8.859533
iteration 4000 / 5001: loss 8.864881
iteration 4100 / 5001: loss 8.853745
iteration 4200 / 5001: loss 8.849395
iteration 4300 / 5001: loss 8.844947
iteration 4400 / 5001: loss 8.845456
iteration 4500 / 5001: loss 8.849887
iteration 4600 / 5001: loss 8.839662
iteration 4700 / 5001: loss 8.841321
iteration 4800 / 5001: loss 8.825505
iteration 4900 / 5001: loss 8.828142
iteration 5000 / 5001: loss 8.819145
iteration 0 / 5001: loss 8.999844
iteration 100 / 5001: loss 8.992100
iteration 200 / 5001: loss 8.985792
iteration 300 / 5001: loss 8.980136
iteration 400 / 5001: loss 8.971730
iteration 500 / 5001: loss 8.965388
iteration 600 / 5001: loss 8.957111
iteration 700 / 5001: loss 8.951310
iteration 800 / 5001: loss 8.943041
iteration 900 / 5001: loss 8.936161
iteration 1000 / 5001: loss 8.927600
iteration 1100 / 5001: loss 8.924084
iteration 1200 / 5001: loss 8.918596
iteration 1300 / 5001: loss 8.912268
iteration 1400 / 5001: loss 8.903441
iteration 1500 / 5001: loss 8.893053
iteration 1600 / 5001: loss 8.886388
iteration 1700 / 5001: loss 8.880046
iteration 1800 / 5001: loss 8.868954
iteration 1900 / 5001: loss 8.871932
iteration 2000 / 5001: loss 8.858779
iteration 2100 / 5001: loss 8.851354
iteration 2200 / 5001: loss 8.846996
iteration 2300 / 5001: loss 8.836674
iteration 2400 / 5001: loss 8.835812
iteration 2500 / 5001: loss 8.827764
iteration 2600 / 5001: loss 8.815052
iteration 2700 / 5001: loss 8.809636
iteration 2800 / 5001: loss 8.807389
iteration 2900 / 5001: loss 8.792380
iteration 3000 / 5001: loss 8.781627
iteration 3100 / 5001: loss 8.770309
iteration 3200 / 5001: loss 8.762752
iteration 3300 / 5001: loss 8.761680
iteration 3400 / 5001: loss 8.758796
iteration 3500 / 5001: loss 8.751306
iteration 3600 / 5001: loss 8.738351
iteration 3700 / 5001: loss 8.727420
iteration 3800 / 5001: loss 8.739264
iteration 3900 / 5001: loss 8.724599
iteration 4000 / 5001: loss 8.715270
iteration 4100 / 5001: loss 8.705359
iteration 4200 / 5001: loss 8.717998
iteration 4300 / 5001: loss 8.693795
iteration 4400 / 5001: loss 8.688687
iteration 4500 / 5001: loss 8.687635
iteration 4600 / 5001: loss 8.677907
iteration 4700 / 5001: loss 8.664229
iteration 4800 / 5001: loss 8.670889
iteration 4900 / 5001: loss 8.642964
iteration 5000 / 5001: loss 8.656204
iteration 0 / 5001: loss 8.999712
iteration 100 / 5001: loss 8.992689
iteration 200 / 5001: loss 8.985794
iteration 300 / 5001: loss 8.978356
iteration 400 / 5001: loss 8.971956
iteration 500 / 5001: loss 8.962777
iteration 600 / 5001: loss 8.957442
iteration 700 / 5001: loss 8.949210
iteration 800 / 5001: loss 8.945413
iteration 900 / 5001: loss 8.936836
iteration 1000 / 5001: loss 8.933021
iteration 1100 / 5001: loss 8.927274
iteration 1200 / 5001: loss 8.917594
iteration 1300 / 5001: loss 8.909188
iteration 1400 / 5001: loss 8.896891
iteration 1500 / 5001: loss 8.893647
iteration 1600 / 5001: loss 8.886650
iteration 1700 / 5001: loss 8.878502
iteration 1800 / 5001: loss 8.875375
iteration 1900 / 5001: loss 8.858565
iteration 2000 / 5001: loss 8.856387
iteration 2100 / 5001: loss 8.852024
iteration 2200 / 5001: loss 8.847579
iteration 2300 / 5001: loss 8.841469
iteration 2400 / 5001: loss 8.833510
iteration 2500 / 5001: loss 8.814244
iteration 2600 / 5001: loss 8.818050
iteration 2700 / 5001: loss 8.805771
iteration 2800 / 5001: loss 8.800218
iteration 2900 / 5001: loss 8.802139
iteration 3000 / 5001: loss 8.778811
iteration 3100 / 5001: loss 8.776073
iteration 3200 / 5001: loss 8.777760
iteration 3300 / 5001: loss 8.756910
iteration 3400 / 5001: loss 8.746543
iteration 3500 / 5001: loss 8.744209
iteration 3600 / 5001: loss 8.751080
iteration 3700 / 5001: loss 8.729003
iteration 3800 / 5001: loss 8.736213
iteration 3900 / 5001: loss 8.710450
iteration 4000 / 5001: loss 8.730971
iteration 4100 / 5001: loss 8.699991
iteration 4200 / 5001: loss 8.707532
iteration 4300 / 5001: loss 8.684920
iteration 4400 / 5001: loss 8.698681
iteration 4500 / 5001: loss 8.678560
iteration 4600 / 5001: loss 8.660615
iteration 4700 / 5001: loss 8.651569
iteration 4800 / 5001: loss 8.674205
iteration 4900 / 5001: loss 8.646142
iteration 5000 / 5001: loss 8.642565
iteration 0 / 5001: loss 9.000454
iteration 100 / 5001: loss 8.992641
iteration 200 / 5001: loss 8.985738
iteration 300 / 5001: loss 8.980173
iteration 400 / 5001: loss 8.972212
iteration 500 / 5001: loss 8.965030
iteration 600 / 5001: loss 8.958212
iteration 700 / 5001: loss 8.952412
iteration 800 / 5001: loss 8.942868
iteration 900 / 5001: loss 8.939589
iteration 1000 / 5001: loss 8.930094
iteration 1100 / 5001: loss 8.922968
iteration 1200 / 5001: loss 8.916980
iteration 1300 / 5001: loss 8.909424
iteration 1400 / 5001: loss 8.905417
iteration 1500 / 5001: loss 8.892680
iteration 1600 / 5001: loss 8.884086
iteration 1700 / 5001: loss 8.882793
iteration 1800 / 5001: loss 8.873064
iteration 1900 / 5001: loss 8.872143
iteration 2000 / 5001: loss 8.860754
iteration 2100 / 5001: loss 8.850879
iteration 2200 / 5001: loss 8.834424
iteration 2300 / 5001: loss 8.831350
iteration 2400 / 5001: loss 8.825694
iteration 2500 / 5001: loss 8.822170
iteration 2600 / 5001: loss 8.811244
iteration 2700 / 5001: loss 8.817462
iteration 2800 / 5001: loss 8.796984
iteration 2900 / 5001: loss 8.790425
iteration 3000 / 5001: loss 8.793434
iteration 3100 / 5001: loss 8.773102
iteration 3200 / 5001: loss 8.773066
iteration 3300 / 5001: loss 8.760243
iteration 3400 / 5001: loss 8.748888
iteration 3500 / 5001: loss 8.751979
iteration 3600 / 5001: loss 8.737816
iteration 3700 / 5001: loss 8.746606
iteration 3800 / 5001: loss 8.733076
iteration 3900 / 5001: loss 8.723739
iteration 4000 / 5001: loss 8.716498
iteration 4100 / 5001: loss 8.701578
iteration 4200 / 5001: loss 8.714597
iteration 4300 / 5001: loss 8.700270
iteration 4400 / 5001: loss 8.718360
iteration 4500 / 5001: loss 8.666996
iteration 4600 / 5001: loss 8.689551
iteration 4700 / 5001: loss 8.673247
iteration 4800 / 5001: loss 8.656280
iteration 4900 / 5001: loss 8.657117
iteration 5000 / 5001: loss 8.658500
iteration 0 / 5001: loss 9.000037
iteration 100 / 5001: loss 8.985409
iteration 200 / 5001: loss 8.970056
iteration 300 / 5001: loss 8.956821
iteration 400 / 5001: loss 8.943098
iteration 500 / 5001: loss 8.928343
iteration 600 / 5001: loss 8.915329
iteration 700 / 5001: loss 8.901385
iteration 800 / 5001: loss 8.882493
iteration 900 / 5001: loss 8.871488
iteration 1000 / 5001: loss 8.860953
iteration 1100 / 5001: loss 8.843884
iteration 1200 / 5001: loss 8.834872
iteration 1300 / 5001: loss 8.805758
iteration 1400 / 5001: loss 8.803594
iteration 1500 / 5001: loss 8.773966
iteration 1600 / 5001: loss 8.772574
iteration 1700 / 5001: loss 8.757809
iteration 1800 / 5001: loss 8.744080
iteration 1900 / 5001: loss 8.736403
iteration 2000 / 5001: loss 8.731065
iteration 2100 / 5001: loss 8.693059
iteration 2200 / 5001: loss 8.686967
iteration 2300 / 5001: loss 8.673065
iteration 2400 / 5001: loss 8.660695
iteration 2500 / 5001: loss 8.651704
iteration 2600 / 5001: loss 8.631247
iteration 2700 / 5001: loss 8.635769
iteration 2800 / 5001: loss 8.615583
iteration 2900 / 5001: loss 8.578775
iteration 3000 / 5001: loss 8.576917
iteration 3100 / 5001: loss 8.557291
iteration 3200 / 5001: loss 8.541567
iteration 3300 / 5001: loss 8.529796
iteration 3400 / 5001: loss 8.525546
iteration 3500 / 5001: loss 8.511728
iteration 3600 / 5001: loss 8.474170
iteration 3700 / 5001: loss 8.475362
iteration 3800 / 5001: loss 8.463162
iteration 3900 / 5001: loss 8.455787
iteration 4000 / 5001: loss 8.437574
iteration 4100 / 5001: loss 8.424123
iteration 4200 / 5001: loss 8.408526
iteration 4300 / 5001: loss 8.386228
iteration 4400 / 5001: loss 8.372966
iteration 4500 / 5001: loss 8.342845
iteration 4600 / 5001: loss 8.321461
iteration 4700 / 5001: loss 8.355824
iteration 4800 / 5001: loss 8.313732
iteration 4900 / 5001: loss 8.335816
iteration 5000 / 5001: loss 8.277523
iteration 0 / 5001: loss 9.000334
iteration 100 / 5001: loss 8.987506
iteration 200 / 5001: loss 8.971817
iteration 300 / 5001: loss 8.959261
iteration 400 / 5001: loss 8.946625
iteration 500 / 5001: loss 8.932012
iteration 600 / 5001: loss 8.915595
iteration 700 / 5001: loss 8.900019
iteration 800 / 5001: loss 8.889209
iteration 900 / 5001: loss 8.876927
iteration 1000 / 5001: loss 8.859684
iteration 1100 / 5001: loss 8.836797
iteration 1200 / 5001: loss 8.828636
iteration 1300 / 5001: loss 8.807938
iteration 1400 / 5001: loss 8.798492
iteration 1500 / 5001: loss 8.784693
iteration 1600 / 5001: loss 8.774986
iteration 1700 / 5001: loss 8.758276
iteration 1800 / 5001: loss 8.749441
iteration 1900 / 5001: loss 8.734094
iteration 2000 / 5001: loss 8.717970
iteration 2100 / 5001: loss 8.701130
iteration 2200 / 5001: loss 8.675409
iteration 2300 / 5001: loss 8.676948
iteration 2400 / 5001: loss 8.665531
iteration 2500 / 5001: loss 8.637552
iteration 2600 / 5001: loss 8.632817
iteration 2700 / 5001: loss 8.614392
iteration 2800 / 5001: loss 8.605826
iteration 2900 / 5001: loss 8.605130
iteration 3000 / 5001: loss 8.564422
iteration 3100 / 5001: loss 8.567007
iteration 3200 / 5001: loss 8.560088
iteration 3300 / 5001: loss 8.527107
iteration 3400 / 5001: loss 8.514909
iteration 3500 / 5001: loss 8.497792
iteration 3600 / 5001: loss 8.478199
iteration 3700 / 5001: loss 8.478662
iteration 3800 / 5001: loss 8.472183
iteration 3900 / 5001: loss 8.469713
iteration 4000 / 5001: loss 8.452112
iteration 4100 / 5001: loss 8.427891
iteration 4200 / 5001: loss 8.400093
iteration 4300 / 5001: loss 8.400063
iteration 4400 / 5001: loss 8.384995
iteration 4500 / 5001: loss 8.395924
iteration 4600 / 5001: loss 8.346289
iteration 4700 / 5001: loss 8.347959
iteration 4800 / 5001: loss 8.361081
iteration 4900 / 5001: loss 8.325870
iteration 5000 / 5001: loss 8.299318
iteration 0 / 5001: loss 9.000004
iteration 100 / 5001: loss 8.986532
iteration 200 / 5001: loss 8.971519
iteration 300 / 5001: loss 8.957899
iteration 400 / 5001: loss 8.946398
iteration 500 / 5001: loss 8.932046
iteration 600 / 5001: loss 8.918962
iteration 700 / 5001: loss 8.905777
iteration 800 / 5001: loss 8.888301
iteration 900 / 5001: loss 8.875551
iteration 1000 / 5001: loss 8.859345
iteration 1100 / 5001: loss 8.838495
iteration 1200 / 5001: loss 8.829886
iteration 1300 / 5001: loss 8.811187
iteration 1400 / 5001: loss 8.811658
iteration 1500 / 5001: loss 8.791415
iteration 1600 / 5001: loss 8.791506
iteration 1700 / 5001: loss 8.748428
iteration 1800 / 5001: loss 8.744543
iteration 1900 / 5001: loss 8.744481
iteration 2000 / 5001: loss 8.722142
iteration 2100 / 5001: loss 8.716220
iteration 2200 / 5001: loss 8.687906
iteration 2300 / 5001: loss 8.673400
iteration 2400 / 5001: loss 8.661965
iteration 2500 / 5001: loss 8.637091
iteration 2600 / 5001: loss 8.629223
iteration 2700 / 5001: loss 8.623392
iteration 2800 / 5001: loss 8.604662
iteration 2900 / 5001: loss 8.595189
iteration 3000 / 5001: loss 8.554411
iteration 3100 / 5001: loss 8.547505
iteration 3200 / 5001: loss 8.557263
iteration 3300 / 5001: loss 8.536591
iteration 3400 / 5001: loss 8.539746
iteration 3500 / 5001: loss 8.511505
iteration 3600 / 5001: loss 8.502451
iteration 3700 / 5001: loss 8.506744
iteration 3800 / 5001: loss 8.464206
iteration 3900 / 5001: loss 8.435817
iteration 4000 / 5001: loss 8.429009
iteration 4100 / 5001: loss 8.415976
iteration 4200 / 5001: loss 8.410854
iteration 4300 / 5001: loss 8.385152
iteration 4400 / 5001: loss 8.380294
iteration 4500 / 5001: loss 8.365829
iteration 4600 / 5001: loss 8.379761
iteration 4700 / 5001: loss 8.329730
iteration 4800 / 5001: loss 8.316310
iteration 4900 / 5001: loss 8.325863
iteration 5000 / 5001: loss 8.288597
lr 5.000000e-07 reg 5.000000e-04 train accuracy: 0.416020 val accuracy: 0.415000
lr 5.000000e-07 reg 1.000000e-03 train accuracy: 0.413286 val accuracy: 0.407000
lr 5.000000e-07 reg 2.000000e-03 train accuracy: 0.408857 val accuracy: 0.418000
lr 1.000000e-06 reg 5.000000e-04 train accuracy: 0.413531 val accuracy: 0.422000
lr 1.000000e-06 reg 1.000000e-03 train accuracy: 0.413633 val accuracy: 0.415000
lr 1.000000e-06 reg 2.000000e-03 train accuracy: 0.413490 val accuracy: 0.410000
lr 2.000000e-06 reg 5.000000e-04 train accuracy: 0.414551 val accuracy: 0.422000
lr 2.000000e-06 reg 1.000000e-03 train accuracy: 0.415980 val accuracy: 0.414000
lr 2.000000e-06 reg 2.000000e-03 train accuracy: 0.415612 val accuracy: 0.415000
best validation accuracy achieved during cross-validation: 0.422000

In [20]:
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)

0.419

In [21]:
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".

examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()

Inline question 1:


Describe the misclassification results that you see. Do they make sense? The misclassified images tend to have object outlines similar to those of the predicted class, or colors close to those of that class.

Neural Network on image features


Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.

For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
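
For orientation, the forward pass that TwoLayerNet is assumed to implement is affine -> ReLU -> affine -> softmax over the class scores. The sketch below shows that pipeline on the feature vectors; the parameter names W1, b1, W2, b2 are chosen here for illustration only.

import numpy as np

def two_layer_forward(X, W1, b1, W2, b2):
    # X: (N, D) features; W1: (D, H), b1: (H,), W2: (H, C), b2: (C,).
    hidden = np.maximum(0, X.dot(W1) + b1)          # first affine layer + ReLU
    scores = hidden.dot(W2) + b2                    # class scores
    scores -= scores.max(axis=1, keepdims=True)     # subtract the max for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)  # softmax probabilities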

In [22]:
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]

print(X_train_feats.shape)

(49000, 155)
(49000, 154)

In [38]:
from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 200
num_classes = 10
best_net = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.                                              #
################################################################################
bestacc = 0
for hidden_size in [hidden_dim]:
    for learning_rate in [1e-1, 5e-1]:
        for learning_rate_decay in [0.999]:
            for reg in [3e-4, 1e-3, 3e-3]:
                net = TwoLayerNet(input_dim, hidden_size, num_classes)
                # Train the network
                stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                                  num_iters=3000, batch_size=500,
                                  learning_rate=learning_rate, learning_rate_decay=learning_rate_decay,
                                  reg=reg, verbose=True)

                # Predict on the validation set and keep the best model
                val_acc = (net.predict(X_val_feats) == y_val).mean()
                print('hidden_size = %d,learning_rate = %f,learning_rate_decay = %f,reg = %f,Validation accuracy =%f ' %
                      (hidden_size, learning_rate, learning_rate_decay, reg, val_acc))
                if bestacc < val_acc:
                    bestacc = val_acc
                    best_net = net
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

iteration 0 / 3000: loss 2.302585
iteration 100 / 3000: loss 2.302729
iteration 200 / 3000: loss 2.157468
iteration 300 / 3000: loss 1.815652
iteration 400 / 3000: loss 1.607845
iteration 500 / 3000: loss 1.453182
iteration 600 / 3000: loss 1.388963
iteration 700 / 3000: loss 1.419875
iteration 800 / 3000: loss 1.485941
iteration 900 / 3000: loss 1.381507
iteration 1000 / 3000: loss 1.288655
iteration 1100 / 3000: loss 1.291013
iteration 1200 / 3000: loss 1.402743
iteration 1300 / 3000: loss 1.279878
iteration 1400 / 3000: loss 1.329011
iteration 1500 / 3000: loss 1.207615
iteration 1600 / 3000: loss 1.236158
iteration 1700 / 3000: loss 1.282987
iteration 1800 / 3000: loss 1.313573
iteration 1900 / 3000: loss 1.231337
iteration 2000 / 3000: loss 1.158391
iteration 2100 / 3000: loss 1.175842
iteration 2200 / 3000: loss 1.158788
iteration 2300 / 3000: loss 1.222745
iteration 2400 / 3000: loss 1.234942
iteration 2500 / 3000: loss 1.106650
iteration 2600 / 3000: loss 1.197288
iteration 2700 / 3000: loss 1.198282
iteration 2800 / 3000: loss 1.145784
iteration 2900 / 3000: loss 1.230237
hidden_size = 200,learning_rate = 0.100000,learning_rate_decay = 0.999000,reg = 0.000300,Validation accuracy =0.579000
iteration 0 / 3000: loss 2.302586
iteration 100 / 3000: loss 2.302430
iteration 200 / 3000: loss 2.181151
iteration 300 / 3000: loss 1.831448
iteration 400 / 3000: loss 1.615149
iteration 500 / 3000: loss 1.476787
iteration 600 / 3000: loss 1.501958
iteration 700 / 3000: loss 1.465278
iteration 800 / 3000: loss 1.469504
iteration 900 / 3000: loss 1.394179
iteration 1000 / 3000: loss 1.366266
iteration 1100 / 3000: loss 1.321902
iteration 1200 / 3000: loss 1.327500
iteration 1300 / 3000: loss 1.433660
iteration 1400 / 3000: loss 1.323989
iteration 1500 / 3000: loss 1.354662
iteration 1600 / 3000: loss 1.347036
iteration 1700 / 3000: loss 1.337948
iteration 1800 / 3000: loss 1.249414
iteration 1900 / 3000: loss 1.281975
iteration 2000 / 3000: loss 1.312262
iteration 2100 / 3000: loss 1.282523
iteration 2200 / 3000: loss 1.244994
iteration 2300 / 3000: loss 1.202784
iteration 2400 / 3000: loss 1.257801
iteration 2500 / 3000: loss 1.189694
iteration 2600 / 3000: loss 1.174609
iteration 2700 / 3000: loss 1.221305
iteration 2800 / 3000: loss 1.205834
iteration 2900 / 3000: loss 1.204931
hidden_size = 200,learning_rate = 0.100000,learning_rate_decay = 0.999000,reg = 0.001000,Validation accuracy =0.573000
iteration 0 / 3000: loss 2.302586
iteration 100 / 3000: loss 2.302278
iteration 200 / 3000: loss 2.189857
iteration 300 / 3000: loss 1.833549
iteration 400 / 3000: loss 1.638028
iteration 500 / 3000: loss 1.522804
iteration 600 / 3000: loss 1.474484
iteration 700 / 3000: loss 1.477479
iteration 800 / 3000: loss 1.394374
iteration 900 / 3000: loss 1.470889
iteration 1000 / 3000: loss 1.445777
iteration 1100 / 3000: loss 1.389561
iteration 1200 / 3000: loss 1.442031
iteration 1300 / 3000: loss 1.413036
iteration 1400 / 3000: loss 1.373513
iteration 1500 / 3000: loss 1.307440
iteration 1600 / 3000: loss 1.317245
iteration 1700 / 3000: loss 1.398170
iteration 1800 / 3000: loss 1.337983
iteration 1900 / 3000: loss 1.320985
iteration 2000 / 3000: loss 1.332182
iteration 2100 / 3000: loss 1.368192
iteration 2200 / 3000: loss 1.367513
iteration 2300 / 3000: loss 1.314622
iteration 2400 / 3000: loss 1.287991
iteration 2500 / 3000: loss 1.354172
iteration 2600 / 3000: loss 1.333494
iteration 2700 / 3000: loss 1.361529
iteration 2800 / 3000: loss 1.275157
iteration 2900 / 3000: loss 1.314783
hidden_size = 200,learning_rate = 0.100000,learning_rate_decay = 0.999000,reg = 0.003000,Validation accuracy =0.555000
iteration 0 / 3000: loss 2.302585
iteration 100 / 3000: loss 1.591626
iteration 200 / 3000: loss 1.370067
iteration 300 / 3000: loss 1.321008
iteration 400 / 3000: loss 1.214749
iteration 500 / 3000: loss 1.129858
iteration 600 / 3000: loss 1.109910
iteration 700 / 3000: loss 1.147933
iteration 800 / 3000: loss 1.019015
iteration 900 / 3000: loss 1.118047
iteration 1000 / 3000: loss 1.064586
iteration 1100 / 3000: loss 0.966876
iteration 1200 / 3000: loss 1.018922
iteration 1300 / 3000: loss 1.051983
iteration 1400 / 3000: loss 1.010870
iteration 1500 / 3000: loss 0.958339
iteration 1600 / 3000: loss 0.943132
iteration 1700 / 3000: loss 1.025576
iteration 1800 / 3000: loss 0.983655
iteration 1900 / 3000: loss 0.893179
iteration 2000 / 3000: loss 0.916219
iteration 2100 / 3000: loss 0.831638
iteration 2200 / 3000: loss 0.881251
iteration 2300 / 3000: loss 0.895742
iteration 2400 / 3000: loss 0.867612
iteration 2500 / 3000: loss 0.845255
iteration 2600 / 3000: loss 0.864922
iteration 2700 / 3000: loss 0.874401
iteration 2800 / 3000: loss 0.809481
iteration 2900 / 3000: loss 0.860748
hidden_size = 200,learning_rate = 0.500000,learning_rate_decay = 0.999000,reg = 0.000300,Validation accuracy =0.583000
iteration 0 / 3000: loss 2.302585
iteration 100 / 3000: loss 1.468134
iteration 200 / 3000: loss 1.433393
iteration 300 / 3000: loss 1.376202
iteration 400 / 3000: loss 1.304741
iteration 500 / 3000: loss 1.244035
iteration 600 / 3000: loss 1.217934
iteration 700 / 3000: loss 1.184296
iteration 800 / 3000: loss 1.187113
iteration 900 / 3000: loss 1.203770
iteration 1000 / 3000: loss 1.199397
iteration 1100 / 3000: loss 1.215035
iteration 1200 / 3000: loss 1.137480
iteration 1300 / 3000: loss 1.081251
iteration 1400 / 3000: loss 1.093453
iteration 1500 / 3000: loss 1.132282
iteration 1600 / 3000: loss 1.194675
iteration 1700 / 3000: loss 1.043954
iteration 1800 / 3000: loss 1.137123
iteration 1900 / 3000: loss 1.076998
iteration 2000 / 3000: loss 1.150516
iteration 2100 / 3000: loss 0.982572
iteration 2200 / 3000: loss 1.108079
iteration 2300 / 3000: loss 1.088264
iteration 2400 / 3000: loss 1.115443
iteration 2500 / 3000: loss 1.058571
iteration 2600 / 3000: loss 1.106416
iteration 2700 / 3000: loss 1.140680
iteration 2800 / 3000: loss 1.151254
iteration 2900 / 3000: loss 1.070111
hidden_size = 200,learning_rate = 0.500000,learning_rate_decay = 0.999000,reg = 0.001000,Validation accuracy =0.584000
iteration 0 / 3000: loss 2.302586
iteration 100 / 3000: loss 1.531372
iteration 200 / 3000: loss 1.475529
iteration 300 / 3000: loss 1.445394
iteration 400 / 3000: loss 1.378196
iteration 500 / 3000: loss 1.363917
iteration 600 / 3000: loss 1.365908
iteration 700 / 3000: loss 1.371893
iteration 800 / 3000: loss 1.342221
iteration 900 / 3000: loss 1.312124
iteration 1000 / 3000: loss 1.301741
iteration 1100 / 3000: loss 1.283645
iteration 1200 / 3000: loss 1.225358
iteration 1300 / 3000: loss 1.386171
iteration 1400 / 3000: loss 1.318980
iteration 1500 / 3000: loss 1.343715
iteration 1600 / 3000: loss 1.298113
iteration 1700 / 3000: loss 1.318896
iteration 1800 / 3000: loss 1.327029
iteration 1900 / 3000: loss 1.353493
iteration 2000 / 3000: loss 1.280791
iteration 2100 / 3000: loss 1.365322
iteration 2200 / 3000: loss 1.407613
iteration 2300 / 3000: loss 1.326209
iteration 2400 / 3000: loss 1.393393
iteration 2500 / 3000: loss 1.359286
iteration 2600 / 3000: loss 1.293590
iteration 2700 / 3000: loss 1.277245
iteration 2800 / 3000: loss 1.279208
iteration 2900 / 3000: loss 1.305238
hidden_size = 200,learning_rate = 0.500000,learning_rate_decay = 0.999000,reg = 0.003000,Validation accuracy =0.584000

In [39]:
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.

test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)

0.586



Copyright notice: This is an original article by lifewang, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.