12 changes: 10 additions & 2 deletions learning_curve.py
@@ -8,7 +8,7 @@

data = load_digits()
print data.DESCR
num_trials = 10
num_trials = 2500
train_percentages = range(5,95,5)
test_accuracies = numpy.zeros(len(train_percentages))

@@ -17,7 +17,15 @@
# You should repeat each training percentage num_trials times to smooth out variability
# for consistency with the previous example use model = LogisticRegression(C=10**-10) for your learner

# TODO: your code here
for i in range(len(train_percentages)):
    trial_accuracies = numpy.zeros(num_trials)
    for trial in range(num_trials):
        # convert the percentage to a fraction so train_test_split treats it as a
        # proportion of the data rather than an absolute number of samples
        X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, train_size=train_percentages[i] / 100.0)
        model = LogisticRegression(C=10**-10)
        model.fit(X_train, y_train)
        trial_accuracies[trial] = model.score(X_test, y_test)
    # average over all trials to smooth out split-to-split variability
    test_accuracies[i] = trial_accuracies.mean()


fig = plt.figure()
plt.plot(train_percentages, test_accuracies)
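As an aside (not part of this PR), a similar curve can be produced with scikit-learn's built-in learning_curve helper, which uses cross-validation instead of repeated random splits. This is only a minimal sketch, assuming a recent scikit-learn layout (sklearn.model_selection); the cv=5 and train_sizes values are arbitrary choices for illustration.

import numpy
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

data = load_digits()
# fractions of the data to use for training, roughly matching range(5, 95, 5)
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(C=10**-10),
    data.data,
    data.target,
    train_sizes=numpy.linspace(0.05, 0.9, 18),
    cv=5,
)
plt.plot(train_sizes, test_scores.mean(axis=1))
plt.xlabel('Number of Training Samples')
plt.ylabel('Mean Test Accuracy')
plt.show()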
5 changes: 5 additions & 0 deletions questions.txt
@@ -0,0 +1,5 @@

1. The general trend of the curve is a positive correlation: accuracy on the test set increases as the percentage of data used for training increases.
2. There is more noise toward the two ends of the curve, probably because the split is heavily imbalanced in either direction, leaving too few samples either to train on or to test against.
3. The graph is fairly smooth at around 3000 trials.
4. As C increases, the graph becomes more of a curve than a line, appearing to approach some sort of asymptote (see the sketch below).
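A rough sketch of how the observation in answer 4 could be reproduced: sweep a few values of C and overlay the resulting curves. This is not part of the PR; curve_for_C is a hypothetical helper that wraps the same averaging loop as learning_curve.py, and the num_trials=50 and C values are arbitrary choices to keep the run short.

import numpy
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_digits()
train_percentages = range(5, 95, 5)

def curve_for_C(C, num_trials=50):
    # mean test accuracy at each training percentage for a given regularization C
    accuracies = numpy.zeros(len(train_percentages))
    for i, pct in enumerate(train_percentages):
        trials = numpy.zeros(num_trials)
        for t in range(num_trials):
            X_train, X_test, y_train, y_test = train_test_split(
                data.data, data.target, train_size=pct / 100.0)
            model = LogisticRegression(C=C)
            model.fit(X_train, y_train)
            trials[t] = model.score(X_test, y_test)
        accuracies[i] = trials.mean()
    return accuracies

for C in (10**-10, 10**-5, 1.0):
    plt.plot(train_percentages, curve_for_C(C), label='C = %g' % C)
plt.xlabel('Percentage of Data Used for Training')
plt.ylabel('Accuracy on Test Set')
plt.legend()
plt.show()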