Cmput 455 Activities and Readings


Lecture 1 Reading and Activities

Reading: Krakovsky, Reinforcement Renaissance

Do by Sep 6

All papers/articles are on the eClass reading page

Quiz 0 and Quiz 1

Do by Sep 6

All quizzes are on the eClass course page

Activity 1a: Read Course Information

Read by Sep 7 before class.

Activity 1b: Install Python 3 and refresh your Python 3 knowledge

Do by Sep 7 before class, and later review as needed.

Lecture 2 Activities

Activity 2a: Python sample code for 455

Do by Sep 13

Activity 2b: Download and run Go0 and Go1 program code

Do by Sep 13

Download go.tgz from the Python code page and unpack it. See First Go programs - Go0 and Go1 for instructions and explanations.

Activity 2c

Do by Sep 13

Get familiar with the Go0 and Go1 program code which you downloaded in Activity 2b. Study the data structure for the Go board, and the algorithms for playing a move, checking legality, and capturing stones.

Activity 2d: install GoGui

Do by Sep 10

Download and install the GoGui user interface. You can use it for both Go and GoMoku. Then connect the Assignment 1 starter program.

Activity 2e: efficiency of finding blocks in

In class we discussed the code which implements a flood fill (DFS) to find blocks. Is this efficient? Can you think of a faster way?
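For reference, here is a generic flood-fill sketch for finding a block of connected, same-colored stones. It is an illustration only: it uses a simple dict mapping (row, col) to a color, not the actual Go0/Go1 board representation.

```python
# Generic flood fill (iterative DFS) to find the block containing a stone.
# board: dict mapping (row, col) -> color; points not in the dict are empty.

def find_block(board, start):
    """Return the set of points in the block containing `start`."""
    color = board[start]
    block = set()
    stack = [start]
    while stack:
        pt = stack.pop()
        if pt in block:
            continue
        block.add(pt)
        r, c = pt
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if board.get(nb) == color and nb not in block:
                stack.append(nb)
    return block
```

Each flood fill visits every stone of the block once, so recomputing all blocks from scratch is linear in the board size; one direction for a faster approach is an incremental structure, such as union-find, that maintains blocks across moves instead of re-running the search.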

Lecture 3 Reading and Activities


Read by Sep 17: Heingartner, Maybe We Should Leave That Up to the Computer

Activity 3a and 3b: not used this term

Activity 3c: Download Assignment 1 starter code and test cases

Do by Sep 17. See Assignment 1 for details.

Activity 3d: Play a game of Go

Do by Sep 20:

Play a game of Go against a colleague, against the Go1 program, or against an online opponent. See resources - Go.

Activity 3e: finding captured stones

Do by Sep 13: Analyze the algorithm for identifying captured stones implemented in go/ (functions _detect_and_process_capture and _has_liberty). Is it efficient? Can it be improved?

Activity 3f: Watch Two Herb Simon Videos

Watch by Sep 13.

Lecture 4 Activities

Activity 4a

Do by Sep 17:

Get familiar with the gogui-regress tool for automated testing. See Regression testing with gogui-regress.

Activity 4b: Watch Two Daniel Kahneman Videos

Watch by Sep 20.

Lecture 5 Reading and Activities


Read by Sep 20: O'Neil, How algorithms rule our working lives.

Activity 5a: compute size of game tree

Do by Sep 20:

Consider a game tree with constant branching factor b=10 and constant depth d=8. What is the total size of the tree? First, use the exact formula. Then use the simple approximation b^d. How large is the approximation error, as a percentage of the exact value?
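One way to check your answer is a few lines of Python. The exact total is the sum of b^i for i = 0..d, which equals (b^(d+1) - 1)/(b - 1):

```python
# Exact size of a uniform (b, d) tree vs. the approximation b**d.

def tree_size_exact(b, d):
    # sum of b**i for i = 0..d, as a closed formula
    return (b ** (d + 1) - 1) // (b - 1)

def approximation_error_percent(b, d):
    exact = tree_size_exact(b, d)
    return 100 * (exact - b ** d) / exact

print(tree_size_exact(10, 8))              # 111111111
print(approximation_error_percent(10, 8))  # roughly 10 (percent)
```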

Activity 5b: compute size of game tree

Do by Sep 20:

Same as Activity 5a, but with b=8 and d=10. Which tree is bigger, the one in 5a or this one? How much bigger?

Activity 5c: compute number of states in DAG

Do by Sep 20:

Consider a DAG with constant branching factor b=10 and constant depth d=8. Let the root be at level 0. Assume that for each node n at levels 2 and higher, there are two different ways to reach n from the previous level. This means that two different parent nodes p1 and p2 on the previous level both have a move that leads to n.

What is the number of nodes on level 8 of the DAG?

What is the total size of this DAG? How many times smaller is it than the tree from 5a?
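The level sizes can be generated numerically under the assumptions above: the root level has 1 node, level 1 has b nodes, and from level 2 on each level has b/2 times as many nodes as the previous one, because the b edges out of every node are shared between pairs of parents.

```python
# Per-level node counts of the DAG described above.

def dag_level_sizes(b, d):
    sizes = [1]                          # level 0: the root
    for level in range(1, d + 1):
        if level == 1:
            sizes.append(b)              # level 1: no merging yet
        else:
            sizes.append(sizes[-1] * b // 2)  # two parents per node
    return sizes

sizes = dag_level_sizes(10, 8)
print(sizes[8])      # 781250 nodes on level 8
print(sum(sizes))    # 976561 nodes in total
```

Comparing sum(sizes) with the tree size from 5a shows how much sharing of states shrinks the search space.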

Activity 5d: Count states in Tic-Tac-Toe DAG

Do by Sep 20:

See the description of Activity 5d in the Lecture 5 slides.

Activity 5e: Effective branching factor in Tic-Tac-Toe DAG

Do by Sep 20:

Compute the effective branching factor b_d, as explained in the Lecture 5 slides, for all levels of the Tic-Tac-Toe DAG from Activity 5d.

Lecture 6 Activities

Activity 6a: Profile Go1 code

Do by Sep 24:

Add file to your go folder and profile Go1 with it. Try out some changes/optimizations and profile it again.
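The profiling file itself is linked from the course page; as a generic starting point, Python's built-in cProfile and pstats modules can be used as follows. The workload function here is a placeholder, to be replaced by a call into the Go1 code (for example, playing a fixed number of moves).

```python
# Minimal profiling sketch with Python's built-in cProfile and pstats.

import cProfile
import pstats

def workload():
    # placeholder work so the profile has something to show;
    # replace with a call into the Go1 code
    total = 0
    for i in range(100000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time
```

Alternatively, a whole script can be profiled from the command line with `python -m cProfile -s cumtime yourscript.py` (the script name is a placeholder).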

Activity 6b: Profile your Assignment 1 code

Do by Sep 24:

Using Activity 6a as a starting point, now profile your own GoMoku program. Note that this activity is not required for Assignment 1. However, it will help you prepare for the upcoming Assignment 2.

Lecture 7 Reading and Activities


Read by Sep 27: Greenemeier, 20 years after Deep Blue. Scientific American, 2017.

Activity 7a: Review resources related to blind search

This is an optional activity and extends beyond the material of the course. For example, you can review the slides on Blind Search Extras in the resources.

Activity 7b: experiment with

Do by Sep 27:

Re-run the sample code and change the parameters b and d. Under which conditions does random sampling do well, or poorly?

Activity 7c: experiment with changing the sampling limit in

Do by Oct 4:

The sample code uses 1000 tries. Change this value and re-run. How does the number of successful runs change with this limit?
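The setup of the course's sample code is not reproduced here; as an illustration of the idea only, the following sketch samples random leaves of a uniform (b, d) tree containing a single goal leaf and measures how often the goal is found within a given limit of tries.

```python
# Sketch: success rate of random sampling with a fixed try limit.

import random

def random_sampling_succeeds(b, d, limit, rng):
    """Sample up to `limit` random leaves; succeed if the goal leaf is hit."""
    num_leaves = b ** d
    goal = rng.randrange(num_leaves)      # one hidden goal leaf
    return any(rng.randrange(num_leaves) == goal for _ in range(limit))

def success_rate(b, d, limit, runs=1000, seed=0):
    rng = random.Random(seed)
    wins = sum(random_sampling_succeeds(b, d, limit, rng) for _ in range(runs))
    return wins / runs

# With b=2, d=10 there are 1024 leaves; the theoretical success
# probability with 1000 tries is 1 - (1 - 1/1024)**1000, about 0.62.
print(success_rate(2, 10, 1000))
```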

Activity 7d: Expected Number of Steps for Random Sampling

This is an optional activity and is more challenging than usual activities.

Analyze the expected number of steps of random sampling as discussed in the slides. Read and follow the hints there.

Activity 7e: Random Sampling Without Repetition

This is an optional activity and is more challenging than usual activities.

Implement the version of random sampling without repetition as discussed in the slides. Measure the performance of this new code in the framework from

Lecture 8 Activities


Read by Oct 4: Schaeffer et al, Checkers is solved. Science, 2007.

Activity 8a: Size of proof tree in the best case

Do by Oct 4

Consider a uniform (b,d) tree whose root is an OR node, and assume the root is a win. Find a closed formula for the total number of nodes in the proof tree in the best case. Hint 1: the node counts per level are 1, 1, b, b, b^2, b^2, b^3, b^3, ... Hint 2: consider the difference between even and odd depths d.
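A candidate closed formula can be checked numerically by generating the hinted level counts directly and summing them:

```python
# Numeric check for the best-case proof tree size: the per-level
# node counts repeat in pairs, 1, 1, b, b, b**2, b**2, ...

def proof_tree_size(b, d):
    total = 0
    for level in range(d + 1):
        total += b ** (level // 2)   # integer division gives the pairing
    return total

print(proof_tree_size(2, 4))   # levels 1, 1, 2, 2, 4 -> 10
```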

Activity 8b: Size of disproof tree in the best case

Do by Oct 4

In the same setting as Activity 8a, now find the best-case size of the disproof tree, if the root is a loss. Hint 1: the node counts per level are now 1, b, b, b^2, b^2, b^3, b^3, ... Hint 2: there is a very simple relation between the solution of the previous problem and this one; find and use it.

Activity 8c: Experiment with solving TicTacToe positions using boolean minimax and negamax

Do by Oct 4

Run the solvers in and Set up other opening positions and solve them.
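As a hedged illustration of the boolean negamax pattern these solvers use, here is a minimal version on a tiny stand-in game (single-pile Nim: take 1-3 stones, taking the last stone wins). This is not the course's TicTacToe code; the solvers follow the same recursion with a different state and move generator.

```python
# Minimal boolean negamax: True means the player to move can force a win.

def negamax_boolean(stones):
    if stones == 0:
        return False                 # no move left: player to move has lost
    for take in (1, 2, 3):
        # a move wins if it leaves the opponent in a lost position
        if take <= stones and not negamax_boolean(stones - take):
            return True
    return False

# The losing positions are exactly the multiples of 4.
print([n for n in range(1, 13) if not negamax_boolean(n)])  # [4, 8, 12]
```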

Lecture 9 Reading and Activities

Activity 9a: Point target for winning in Go

Do by Oct 11

Consider a board of size n x n, a komi k, and a player color. What is the minimum number of points that the player needs to win the game?

Activity 9b: Implement your 9a solution in Python

Do by Oct 11

Implement your solution from 9a in a Python function.
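A sketch of one possible solution follows, under assumptions that are mine, not necessarily the lecture's: area scoring, every point ends up belonging to exactly one player, and komi is fractional (e.g. 7.5) so ties cannot occur.

```python
# Minimum number of board points needed to win, under the stated assumptions.

import math

def min_points_to_win(n, komi, color):
    """Minimum points `color` needs to win on an n x n board with komi."""
    total = n * n
    if color == "black":
        # need black > white + komi, where white = total - black
        return math.floor((total + komi) / 2) + 1
    else:
        # need white + komi > black, where black = total - white
        return math.floor((total - komi) / 2) + 1

print(min_points_to_win(9, 7.5, "black"))   # 45
print(min_points_to_win(9, 7.5, "white"))   # 37
```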

Lecture 10 Activities

Activity 10a: Implement a heuristic for TicTacToe

Do by Oct 11

See discussion in Lecture 10 slides.

Activity 10b: Measure the effect of your heuristic for solving TicTacToe

Do by Oct 11

Add a case in to solve TicTacToe with your new heuristic. What is the speed difference?

Activity 10c: Measure nodes searched instead of time used

Do by Oct 11

Modify and to return the number of nodes searched as well as the search result. Then run the experiments in with your new instrumented code. What is the tradeoff of adding the heuristic from 10b, in terms of nodes searched vs. time per node?

Lecture 11 Reading


Read by Oct 19: Van der Werf et al, Solving Go on small boards.

No other activities

Just finish assignment 2.

Lecture 12 Activities


Read by Oct 25: Computing Elo Ratings of Move Patterns in the Game of Go. Rémi Coulom, Computer Games Workshop 2007, Amsterdam.

Activity 12a: Estimate Pi using Simulation

Do by Oct 25

Run to estimate the value of pi using simulation. Increase the number of simulations and see if the value becomes more accurate.

Implement a shape different from a quarter circle, for example a triangle or a quarter ellipse, and re-run the code. Compare with the exact area computed using math.

Then, estimate the volume of a unit ball in 3d centered at (0,0,0) (formula x^2 + y^2 + z^2 < 1) and compare with the exact volume.
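The pi experiment follows this generic Monte Carlo pattern (a sketch, independent of the course's file): sample points uniformly in the unit square and count how many fall inside the quarter circle x^2 + y^2 < 1, whose area is pi/4.

```python
# Monte Carlo estimate of pi via the quarter-circle hit rate.

import random

def estimate_pi(num_samples, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1:
            inside += 1
    return 4 * inside / num_samples   # quarter-circle area is pi/4

for n in (1000, 100000):
    print(n, estimate_pi(n))
```

The same pattern handles the other shapes: for a triangle test x + y < 1 (area 1/2), and for the 3d unit ball sample (x, y, z) in [-1, 1]^3 and multiply the hit rate of x^2 + y^2 + z^2 < 1 by the cube's volume 8 (exact ball volume 4*pi/3).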

Activity 12b:

Do by Oct 25

Re-run the experiment Sim(1000) vs Perfect TicTacToe player with more games. Does Sim(1000) lose any games?

Activity 12c: experiment with simulation-based TicTacToe player

Do by Oct 25

Read simulation code in and

Run and vary the number of simulations in the call to randomTTT at the end

Activity 12d: experiment with simulation-based Go player

Do by Oct 25

Read simulation code for Go in Go3 and Go4, especially function play_game in and the move generation functions called by it.

Optional Activity 12e

Write a perfect player with simulation-based tiebreaking, as discussed in the Lecture 12 slides. Run the experiments from that slide with this player to see whether it performs better against the random player.

Lecture 14 Reading and Activities


Read by Nov 1

Parts of Browne et al, A Survey of Monte Carlo Tree Search Methods, as follows:

Activity 14a: experiment with

Do by Nov 1

Using the code in, experiment with different settings for p and limit. Study how many simulations you need to get different levels of accuracy.
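The parameter names below mirror the activity (p and limit), but the actual sample code's interface may differ; this sketch just estimates a true win rate p from `limit` Bernoulli simulations and measures the average estimation error.

```python
# Sketch: how estimation accuracy depends on the number of simulations.

import random

def estimate_winrate(p, limit, rng):
    """Estimate p from `limit` win/loss simulations."""
    wins = sum(1 for _ in range(limit) if rng.random() < p)
    return wins / limit

def mean_abs_error(p, limit, trials=1000, seed=0):
    rng = random.Random(seed)
    return sum(abs(estimate_winrate(p, limit, rng) - p)
               for _ in range(trials)) / trials

for limit in (10, 100, 1000):
    print(limit, mean_abs_error(0.6, limit))
```

The error shrinks roughly like sqrt(p(1-p)/limit), so each extra decimal digit of accuracy costs about 100 times more simulations.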

Activity 14b: experiment with

Do by Nov 1

Play several rounds of How close is your guess? How many simulations did you choose?

Optional Activity 14c: Mystery game

Program your own simulation-based game. It works similarly to Go3, but you choose the winrates of different moves randomly.

Lecture 16 Activities


Read by Nov 15: A Few Useful Things to Know about Machine Learning. Pedro Domingos, Communications of the ACM 55 (10), October 2012, Pages 78-87.

Lecture 18 Activities


Read by Nov 15: Maddison, Huang, Sutskever and Silver, Move Evaluation in Go using Deep Convolutional Neural Networks. CoRR abs/1412.6564.
Optional additional reading: Clark and Storkey, Training Deep Convolutional Neural Networks to Play Go. ICML 2015.

Lecture 19 Activities

Activities 19a - 19d: Watch videos on neural networks

Do by Nov 22:

Watch this series of animations about neural networks by Grant Sanderson of 3Blue1Brown. These videos are a little longer than the usual activities (10-21 minutes per video), but I believe they are an excellent resource.

Activity 19e: Neural networks as function approximation

Do by Nov 22:

Explore the demos (briefly shown in class) on Michael Nielsen's page A visual proof that neural nets can compute any function. Understand in detail the intuition behind the "visual proof".

Lecture 20 Activities


By Nov 29:
Watch Rich Sutton, RL tutorial
Read Silver, Huang et al, Mastering the game of Go with deep neural networks and tree search. Nature 529, 484-489 (28 January 2016). The "first AlphaGo" paper.

Lecture 21 Activities

Lecture 22 Activities


Read by Dec 3: Silver, Schrittwieser et al, Mastering the game of Go without human knowledge. Nature 550, 354-359. The "AlphaGo Zero" paper.
Silver, Hubert et al, A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140-1144 (2018). The "AlphaZero" paper.
Optional reading: Supplementary Materials for this paper.

Lecture 23 Activities

Lecture 24 Activities

Created: Sep 21, 2017. Last modified: Aug 14, 2021.

Martin Müller, Ting-Han Wei, James Wright