Re-creating the results in the EUSIPCO 2022 submission "Trainable Compound Activation Functions for Machine Learning"

1. Get the data. Download and uncompress mn389_5.mat. These are the characters "3", "8", and "9" from MNIST with added dither.

2. Download the Aesara version of the PBN Toolkit at http://class-specific.com/pbntk/ and skim through the docs to familiarize yourself with the various sections of the graphical interface and the terminology.

3. Download the model definition and event list files: tut1z.py, tut1zzz_dbn.py

4. Run the toolkit (see instructions at http://class-specific.com/pbntk/):

       python pbntk.py

   If you are not training from scratch, unpack the pre-trained parameters (Linux command):

       tar xvzf tut1zzz_dbn_mn389_5_lyr*.mat

Train the 1st layer RBM with initial TCAs

   a. Set the batch size (BS) to 125.
   b. Enter 'mn389_5' into the TRAIN DATA field and press LOAD.
   c. Enter 'tut1zzz_dbn' into the MODEL field and press LOAD. If you have unpacked the pre-trained parameters, they will be loaded in; otherwise the parameters are randomly initialized. To delete pre-trained parameters, press DELETE.
   d. Verify that "rnd" is unchecked in the RBM Section - this disables stochastic CD and uses deterministic training.
   e. Disable updating of the TCAs by un-checking "Update" for layers 1 and 3 (layers start at 0) and press APPLY.
   f. Clear any compiled RBM functions by pressing RBM.
   g. Set the desired RBM layer (LYR) to 0.
   h. Set L.RATE (learning rate) to 3e-4. Optionally, set DECAY to 0.99995.
   i. Train the RBM by repeatedly pressing TRN in the RBM Section. The cost should drop as low as 0.014. If the RBM gets stuck in a bad initial condition, you can sometimes "unstick" it by zeroing the hidden variables: press "Zro" for LYR=0.
   j. To view the hidden variables, enable the "PLT" checkbox on the top right, compile the forward function by pressing FWD, and then press GO in the Fwd Section.

Train the 1st layer TCAs

   a. Clear the "Activn" field in layer 2 (the 3rd layer) and press APPLY. This shortens the network to just 1 layer + TCA.
   b. Compile the UPDN by pressing UPDN.
   c. Set L.RATE (learning rate) to 3e-4.
   d. Check "PLT" in the PBN Section. This enables plotting.
   e. Train the UPDN (check "TRN"), and un-check it to stop.
   f. Train for a while to build up a plot history.
   g. Enable updating of the TCAs by checking "Update" for layer 1, then press APPLY.
   h. Re-compile the UPDN by pressing UPDN.
   i. Re-start training. The reconstruction cost should drop drastically.
   j. Train until convergence, then save the parameters by pressing SAVE in the Model Section. The cost should go down to about 0.0085.
   k. To view the hidden variables, enable the "PLT" checkbox on the top right, re-compile the forward function by pressing FWD (because the network was shortened), and then press GO in the Fwd Section.

Re-train the DBN using UPDN

   a. Re-enable the top layers by typing "5,0" in the Activn field for layer 2 (3rd), then press APPLY.
   b. Delete any parameters by pressing DELETE, then press LOAD to initialize new parameters.
   c. Disable updating of the TCAs by un-checking "Update" for layers 1 and 3 (layers start at 0) and press APPLY.
   d. Check "rnd" in the RBM Section - this enables stochastic CD in the top-level RBM (see the sketch after this list).
   e. Add some free-energy cost by putting ".5" in the XE field (the free-energy expression appears in the same sketch).
   f. Compile the UPDN by pressing UPDN.
   g. Enable plotting by checking PLT in the PBN Section.
   h. Select the "valid" data partition in the Fwd Section.
   i. Set L.RATE (learning rate) to 3e-4.
   j. Train the UPDN (check "TRN"), and un-check it to stop. This may take a long time. Train until it converges - the cost should be about 0.03, with about 90-100 errors.
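For reference, the "rnd" checkbox and the XE field relate to standard RBM quantities. The sketch below is a generic NumPy illustration of one CD-1 step, in which rnd toggles between sampling the hidden units (stochastic CD) and using their probabilities directly (deterministic training), plus the usual binary-RBM free energy that the XE weight is assumed to penalize. The function and parameter names are invented for illustration; this is not the PBN Toolkit's Aesara code.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cd1_step(v0, W, b, c, lrate=3e-4, rnd=False, rng=None):
        """One CD-1 update for a binary RBM with visible bias b and hidden bias c."""
        rng = np.random.default_rng() if rng is None else rng
        h0 = sigmoid(v0 @ W + c)                        # hidden probabilities given the data
        h0s = (rng.random(h0.shape) < h0).astype(float) if rnd else h0  # sample vs. deterministic
        v1 = sigmoid(h0s @ W.T + b)                     # reconstruction of the visibles
        h1 = sigmoid(v1 @ W + c)                        # hidden probabilities given the reconstruction
        dW = (v0.T @ h0 - v1.T @ h1) / v0.shape[0]      # positive minus negative statistics
        db = (v0 - v1).mean(axis=0)
        dc = (h0 - h1).mean(axis=0)
        return W + lrate * dW, b + lrate * db, c + lrate * dc

    def free_energy(v, W, b, c):
        """Binary-RBM free energy: F(v) = -v.b - sum_j log(1 + exp(c_j + (v W)_j))."""
        return -(v @ b) - np.log1p(np.exp(v @ W + c)).sum(axis=1)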
Enable TCAs, and continue training

   k. Now enable updating of the TCAs by checking "Update" for layers 1 and 3, then press APPLY (a conceptual sketch of a TCA follows the Notes below).
   l. Continue training - the cost should drop and the errors should fall to about 60 or 70.
   m. To view the hidden variables, check the PLT checkbox on the top right, press FWD to compile the forward function, and press GO to run the forward algorithm.

Notes: You can clear the plotting history by checking PLT on the top right, then pressing Refresh.
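For background on the TCAs whose updates are being enabled and disabled above: a compound activation can be pictured as a trainable mixture of shifted and scaled copies of a base nonlinearity. The sketch below only illustrates that idea; the exact form, parameterization, and names used by the paper and the toolkit are assumptions here, not taken from either.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def compound_activation(x, weights, scales, shifts):
        """Illustrative compound activation: a weighted sum of scaled and shifted
        sigmoids. In this sketch, weights/scales/shifts play the role of the
        trainable TCA parameters; checking "Update" for a TCA layer corresponds
        to letting such parameters receive gradient updates."""
        x = np.asarray(x, dtype=float)
        out = np.zeros_like(x)
        for w, a, b in zip(weights, scales, shifts):
            out += w * sigmoid(a * x + b)
        return out

    # Example: a 3-component compound activation evaluated on a small grid
    y = compound_activation(np.linspace(-3.0, 3.0, 7),
                            weights=[0.5, 1.0, 0.5],
                            scales=[1.0, 2.0, 4.0],
                            shifts=[-1.0, 0.0, 1.0])
    print(y)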