Re-creating the supplementary results in the IEEE Signal Processing Letters paper:
https://ieeexplore.ieee.org/document/9541076/
P. M. Baggenstoss, "Discriminative Alignment of Projected Belief Networks,"
in IEEE Signal Processing Letters, vol. 28, pp. 1963-1967, 2021,
doi: 10.1109/LSP.2021.3113833.

---------- Getting Started ----------------

Download the PBN Toolkit and its documentation, and skim the docs to
familiarize yourself with the sections of the graphical interface and the
terminology. Start the Toolkit with:

    python pbntk.py

---------- Data and Parameters ----------------

ds21.mat           : character '3' only, from the reduced MNIST data described in the paper
ds22.mat           : character '8' only, from the reduced MNIST data described in the paper
ds23.mat           : character '9' only, from the reduced MNIST data described in the paper
ds212223.mat       : characters '3', '8', '9' from the MNIST subset described in the paper
pbnc389.py         : network definition
pbnc389_ds21.tgz   : trained straight-PBN network parameters for class ds21 (character '3')
pbnc389_ds22.tgz   : trained straight-PBN network parameters for class ds22 (character '8')
pbnc389_ds23.tgz   : trained straight-PBN network parameters for class ds23 (character '9')
pbnc389.tgz        : all trained network parameters for the PBNs
pbnc389_updn.tgz   : initial network parameters from the UPDN algorithm
pbnc389_dnnmxp.tgz : DNN parameters (make sure to check MAXP)
run_cclass.m       : MATLAB script to display classifier performance

----------- Re-creating the DNN results -------------------

To set up the Toolkit for this network:
o (optional) Set the block size (BS) to 250.
o Set the "cWt" field to 0.
o Load the data set 'ds212223': press "Load" in the Data Section.
o If you want to use the pre-trained parameters:
  o Uncompress the parameters (Linux command): tar xvzf pbnc389_dnnmxp.tgz
o Or, to re-create the DNN network from scratch:
  o Delete all network parameters (Linux command): rm pbnc389_ds212223_lyr*.mat
o Load the model 'pbnc389': press "Load" in the Model Section.
o Check the 'maxp' checkbox in the DNN Section.
o Compile the forward algorithm: press "FWD" in the Forward Section.
o To re-create the DNN network from scratch:
  o Compile the DNN: press "DNN" in the DNN Section. Wait for it to compile.
  o Set L2.REG to 1e-5.
  o Set L.RATE to 1e-4.
  o Train the DNN: press 'TRN' in the DNN Section. Wait until the Cost
    stabilizes at about .01.
  o (optional) Experiment with different L2.REG, L.RATE, and training epochs.
o Or, to use the pre-trained parameters, extract the parameter files
  (Linux command): tar xvzf pbnc389_dnnmxp.tgz
  then re-load the model.
o Test classifier performance under each data partition (train, valid, test)
  by selecting the partition, then pressing 'GO' in the FWD Section. You
  should see something similar to the published results:
    "Tot err= 523 of 17924 Accuracy= 0.9708212452577549"
o To visualize the hidden variables, check the 'PLT' checkbox in the upper
  right before pressing 'GO' in the FWD Section.
o To save the output for later combination with the PBN, select the "test"
  partition, then check the 'SAVE' checkbox before pressing 'GO' in the FWD
  Section. The output is saved to:
    "pbnc389_ds212223_ds212223_out2.mat"

----------- Re-creating the straight-PBN results -------------------

o Set up the Toolkit:
  o Uncheck the 'maxp' checkbox in the DNN Section.
  o Set the "cWt" field to 0.
  o (Re)compile the forward algorithm: press "FWD" in the Forward Section.

For X in 1,2,3:  ('X' is a variable; for example, 'ds2X' means 'ds21', 'ds22', ...)
o Load the data set 'ds2X' (set the "TRAIN DATA" field, then press "LOAD").
o To re-create from scratch, delete all network parameters (Linux command):
    rm pbnc389_ds2X_lyr*.mat
o Or load the pre-trained parameters (Linux command):
    tar xvzf pbnc389_ds2X.tgz
o Load the model 'pbnc389': press "Load" in the Model Section.
o Clear the 'Actvn' box of the sixth layer (layer 5, counting from 0; type
  fc3) in the network parameter table, then press 'APPLY'. Clearing the
  activation function of a layer disables that layer. The sixth layer is a
  classifier output layer, which is not needed in a straight PBN.
o Compile the PBN: press "PBN" in the PBN Section. Wait for it to compile.
  (You only need to do this once.)
o If you are training from scratch:
  o Compile the UPDN algorithm: press "UPDN" in the UPDN Section and wait
    for it to compile. (You only need to do this once.)
  o Set DECAY to 0.99995.
  o Set L.RATE to 1e-4.
  o Train the UPDN: press 'TRN' in the UPDN Section. Wait until the Cost is
    about 2.0. This is the mean square reconstruction error after the data
    passes through the network and back to the visible data. Training with
    UPDN creates excellent initial parameters for the PBN.
  o Check 'upd.s2in' in the PBN Section.
  o Uncheck 'SAVE' at the top right of the Toolkit.
  o Train the PBN: press 'TRN' in the PBN Section. Wait until the J function
    reaches about -300 and the delta is less than 0.02. You can experiment
    with values of DECAY, L.RATE, etc.
  o Save the parameters: press 'SAVE' in the Model Section.
o Once the PBN is trained, you can experiment with back-projection synthesis:
  o Compile the back-projection: press 'SYN' in the PBN-Syn Section.
  o Select the layer to back-project from: the 'LYR' field in the PBN-Syn
    Section (layers 0-4).
  o Check 'PLT' at the top right of the Toolkit.
  o Press 'GO' in the PBN-Syn Section.
o To evaluate the PBN and save the output, so that classifier performance
  can be determined later:
  o Set the 'EVAL DATA' field to 'ds21,ds22,ds23'.
  o Check 'SAVE' at the top right of the Toolkit.
  o Uncheck 'PLT' at the top right of the Toolkit.
  o Select the 'test' partition in the FWD Section.
  o Press 'EVAL' in the PBN Section.
o Once you have done the EVAL step for X=1,2,3, test classifier performance
  in MATLAB:
    >> run_cclass('pbnc389',{'ds21','ds22','ds23'},3,4,1,[0,0,0],'','jout');
  You should get about: "Min error 98 of 1500, 6.533333 percent"

----------- Re-creating the discriminatively-aligned PBN results -------------------

o Set up the Toolkit:
  o Uncheck the 'maxp' checkbox in the DNN Section.
  o Check 'upd.s2in' in the PBN Section.
  o Load the data set 'ds212223' (set the "TRAIN DATA" field, then press "LOAD").
  o Make sure 'class' is checked in the PBN Section.
o If you are not training from scratch, load the pre-trained parameters
  (Linux command): tar xvzf pbnc389.tgz
o To re-create from scratch, delete all network parameters (Linux command):
    rm pbnc389_ds212223_c*_lyr*.mat
o If you are training from scratch, it is a good idea to create a single set
  of initial parameters to be used for all class assumptions:
  o First set "cWt" to 0.
  o Load the model "pbnc389".
  o Clear the 'Actvn' box of the sixth layer (layer 5, counting from 0; type
    fc3) in the network parameter table, then press 'APPLY'.
  o Compile the UPDN algorithm: press "UPDN" in the UPDN Section. Wait for
    it to compile.
  o Set DECAY to 0.99995.
  o Set L.RATE to 1e-4.
  o Train the UPDN: press 'TRN' in the UPDN Section. Wait until the Cost is
    about 2.0. This is the mean square reconstruction error after the data
    passes through the network and back to the visible data.
  o Restore the sixth layer by entering "5" in the 'Actvn' box, then press
    'APPLY'.
  o Re-compile the FWD algorithm.
  o Initialize the sixth layer using PCA: set LYR=5 in the Weights Section,
    then press PCA.
  o To save this set of initial parameters for all class assumptions, set
    "cWt" to 1, 2, and 3, each time pressing "SAVE" in the Model Section.

For X in 1,2,3:
  o Set the "cWt" field to X (i.e. to 1, 2, or 3).
  o Set the "XE" field in the PBN Section to 500.
  o Set the "C" field in the PBN Section to 2.
  o Set the learning rate "LR" to 3e-5.
  o Set "DECAY" to 0.99999.
  o Load the model 'pbnc389': press "Load" in the Model Section.
  o (Re)compile the forward algorithm: press "FWD" in the Forward Section
    (only needed for X=1).
  o Compile the PBN: press "PBN" in the PBN Section. Wait for it to compile.
    (You only need to do this once, i.e. for X=1.)
  o Uncheck 'SAVE' at the top right of the Toolkit.
  o Train the PBN: press 'TRN' in the PBN Section. Wait until the J function
    stops increasing. It should reach about:
      -371 for X=1
      -375 for X=2
      -369 for X=3
    You can experiment with values of DECAY, L.RATE, etc.
  o Save the parameters: press 'SAVE' in the Model Section.

o To evaluate the PBN and save the output, so that classifier performance
  can be determined later:
  o Ensure that the 'EVAL DATA' field is 'ds212223'.
  o Check 'SAVE' at the top right of the Toolkit.
  o Uncheck 'PLT' at the top right of the Toolkit.
  o Select the 'test' partition in the FWD Section.
  For X in 1,2,3:
    o Set the "cWt" field to X (i.e. to 1, 2, or 3).
    o Load the model 'pbnc389': press "Load" in the Model Section.
    o Press 'EVAL' in the PBN Section --> saves to
      "pbnc389_ds212223_cX_ds212223_jout2.mat"
    o Press 'GO' in the FWD Section --> saves to
      "pbnc389_ds212223_cX_ds212223_out2.mat"
o Now test classifier performance in MATLAB:
    >> run_cclass('pbnc389','ds212223',3,5,2,[0,0,0]);
  You should get about:
    "Min errors 632 of 17924, 3.525999 percent"
    "Min errors (comb) 461 of 17924, 2.571971 percent"
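As a quick arithmetic sanity check on the figures quoted in this walkthrough,
the accuracy and error percentages follow directly from the reported error
counts. The sketch below (plain Python; the `accuracy` helper is illustrative
and not part of the PBN Toolkit) recomputes them from the counts copied from
the expected outputs above:

```python
# Recompute the quoted accuracy/error figures from the reported error counts.
# Counts are copied from the expected outputs in this walkthrough; the helper
# function is illustrative only, not part of the PBN Toolkit.

def accuracy(errors, total):
    """Fraction of correctly classified samples."""
    return 1.0 - errors / total

# DNN: "Tot err= 523 of 17924 Accuracy= 0.9708212452577549"
print(f"DNN accuracy:        {accuracy(523, 17924):.16f}")

# Straight PBN: "Min error 98 of 1500, 6.533333 percent"
print(f"Straight-PBN error:  {100 * 98 / 1500:.6f} percent")

# Aligned PBN: "Min errors 632 of 17924, 3.525999 percent"
print(f"Aligned-PBN error:   {100 * 632 / 17924:.6f} percent")

# Combined: "Min errors (comb) 461 of 17924, 2.571971 percent"
print(f"Combined error:      {100 * 461 / 17924:.6f} percent")
```

If your own runs land near these counts, the recomputed percentages should
match the quoted strings to the printed precision.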