From a95f047bc9f78deb119cbc0ae7cde202e5d5efdc Mon Sep 17 00:00:00 2001
From: Fahad Khalid <f.khalid@fz-juelich.de>
Date: Tue, 3 Sep 2019 09:28:27 +0200
Subject: [PATCH] Minor text updates.

---
 caffe/README.md    | 2 +-
 datasets/README.md | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/caffe/README.md b/caffe/README.md
index 941c3d6..1804dce 100644
--- a/caffe/README.md
+++ b/caffe/README.md
@@ -16,7 +16,7 @@ or one or more custom layers can be written in Python.
 The `mnist_cmd` sub-directory contains configuration and job scripts for running
 Caffe as a command line tool with only built-in layers. This example represents use
 case 1 as described above. The `lenet_solver.prototxt` and `lenet_train_test.prototxt`
-were taken from the MNIST examples directory available in the Caffe repository available
+were taken from the MNIST examples directory available in the Caffe repository
 [here](https://github.com/BVLC/caffe/tree/master/examples/mnist). Minor changes
 have been made just so the path to the input dataset is correct.
 The `caffe` command in the job submission scripts can be modified as follows to run training on
diff --git a/datasets/README.md b/datasets/README.md
index 19e9a40..f69478b 100644
--- a/datasets/README.md
+++ b/datasets/README.md
@@ -8,9 +8,9 @@ maintained by the respective framework developers, as these are the same
 samples uses when getting started with the framework.
 
 However, the original examples are designed to automatically download the required
-dataset in a framework-defined directory. This is not a feasible option as compute
-nodes on the supercomputers do not have access to the Internet. Therefore, the samples
-have been slightly modified to load data from this `datasets` directory. It contains
+dataset in a framework-defined directory. This is not a feasible option when working
+with supercomputers as compute nodes do not have access to the Internet. Therefore, the
+samples have been slightly modified to load data from this `datasets` directory. It contains
 the MNIST dataset in different formats because samples for different frameworks expect
 the dataset in a different format.
 
--
GitLab
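
For context, the `caffe` command referenced in the caffe/README.md hunk is the standard BVLC Caffe command-line tool. A minimal sketch of the kind of training invocation the README describes, assuming `lenet_solver.prototxt` is in the working directory (the exact paths and options used by the `mnist_cmd` job scripts are not shown in this patch):

```bash
# Sketch only: actual paths and device selection depend on the mnist_cmd/ job scripts.
caffe train --solver=lenet_solver.prototxt          # uses the solver_mode set in the prototxt
caffe train --solver=lenet_solver.prototxt --gpu 0  # override the solver and train on GPU 0
```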