I find it best to run these tests in a ramfs to make them run faster (and
not pound the disk).
+
+A quick word about running some of these tests...
+
+
+There are two tests, one simulating nor and the other nand, labelled
+
+*nor*.sh and *nand*.sh
+
+These can be run in the local directory as follows:
+$ ./init_fw_update_test_nand.sh
+$ ./run_fw_update_test_nand.sh
+
+NB These create simulation files in the current directory, so only one
+instance can be run in a directory.
+
+The number of iterations can be set by adding a numerical parameter
+
+$ ./init_fw_update_test_nand.sh
+$ ./run_fw_update_test_nand.sh 5000
+
+Since the test creates snapshot files between iterations, it is relatively
+slow when run against a hard disk. It is far better to run against a RAM
+disk:
+
+$ mkdir xxx
+$ mount -t tmpfs none xxx
+$ cd xxx
+$ cp ../*sh .
+$ ln -s ../yaffs_test yaffs_test
+$ ./init_fw_update_test_nand.sh
+$ ./run_fw_update_test_nand.sh
+
+The above is also wrapped in a script called manage_nor_test.sh which
+performs all of the above steps. The manage_nor_test.sh script accepts an
+optional parameter specifying an instance name; named instances create
+correspondingly named directories.
+
+If you want to run multiple instances then it is easy to do so with
+xterm as follows:
+
+$ xterm -e "`pwd`/manage_nor_test.sh 1"&
+$ xterm -e "`pwd`/manage_nor_test.sh 2"&
+$ xterm -e "`pwd`/manage_nor_test.sh 3"&
+...
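+When a batch of named instances has finished, the tmpnor<id>/tmpnand<id>
+tmpfs mounts created by the manage scripts are left behind. A minimal
+cleanup sketch (a hypothetical helper, not part of these patches; it
+assumes it is run from the parent directory and that the util-linux
+mountpoint utility is available):

```shell
#! /bin/sh
# Hypothetical cleanup helper: unmount and remove the per-instance
# tmpfs directories (tmpnor<id>, tmpnand<id>) left by the manage scripts.
cleanup_instances() {
	for d in tmpnor* tmpnand*; do
		# The glob may match nothing; skip the literal pattern.
		[ -d "$d" ] || continue
		# Only unmount directories that are actually mount points.
		if mountpoint -q "$d"; then
			sudo umount "$d"
		fi
		rmdir "$d"
	done
}

cleanup_instances
```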
+
--- /dev/null
+#! /bin/sh
+
+dir_id=-none
+[ -z "$1" ] || dir_id=$1
+
+RUNDIR=`pwd`/tmpnand$dir_id
+mkdir "$RUNDIR"
+sudo mount -t tmpfs none "$RUNDIR"
+sudo chmod a+wr "$RUNDIR"
+cd "$RUNDIR" || exit 1
+cp ../*sh .
+ln -s ../yaffs_test yaffs_test
+
+./init_fw_update_test_nand.sh
+./run_fw_update_test_nand.sh
+
--- /dev/null
+#! /bin/sh
+
+dir_id=-none
+[ -z "$1" ] || dir_id=$1
+
+RUNDIR=`pwd`/tmpnor$dir_id
+mkdir "$RUNDIR"
+sudo mount -t tmpfs none "$RUNDIR"
+sudo chmod a+wr "$RUNDIR"
+cd "$RUNDIR" || exit 1
+cp ../*sh .
+ln -s ../yaffs_test yaffs_test
+
+./init_fw_update_test_nor.sh
+./run_fw_update_test_nor.sh
+
bi->blockState = YAFFS_BLOCK_STATE_DIRTY;
+ /* If this is the block being garbage collected then stop gc'ing this block */
+ if(blockNo == dev->gcBlock)
+ dev->gcBlock = -1;
+
if (!bi->needsRetiring) {
yaffs_InvalidateCheckpoint(dev);
erasedOk = yaffs_EraseBlockInNAND(dev, blockNo);
bi->hasShrinkHeader = 0; /* clear the flag so that the block can erase */
- /* Take off the number of soft deleted entries because
- * they're going to get really deleted during GC.
- */
- if(dev->gcChunk == 0) /* first time through for this block */
- dev->nFreeChunks -= bi->softDeletions;
-
dev->isDoingGC = 1;
if (isCheckpointBlock ||
* No need to copy this, just forget about it and
* fix up the object.
*/
+
+	/* The free chunk count already includes soft-deleted chunks.
+	 * However, this chunk will soon be really deleted, which will
+	 * increment the free chunk count, so decrement it here to keep
+	 * the accounting correct.
+	 */
+ dev->nFreeChunks--;
object->nDataChunks--;
- /* If the gc completed then clear the current gcBlock so that we find another. */
- if (bi->blockState != YAFFS_BLOCK_STATE_COLLECTING) {
+ if (bi->blockState == YAFFS_BLOCK_STATE_COLLECTING) {
+ /*
+ * The gc did not complete. Set block state back to FULL
+ * because checkpointing does not restore gc.
+ */
+ bi->blockState = YAFFS_BLOCK_STATE_FULL;
+ } else {
+ /* The gc completed. */
chunksAfter = yaffs_GetErasedChunks(dev);
if (chunksBefore >= chunksAfter) {
T(YAFFS_TRACE_GC,