If a Job is executed in recovery mode, the repository keeps a list of all elements that have already been executed successfully. So if the load fails, you have the option to restart the Job at the flow that caused the error and continue from there, rather than running everything again from the start only to find that the last flow still fails because of, for example, insufficient disk space.
Keep in mind, though, that this feature restarts entire objects, e.g. a complete DataFlow; it does not capture the status inside a flow. That would be impossible (how would you guarantee that the same data is read from the database in the same order?) or would at least require a lot of overhead to write the current buffer to disk. So all your objects should be written so that they can safely be started a second time.
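As a minimal sketch of what "safe to start a second time" can mean, assume a WorkFlow contains a script that removes only the rows of the current load before the DataFlow inserts them. The datastore name DS_TARGET, the table SALES_FACT and the global variable $G_LOAD_DATE are made up for this example. Because the delete is scoped to the current load, running the step again after a recovery restart does no harm:

    # Hypothetical names: datastore DS_TARGET, table SALES_FACT, variable $G_LOAD_DATE.
    # Deleting only the rows of the current load makes the step idempotent - a second
    # execution after a recovery restart removes whatever the failed run had loaded,
    # and the DataFlow simply inserts those rows again.
    sql('DS_TARGET', 'delete from SALES_FACT where LOAD_DATE = {$G_LOAD_DATE}');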
Actually, this is not much different from the rules above, Restartability for Delta Loads, since we use e.g. the Table Comparison Transform anyway. But there is one important case: when you use a script to truncate or delete rows prior to loading. If the Job is restarted at the point of failure, that script has already executed successfully and would therefore be skipped. To deal with this, such WorkFlows have to have the property Recover as a Unit turned on; they are then treated as successfully executed only once everything inside them has completed without a problem, otherwise the recovery run starts again at the first object inside the WorkFlow.
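To illustrate the problem, imagine a WorkFlow that first runs a script like the one below (again with made-up datastore and table names) and then a DataFlow that reloads the table. In a plain recovery restart the script would be skipped because it already succeeded, so the DataFlow would load on top of whatever rows are still in the target; with Recover as a Unit set on the surrounding WorkFlow, both the script and the DataFlow are executed again from the beginning:

    # Hypothetical pre-load script inside a WorkFlow with Recover as a Unit enabled.
    # Without that property, a recovery restart skips this truncate (it already
    # completed successfully) and re-runs only the failed DataFlow, leaving old or
    # partially loaded rows in the target.
    sql('DS_TARGET', 'truncate table SALES_FACT');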