> Um... df would still show the space used, unless the processes are
> not running. Is this to assume that only $$ is running, and none of
> the processes for the other files are running? And where, exactly,
> is the hate here?

This seems reasonable. Some script has been failing to delete the file
each time because of a race condition. This is normally not a problem
in UNIX, because each part of the system that wants the file says "I'm
done with it", and once all the parts that want the file have given it
up, the space gets reclaimed. That's useful, meaningful, and desirable
behaviour.

> How often do you really encounter this problem?

I encounter this *non*-problem every day. It's so common for programs
to unlink() their files and let the system handle the cleanup once
everyone's done that it's not even notable.

> Seems like a bit of an edge case to me, and given that it's an edge
> case, it seems reasonable to at least have a warning.

Rule, I don't know, about three or four, is "don't check for error
conditions you're not going to handle".

> Would the warning hurt you in some way? Would it break existing
> scripts?

Yes. It would break existing scripts that correctly assume that in the
normal case no part of the script produces any output, and that any
output is an error to be taken seriously.

And you don't want to

    rm -f $file

because the other reasons you might find "-f" necessary SHOULD be
errors. (And don't even start on 'rm 2>/dev/null', or on scripts that
silently ignore errors. Hateful things, the lot of them.)

> Eh, I disagree. In the current environment, I can't get the
> functionality I want (warn me if a file I'm removing is opened for
> writing),

Why do you care if it's opened for writing? Or reading? Or at all?

Wouldn't this solve the original problem?

    #!/bin/sh
    # purge - really remove a file!
    for i
    do
        [ -f "$i" ] && > "$i"
    done
    exec rm ${1+"$@"}
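If you save that somewhere on your $PATH as "purge", here's a rough
sketch of it in action (the filename, sizes, and sleep are just
placeholders):

    #!/bin/sh
    # sketch: truncate-then-rm frees the space even while the file
    # is still held open
    dd if=/dev/zero of=bigfile bs=1024 count=10240  # a 10MB file
    sleep 60 < bigfile &                            # a process holds it open
    purge bigfile                                   # zero it, then rm it
    # The sleeper still holds the (now empty) inode open, but its data
    # blocks are gone, so df shows the space back immediately. A plain
    # rm would have left the 10MB in use until the sleeper exited.
    kill $!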
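(And if you want to see why a silent "-f" is hateful, one line will do:

    rm -f no-such-file; echo $?    # prints 0; the failure just vanishes

A plain rm there prints a diagnostic and exits non-zero, which is
exactly what a script wants.)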