Viewing Issue Simple Details
ID: 0000545
Category: [1003.1(2008)/Issue 7] Base Definitions and Headers
Severity: Objection
Type: Enhancement Request
Date Submitted: 2012-02-25 02:15
Last Update: 2012-04-02 15:07
Reporter: oiaohm
View Status: public
Assigned To: ajosey
Priority: normal
Resolution: Duplicate
Status: Closed
Name: Peter Dolding
Organization:
User Reference:
Section: XBD 3.170 Filename
Page Number: 60
Line Number: 1781
Interp Status: ---
Final Accepted Text:
Summary: 0000545: Escape Filenames as per Uniform Resource Identifier Encoding
Description

This is intended to solve, once and for all, the problem of filenames containing characters that make the shell and other programs do bad and unexpected things, such as rm -i * in a directory containing a file named -rf, which turns the command into rm -i -rf <everything else * matched>. It fixes the control-character issue, since bytes 0-32 become %00-%20, and it fixes the leading-dash-in-filename issue. It also removes a historic limit: NUL and / could then be used in filenames. Escaping 0-47 as well as 123-127 might be wise too; this is finer detail, and I have not fully worked out exactly which characters must be mandatorily escaped.

Migration should be fine, since applications using filenames containing % are rare. While migrating, file operations can be allowed to accept unescaped forms; it is only lookups that must return escaped names. As far as I can see, this change will not completely break anything currently on POSIX, unless it is something really rare. It may at first upset things like HTTP servers, due to double encoding, but it also allows more efficient operation of HTTP servers and anything else using URIs, since filenames are pre-encoded and can be cached system-wide in encoded form.

Programs displaying filenames to users will also have to be updated so that users do not see %20 for a space. That does not make an application non-functional, merely annoying to users; annoyance, with the system still doing what the user tells it, I can live with, since it can be fixed in time, whereas rm -rf * doing something the user did not want may never be fixable. How ls and the other file utilities display these filenames to users is a separate issue; ls and the other utilities feeding encoded names into other applications solves a great many problems.

I see no other valid way to solve shell and application malfunction caused by the characters a filename contains: the characters have to be removed from the view of applications. Removing them from on-disc filesystem support is not possible; by removing characters from disc support you create somewhere malware could hide. The reason for using Uniform Resource Identifier encoding rather than \n-style backslash escaping is simple: you do not want the shell processing the escapes unless directed to, so ls `echo *` with a file named "this that" still works.
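The one-way escaping the description proposes can be sketched as follows. This is a hypothetical helper, not any real POSIX API; the set of escaped bytes (0-32, plus '%', '/', and '-') is the submitter's tentative list:

```python
# Sketch of the proposed one-way escaping. DANGEROUS is the submitter's
# tentative set: bytes 0-32 (controls and space), plus '%' (the escape
# character itself), '/' (path separator), and '-' (option lookalike).
DANGEROUS = set(range(0x00, 0x21)) | {ord('%'), ord('/'), ord('-')}

def encode_name(raw: bytes) -> str:
    """Percent-escape the dangerous bytes; leave everything else literal."""
    return ''.join(f'%{b:02X}' if b in DANGEROUS else chr(b)
                   for b in raw)

print(encode_name(b'-rf'))        # no longer looks like an option
print(encode_name(b'this that'))  # survives shell word splitting
```

Under such a rule, a directory entry named -rf would reach rm via * as %2Drf, which no option parser mistakes for -rf.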
Desired Action

Vol. 1, Page 60, lines 1782-1784: Change:

"The characters composing the name may be selected from the set of all character values excluding the <slash> character and the null byte."

to:

"The characters composing the name may be any character values, including the <slash> character and the null byte, on disc. All names processed by and provided to applications shall be escaped as per Uniform Resource Identifier encoding, to prevent any issue with processing names due to the characters they contain. No application shall see raw characters 0-32, 45, or <other characters harmful to the shell> from any file operation; escaping these raw characters is mandatory."
Tags: No tags attached.
Attached Files:

Relationships:

Notes
(0001148) wlerch (reporter) 2012-02-25 04:15 |
In other words, this proposal introduces a distinction between what I would call "raw" filenames, which may contain arbitrary characters but are never directly visible through any POSIX interface, and "encoded" filenames, used in all POSIX APIs but restricted by rules that not only forbid control characters and other "dangerous" characters, but also require that any % character belong to a valid URI-encoded sequence. Given that POSIX has no business dictating to implementations how to store filenames on disc, would these "raw" filenames even need to be mentioned in the normative text of the standard? Perhaps just in an explanation of the encoding rules in the Rationale? Would you consider "%66%6f%6f" and "%66%6F%6F" to be equivalent to "foo", or would you consider them invalid?
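wlerch's equivalence question has a conventional answer in RFC 3986, whose hex digits are case-insensitive. Python's decoder, used here only as a stand-in for whatever decoder the proposal would mandate, follows that rule:

```python
from urllib.parse import unquote

# RFC 3986 section 2.1 treats the two hex digits of a percent-escape
# case-insensitively, so both spellings decode to the same name.
print(unquote('%66%6f%6f'))  # -> foo
print(unquote('%66%6F%6F'))  # -> foo
```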
(0001149) oiaohm (reporter) 2012-02-25 05:35 |
wlerch: Uniform Resource Identifier encoding would class "%66%6f%6f" and "%66%6F%6F" as "foo". I am not going to change that, because humans typing filenames will mix upper and lower case; I would even tolerate the horrible "%66%6f%6F". I should have cited it: http://www.ietf.org/rfc/rfc3986.txt section 2.1 defines how to encode and decode. Section 2.2 (Reserved Characters) is not broad enough for what we require to make the shell safe. In particular, section 2.3 puts - among the unreserved characters, which cannot stand, because it leaves the leading-dash issue in effect when doing rm -i * with a -rf file in the directory. Yet a URI is not upset by characters being encoded when they do not need to be; the good part of URI encoding is that encoding extra characters does not alter the decoder at all.

The difference here is that I am not forbidding anything. If you call fopen("\t\nsome file","r") after my alteration, and the POSIX ABI runs a URI decode on that string, the string does not change; this is where URI encoding is good, since decoding is a no-op when the string contains no %. So a program can send all those nasty control characters to a filesystem to store without being blocked. Blocking can mean failed applications, and if a program has some valid reason to do this we really have no reason to stop it. If I were forbidding, I would have to alter Vol. 1, Page 77 (append after line 2199), Vol. 2, page 480, and Vol. 2, page 1382, which I am not. I might need to alter those slightly to add that if you wish to use %, <slash>, or NUL in a filename you have to encode them. The rest of the 127 characters do not have to be encoded on the application side unless you are executing other applications through the shell; the encoding prevents the POSIX shell from doing stupid things. "No application must see raw chars 0-32,45 and <other harmful chars to shell> from any file operation" is my attempt to describe a one-way filter.

An application can send whatever it likes; only <slash>, NUL, and % must be encoded when passing names to the POSIX file APIs. The POSIX API will always hand the application encoded names, whether it is querying a file's name, reading process tables, or whatever; the application can decode the filename if it wishes, or use it encoded as-is. The reason for making it a one-way filter is to break the minimum number of existing applications; of course the change may still break a few, hopefully only a small few. The POSIX API now presumes the application is poorly coded and its author never considered that it might get \n, \t, <space>, or other nasty characters in a filename, so it encodes those to keep the application from hitting errors. That does not mean the POSIX API itself cannot process those characters without issue.

There are issues with filenames containing NUL that I have not addressed. NUL requires counted strings, not just NUL-terminated strings, at the filesystem-driver level; you need encoding in place before taking on that devil. Since NUL can appear in a filename after this change, programmers who decode a filename have to be aware of it, and counted strings are required whenever a NUL may be present or left undecoded. This gets POSIX out of the job of dictating to either the disc or the application: POSIX simply refuses to hand applications data they cannot handle, which cures the problem in almost all cases, other than the rare ones where the encoding change itself breaks an application. It expands what POSIX can handle while stopping a historic issue dead. The price, I think, is quite small: exactly one character, %, that cannot be used directly and has to be escaped. Yes, raw filenames need to be mentioned, in the sense that we have no right to dictate limits: raw filenames must always be assumed to be any combination of characters that could possibly occur, even ones we would not particularly like. POSIX needs to be designed to take whatever comes its way, no matter how nasty.
(0001150) oiaohm (reporter) 2012-02-25 05:49 |
URI decoding also handles things like "%sdas": since s is not a hex digit, the % goes straight through. Only a % followed by two hex digits (0-9, A-F, a-f) decodes to a different character, so %-related errors are limited with URI decoding. It is the least nasty encoding I can pick that the POSIX shell will not try to mess with.
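That lenient behaviour can be checked with Python's urllib.parse.unquote, used here purely as a stand-in for the proposed decoder:

```python
from urllib.parse import unquote

# A '%' not followed by two hex digits passes through unchanged, and a
# string containing no '%' at all is untouched (the decode is a no-op).
print(unquote('%sdas'))          # unchanged: 's' is not a hex digit
print(unquote('\t\nsome file'))  # unchanged: no '%' present
print(unquote('%20'))            # a real escape still decodes to a space
```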
(0001151) wpollock (reporter) 2012-02-25 06:40 |
I have a dim memory that encoding was discussed as a potential solution but rejected, long ago. But if encoding filenames is on the table again, we should consider other schemes and decide which would serve best. For example, instead of URI ("percent") encoding, consider Punycode; see <https://www.rfc-editor.org/rfc/rfc3492.txt>. Punycode has several advantages. Quoting from the RFC: it "uniquely and reversibly transforms a Unicode string into an ASCII string. ASCII characters in the Unicode string are represented literally, and non-ASCII characters are represented by ASCII characters that are allowed in host name labels (letters, digits, and hyphens)."

So not only does this solve the newlines issue, it would allow any Unicode filename, and not much needs to change: all filenames seen by POSIX utilities (and most parts of the kernel) would be drawn from the subset of ASCII allowed in host name labels, which is a proper subset of the POSIX portable filename character set. Most current filenames would not need encoding at all, and when encoded, the names are compact.

Another possibility is to endorse Joliet-like extensions (as in ISO 9660), where files can have any name, but an additional compliant name exists too and is passed to any utility not known to work with arbitrary filenames. (Think of how MS-DOS-era systems support long Unicode filenames on old FAT filesystems.) But I like Punycode. It is already used in most web browsers to support IRIs, and it solves other problems (of Unicode in filenames) cleanly.

Note that none of these proposals handles the case where a given name has multiple forms. Fortunately, I do not think that is an issue for POSIX; but if it is decided that it is a problem, the encoding change can specify Unicode normalization too.
(0001152) oiaohm (reporter) 2012-02-25 08:28 |
Joliet-like extensions we can forget right now; they are mostly superseded by UDF, and Joliet is only for CD-sized media. DVDs use UDF, whose OSTA Compressed Unicode is another horrid beast in the same class as Punycode. Punycode is in fact little used, and not suitable in any way, shape, or form.

wpollock: Punycode on a UTF-8 system looks like any other string that could be a filename. Take the RFC's example "<sono><supiido><de>" (U+305D U+306E U+30B9 U+30D4 U+30FC U+30C9 U+3067), Punycode "d9juau41awczczp": how do I know that Punycode string is Japanese text and not some name a random-number generator created? On a UTF-8 filesystem, you don't. Next, every UTF-8 system would have the crud kicked out of it by Punycode, because strings must be converted to UTF-16 before compression, adding even more load; Punycode is a very complex encode and decode. URI encoding, http://www.ietf.org/rfc/rfc3986.txt, has some very distinct properties. Most POSIX OSes are UTF-8; the only OS really hurt by going this way is Windows, which has had poor POSIX conformance anyhow. And that hardly matters, considering that the URL strings web browsers use are URIs and are UTF-8; nobody dares send UTF-16 over the Internet, as it mostly will not work. So users already deal with URIs daily, and any OS more or less has to support them well.

URI encoding and decoding has two key differences from many other encoding options. First, a URI encoder can detect already-encoded parts and not encode them again. Punycode turns really horrid on the second point: you need to be able to partially decode, and how do you half-decode Punycode? You can display <space> in your interface but you cannot display \t; with URI encoding you can leave \t encoded, decode <space>, and display that to the user. Half-decoding works with URIs. Try sorting files by the last character of the name with Punycode; with URI encoding, getting the last character and sorting by it is not hard, for both left-to-right and right-to-left languages.

Punycode was rejected for URIs because it is basically not workable for something you plan to keep manipulating. Simple string operations do not work on Punycode: to join two Punycode strings, you must decode both, concatenate, and re-encode. URI strings you can simply strcat, then encode or decode as needed; URI encoding does not care if you join an encoded string to a decoded one, except for the one case of a % with hex digits following turning into a character on decode. So as long as the user did not want a filename containing a literal %hex sequence, applying a decode or encode causes no error.

Punycode would also require altering existing applications that carry filenames internally. You are presuming that no existing application holds Unicode filenames inside; not a good assumption. POSIX might say no, but most Linux, BSD, and Unix systems accept UTF-8 filenames. URIs are also already used on the desktop and in web servers, so they are field-tested for file access; URIs are far more prevalent than most people realize: file:///somewhere is a URI.

There is a lot of processing needed to compress or decompress with Punycode. And to really ruin your day: Punycode is rarely sent to browsers or between DNS servers; most DNS traffic is UTF-8 with checksums in the form of signing, because sending UTF-8 plus checksums is lighter than doing Punycode. ASCII is no longer used by most websites; most are UTF-8, so when a user saves a web page from the Internet we would have to run Punycode compression, which is not wise. UTF-8 to UTF-16 on Windows is bad enough without UTF-8 to UTF-16 to Punycode. When listing files, the URI conversion is much faster than Punycode; URI encoding is already tested as working with filesystems, and we know it does not cause lethal levels of CPU load. The key to keeping the load down is allowing the minimum decoding possible.

On "none of these proposals handle the case when a given name has multiple forms ... the change for encoding can specify Unicode normalization too": URI encoding is UTF-8, so sticking with it takes out all the ASCII-portable-characters business. Yes, Unicode is covered: %C3%80 is LATIN CAPITAL LETTER A WITH GRAVE. Basically we do not need separate Unicode normalization with URI encoding; since the stream is UTF-8, Unicode is already in. URI encoding does not insist that Unicode characters be %-encoded; it only encodes the troublemaking characters, to three bytes each, and the rest of the stream, like that complex name, is left alone. Of course, I have nothing against adding a POSIX function to ask a directory which characters can be used in a filename.

Also, with Punycode, what happens if Punycode somehow does get written to disk? How does another OS read it? URI encoding written to disk by mistake is still readable by a human; any encoding used on filenames must be human-readable, in case it is written to disc when it should not be. In fact, Punycode may be in most web browsers, but it is mostly not used even for IRIs, because it is too heavy.

I am open to encodings other than URI. They need to meet the following:
1) Not fight with POSIX shells.
2) Be human-readable, in case of "oops, I got written to disk when I should not have been."
3) Already be in use on most platforms for file operations somewhere.
4) Be able to sit in a partially decoded state without causing massive processing load.
URI encoding passes all of these; Punycode is not even close.
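The %C3%80 claim is easy to check with Python's URI helpers (a stand-in; the note names no implementation). Percent-escapes carry raw UTF-8 bytes, so no separate Unicode layer is involved:

```python
from urllib.parse import quote, unquote

# LATIN CAPITAL LETTER A WITH GRAVE is the UTF-8 byte pair C3 80,
# so its fully escaped form is exactly the %C3%80 mentioned above.
print(quote('\u00C0'))    # -> %C3%80
print(unquote('%C3%80'))  # decodes back to the same character
```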
(0001153) bhaible (reporter) 2012-02-25 10:36 |
You should forget about this proposal today rather than tomorrow. Why? Because it would introduce an uncountable number of bugs and security issues in all programs that deal with filenames. The big problem with URI encoding is that there is a concept of "raw filename" and a concept of "encoded filename", but a program which receives a filename as a "const char *" string does not know which of the two it has.

* The encode function is not idempotent: encode(encode(s)) can differ from encode(s). For example, if s is "foo bar", encode(s) = "foo%20bar" and encode(encode(s)) = "foo%2520bar".
* The decode function is not idempotent: decode(decode(s)) can differ from decode(s). For example, if s is "foo%2520bar", decode(s) = "foo%20bar" and decode(decode(s)) = "foo bar".
* Therefore, when a program receives a filename such as "foo%20bar", it cannot know whether it denotes a raw or an encoded filename.
* Both raw and encoded filenames would be ASCII-compatible, and therefore encoded in a way that cannot be superficially distinguished.
* In programming languages, both would be "const char *". You will not get a compiler warning when you pass an encoded filename to a function/method that expects a raw filename, or vice versa.

Basically, this proposal attempts to bring the security issues of web applications (think of cross-site scripting) into the world of POSIX applications. The distinction between raw and encoded filenames has been realized in the Java programming language, by Sun: for an object of type java.io.File, f.toURL() returns a URL with the raw filename, whereas f.toURI().toURL() returns a URL with the encoded filename. As a result, programs which work with filenames in the form of URLs require a complete source-code review if they do not want bugs when dealing with filenames such as "foo%20bar".
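The non-idempotence point can be reproduced directly with Python's quote/unquote; any RFC 3986 codec behaves the same way:

```python
from urllib.parse import quote, unquote

s = 'foo bar'
once = quote(s)      # 'foo%20bar'
twice = quote(once)  # 'foo%2520bar': the '%' itself is re-encoded
print(once, twice)

# Decoding peels one layer at a time, so a program handed 'foo%20bar'
# cannot tell whether it is raw (a file literally so named) or encoded.
print(unquote(twice))  # back to 'foo%20bar'
print(unquote(once))   # back to 'foo bar'
```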
(0001154) oiaohm (reporter) 2012-02-25 11:53 |
bhaible: that is the only bug, and it is cured by never decoding %25 in a POSIX filename string. As I said above, "I might need to alter those ... to add the fact if you wish to use %, <slash> and Null in a filename you have to encode them"; in fact you should never decode them. I was aware of this issue and have it covered. NUL, <slash>, and % must always remain encoded in a filename, and that cures the problem: you cannot place NUL in a NUL-terminated string, and <slash> cannot appear or splitting paths becomes impossible. Three characters forbidden in decoded form, instead of two forbidden entirely; yet with the change you can still place all three into a raw filename without issue.

On "The encode function is not idempotent ...": this double stacking only happens because the encode and decode are dumb. Two rules stop the problem dead:
1) Make %25 a never-decode, and the decode issue disappears; NUL and <slash> also belong in this camp.
2) Rule that a % followed by two hex digits is already encoded, and is not encoded again.
With these two rules, "foo%2520bar" corresponds to "foo%20bar" only at the raw level but is always "foo%2520bar" in a POSIX filename string, and "foo%20bar" is always "foo bar". No mix-up, no confusion. Locking % as %25 basically allows you to encode and decode as much as you like on the application side; decoding %25 is like decoding %00, in that special handling of true raw is required. URI encoding/decoding can be made safe with two rules, one applied to encoding and one to decoding.

On "Basically, this proposal attempts to bring the security issues of web applications ... into the world of POSIX applications": we already have this problem in the shell. Raw filenames, once you support NUL and slashes, are far more complex than what java.io.File handles: you must use counted strings instead of NUL-terminated ones, and the name becomes a linked struct, because the separator ceases to exist in raw form. I am never giving the application pure raw, nor accepting pure raw from it, though pure-raw support could of course be enabled. There does need to be a distinction between pure raw filenames and encoded ones: the simple fact is that a true pure-raw filename using all characters cannot be held in a NUL-terminated string. I just do not see many applications ever needing to handle true raw.
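The "two rules" can be sketched as an encoder that refuses to touch anything already matching %[hex][hex]; the set of characters escaped here is illustrative only, not the standard's:

```python
import re

HEXPAIR = re.compile('[0-9A-Fa-f]{2}')
DANGEROUS = ' \t\n%/'  # illustrative subset of the characters to escape

def stack_safe_encode(s: str) -> str:
    """Encode dangerous characters, but skip '%' sequences that already
    look like an escape, so encoding twice changes nothing."""
    out, i = [], 0
    while i < len(s):
        c = s[i]
        if c == '%' and HEXPAIR.fullmatch(s[i + 1:i + 3]):
            out.append(s[i:i + 3])  # keep the existing escape as-is
            i += 3
        elif c in DANGEROUS:
            out.append(f'%{ord(c):02X}')
            i += 1
        else:
            out.append(c)
            i += 1
    return ''.join(out)

print(stack_safe_encode('foo bar'))    # -> foo%20bar
print(stack_safe_encode('foo%20bar'))  # -> foo%20bar (idempotent)
```

The price is visible immediately: a file literally named foo%20bar now encodes to the same string as the encoding of foo bar, which is exactly the collision bhaible's argument below predicts.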
(0001155) bhaible (reporter) 2012-02-25 14:03 |
> These two very simple rules stop the double stacking issue dead.

No. Regardless of how you define the details, there will be confusion between raw and encoded filenames. You want a function 'encode' that maps filenames to filenames, and a function 'decode' that also maps filenames to filenames, such that decode(encode(f)) = f for all filenames f; and you want encode(A) != A for some particular filename A (such as a filename which contains a newline). Now look at the filenames B = encode(A) and C = encode(encode(A)); or at B = encode(encode(A)) and C = encode(encode(encode(A))). What happens if both files B and C exist on disk? When a user asks a program to open or erase the file C, will it open or erase decode(C), that is B, or C itself? You cannot avoid this confusion, regardless of how you define the details of the functions 'encode' and 'decode'.

> I don't see many applications ever need to be handling true raw.

Should the system call open() take a raw filename or an encoded filename as its argument?
(0001156) oiaohm (reporter) 2012-02-26 11:31 |
bhaible: "Should the system call open() take a raw filename or an encoded filename as argument?" A pure raw filename containing NUL or slash cannot be passed into open() now. For fopen to handle every single character, it has to take names that are at least minimally encoded: three characters URI-encoded, namely %, NUL, and slash. Every other character can be passed unencoded. So application authors, unless they choose to, never have to touch a fully raw filename; in fact you do not touch raw filenames now, because a NUL-terminated string split by <slash> is not how the filesystem actually looks. For the exec calls, the whole filename would eventually have to be encoded to shell-safe status, though the changeover has to allow some flexibility. If a fully encoded name is passed into fopen, the decode works the same as if it had received only the three-character minimal encoding; there is really no need for fopen and many other functions to insist that everything is encoded on the application side. It also means the set of characters that must be encoded for safety can be revised without breaking things.

If you wish to work with true raw, that is, filenames containing NUL, slash, and unencoded %, we would have to create a new struct, something like:

    struct bfilename { char *name; int length; struct bfilename *next; };

Because %00 allows NUL in a filename, no NUL-terminated string can store it, and <slash> in a filename means no separator character in the string either. This kind of struct is very close to what the raw disc holds: in most filesystems the path separator does not exist on disc as /; it exists as a link between structs. It is not a character; POSIX just happens to represent it as one. The only reason most filesystems cannot offer / in filenames on POSIX systems is that POSIX uses it as the separator and there is no way to represent it that will not upset POSIX. %00 is trickier, since many filesystems use NUL-terminated strings internally, so an attempt to store %00 would have to be rejected by most filesystems.

bhaible: with a safe encode, B = encode(encode(A)) and C = encode(encode(encode(A))) are 100 percent equal. If encoding really keeps changing your data, you are not using the encode I am defining, but a raw, unsafe encode. Per the rule "if % in a string is followed by two hex digits it is still encoded": when encoding, do not encode again anything matching the encoded pattern. Run unsafe encodes and decodes and you can stuff things up badly; that is why many URI implementations use safe encodes and decodes. This is more an implementation detail than a major problem. The only places where the three forbidden characters should ever appear decoded are inside the POSIX-to-OS interface, or inside program variables known to be fully decoded. There is one character on a POSIX filename or URI that the encode never changes: /. /hello%2Fworld/ and /hello/world/ are two different paths under safe URI encoding. Application authors have three characters to worry about passing correctly: %, <slash>, and NUL. If an application never decodes %2F, it never has to re-tokenize the path; likewise, decoding % or NUL requires special handling, with each section between the / characters kept in its own variable. A raw %, NUL, or / making up part of a name, unencoded, is simply not a POSIX filename. Some stray % sequences can be handled, but once you start re-encoding % you are risking issues. If you wanted to be really strict, a % not followed by two hex digits could be an error for functions like fopen; at this stage I would prefer to avoid that, purely to minimize disruption to old applications. With safe encoding, applications can choose to decode the characters they can safely display and leave the ones they cannot; the safe-decode set could differ from application to application.
(0001157) oiaohm (reporter) 2012-02-26 11:41 |
bhaible: the big, important thing is that POSIX does not handle raw filenames now. POSIX only ever works with a representation of a filename and path. The lack of a standard, defined way to encode filenames creates bugs like a space in a filename causing the name to be tokenized wrongly on the command line. With URI encoding, if character A causes trouble because something uses A to split on, you can encode A. The only alternative to adding encoding is to start forbidding lots of characters in filenames; no more spaces in filenames? I do not think that would go down too well, bhaible. OK, I have not written the fine details up perfectly; this is my first attempt ever at submitting something to the POSIX standard, and English is not something I am particularly good at. If someone wants to write it up better, I will be very happy. But it has to address the issue of bad token splitting by shells and the other failures that result in system misbehaviour.
(0001158) bhaible (reporter) 2012-02-26 14:45 |
> B = encode(encode(A)) and C = encode(encode(encode(A))). With a safe encode B and C are 100 percent equal.

No. If B = C, then encode(A) = decode(B) = decode(C) = encode(encode(A)), and then A = decode(encode(A)) = decode(encode(encode(A))) = encode(A), contrary to the assumption of how A was chosen. This is a MATHEMATICAL CONCLUSION: if you have the concepts of "raw filenames" and "encoded filenames" and two functions 'encode' and 'decode' as described above, then either the 'encode' function is the identity function, in which case you do not need to talk about it so extensively, or there are filenames A with encode(A) != A, and for all such filenames there will be confusion whenever both B = encode(A) and C = encode(encode(A)) exist on disk. This holds regardless of whether the 'encode' function is based on URI escaping, hex escaping, upper-casing, or anything else. It is the concept of "raw filenames" versus "encoded filenames" that is broken.
(0001159) bkorb (reporter) 2012-02-26 17:42 |
I propose we give the guy a break: "This is my first attempt ever submitting something to Posix". So instead of purely throwing up roadblocks with fairly trivial workarounds, please also explain why the workarounds will not work. Fundamentally, Bruno is saying that the strings would need to be marked in some way as encoded or not encoded. For example, if the first character of the name were some control character, the name is encoded; otherwise not. Given that there is a "competing" proposal on the table that would proscribe NUL through \x1f, one of those characters could be reserved as a magic marker. This would resolve Bruno's objections, but it would have POSIX inventing new stuff, which POSIX normally (though not always) tends to avoid.
(0001160) bkorb (reporter) 2012-02-26 17:46 |
\x1A SUB (substitute :) |
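bkorb's marker idea can be sketched like this; the choice of \x1A and the use of Python's quote/unquote are illustrative, not anything POSIX defines:

```python
from urllib.parse import quote, unquote

MARK = '\x1a'  # SUB, per bkorb's suggestion: reserved to mean "encoded"

def mark_encode(raw: str) -> str:
    """Percent-escape everything unsafe and prefix the marker byte."""
    return MARK + quote(raw, safe='')

def mark_decode(name: str) -> str:
    """Only strings carrying the marker are decoded; anything else is raw."""
    return unquote(name[1:]) if name.startswith(MARK) else name

print(mark_decode(mark_encode('foo bar')))  # round-trips to 'foo bar'
print(mark_decode('foo%20bar'))             # unmarked, so left as raw
```

Because the marker, not the string's content, says which form it is in, the raw/encoded ambiguity bhaible describes disappears, at the cost of reserving one control byte.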
(0001161) oiaohm (reporter) 2012-02-27 02:15 |
I will point out something big and critical. On a Linux I get a diskeditor I insert <slash> into a filename at raw level on any ext filesystem. I am now screwed by posix. I have now created a file that is now read write delete protected from everyone. Because posix says that <slash> is a directory separator and there is no other way to represent<slash> to send it to userspace. Posix has a presume that might not be reality. Presume that filename will not contain Null and <slash> might not be true. If you are attempt to put raw into fopen you are kidding yourself because the OS to posix interface has already process the raw filesystem struct and encoded it in a very particular way. When it goes back to the raw filesystem it has to be decoded. What I am proposing is adding a little more to the encoding and decoding at this level. Everything above this just has to cope. Full raw to posix should only been done at the interface between posix and the OS. Applications never provide full raw to the POSIX API. In fact in the Null and slash cases application cannot provide raw of them at moment. \x[hex][hex] runs into the problem that the posix shell currently processes them out of existence. We need something the posix shell will leave alone at let get to the filesystem level where posix filename gets converted to raw disk filename and back. % from URI is left alone by posix shells. Also if you search a filesystem for % is a rarely used char. There are two things we are going to kick in the teeth that I know gconf and smarty. gconf I can let slide. There is a reason. because its always %gconf The two chars following the % are not both hex so raw error there is detectable so the correct filename can be written to disc %gconf on disc. Remember when read back from disc unless application decodes % will appear as %25 from a directory listing so %25gconf will be the directory listing. 
smarty that is trickier %%hex is in its files so smarty will be broken by change funny enough reinstalled it will still manage to work. There are going to be a few of these cases where the raw error is not detectable yet application still works. But you cannot make a omelette without breaking a few eggs. bhaible if we are only looking at a handful of broken applications out of the many thousands out there I don't see it as a big problem. There are going to be some that will be positively effected. Nautilus it metafiles are written to disc as URI encoded. So now these filenames on disc will be smaller. Stack of chars the filesystem can safely store as solo chars are not being stored. I have messed a little with this using a fuse filesystem to emulate the change. So it works with quite min breakage from everything I have seen so far. Little bit of pain with a few broken cases here and there we are not talking a super huge number of broken cases. Currently posix has in-band attacks against its shell. Leading to Posix shell doing bad and evil things to end users. No unified encoding to prevent this. The in-band attacks is a huge number of broken cases where users are having things go wrong for them. Worse is some of these in-band attacks on the Posix shell are coming from outside. So if you can uri encode everything that you from web in php that you are now passing to a system to do something that nasty injection attack with -arguement feed in does not work. We have problem here. No solution is going to be 100 percent painless. I know this we are going to break something I know of 2 it could hurt a little. NUL through \x1A blocked does not help us fix the shell disaster we have fully. The reason why this has not been fixed is because the path forward equals some broken. Result of the some broken now is the problem fixed forever more. Yes long term I would make % null and <slash> that are not encoded so like the %gconf = error. 
Short term we have to be a little lax and allow some things that are not exactly wise. Note that blocking NUL through \x1A just gives people more ways to write a file to disc that we cannot read, write or modify even as root. http://www.youtube.com/watch?v=v8F8BqSa-XY [^] is a good watch. This fault has been written about since the earliest Unix and POSIX books; it keeps being swept under the carpet in the hope it will go away. It is well and truly overdue to be addressed properly, and given how long we have left it, it is surprising how few applications would be badly affected by correcting it. If you use a safe encode and a safe decode, then even where the filenames on disk are not exactly right, the applications still work. Even smarty does not break completely with safe encode and decode: in %%[hex][hex], the first percent is detected as an error and written to disc as a literal percent, and the second is decoded to a character; the encode then prints %25[char] in a directory listing, yet smarty's open request for %%[hex][hex] still works. This is where the encoding and decoding get strange: there will be errors, but most of them will not prevent programs from running. Of course, in time we would want coders to change their applications to stop running into this. bhaible, your whole presumption is that filenames are not already encoded. Come back to reality: filenames are encoded; the only question is whether there is enough encoding to handle every case at the raw disc level. The answer is no, there is not. The next question is whether there is an issue with the token system in the shell: yes, there is. Tokens in a stream should be escaped, and they are not. We have two major issues here. 1) Stuff that should be escaped to prevent tokenising processes from going wrong is not escaped. 2) We cannot always display what is on disc. bhaible, if you don't like my solution, you are free to open a new issue and provide a solution that solves them. 
Don't throw that mathematical proof at me. That proof only applies to encoders and decoders that don't check their input. Add a checking rule to the %[hex][hex] encoder: on finding a %, check whether the next two characters are hex digits, and if they are, don't encode the %. This is what I call stack-proof encoding: it resists stacking, because the second encode becomes a no-op. For plain URI encoding, decode(encode(encode(A))) = encode(A) does not hold; with a stack-proof encode it effectively does. A safe decode additionally refuses to decode particular characters: in the URI case a safe decode will not decode %, so %25 is off the decode list, and %00 and the <slash> escape would also be off the decode list for POSIX and string sanity. With those rules, decode(encode(encode(A))) = minencode(A), where minencode(A) is a URI encode that encodes only the %, <slash> and NUL characters; if A contains none of those characters, A = A. The only place for the unsafe decode is right before the name goes into filesystem structs, and likewise the unsafe encode belongs only where the name leaves filesystem structs. So there is a mathematically safe way to stack encoding and decoding; URI encoding can be made mathematically safe as long as you obey a few rules. The rules make the encoding and decoding no-op when they should; all stack-safe encoding and decoding does this, and detecting when to no-op is part of it. So if a person encodes twice, bad things don't happen. Note that not all types of encoding can be made stack-safe. bkorb, the control characters I have taken to detect encoding are the %[hex][hex] sequence itself. I know there will be some side effects from doing this, but they should be less harmful than the issues we keep by doing nothing; taking a dedicated control character could also run us into nasty usage cases. |
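The stack-proof behaviour described above can be sketched in a few lines (a hypothetical illustration of the idea, not text from the proposal): an encoder that leaves an existing %[hex][hex] sequence alone becomes idempotent, so a second encode is a no-op, and %gconf is detectably malformed.

```python
import re

# Bytes this sketch treats as unsafe in a filename:
# NUL, slash, and the escape character itself.
UNSAFE = {0x00, 0x2F, 0x25}

HEXPAIR = re.compile(rb'%[0-9A-Fa-f]{2}')

def stack_proof_encode(name: bytes) -> bytes:
    """Percent-encode unsafe bytes, but leave a '%' alone when it
    already starts a valid %XX escape -- so encoding twice is harmless."""
    out = bytearray()
    i = 0
    while i < len(name):
        b = name[i]
        if b == 0x25 and HEXPAIR.match(name, i):
            out += name[i:i + 3]  # already-encoded sequence: copy verbatim
            i += 3
        elif b in UNSAFE:
            out += b'%%%02X' % b  # e.g. 0x2F -> b'%2F'
            i += 1
        else:
            out.append(b)
            i += 1
    return bytes(out)

once = stack_proof_encode(b'dog/cat\x00')
twice = stack_proof_encode(once)
assert once == twice == b'dog%2Fcat%00'
# '%' not followed by two hex digits is a detectable raw error:
assert stack_proof_encode(b'%gconf') == b'%25gconf'
```

A plain URI encoder would turn the already-encoded `%2F` into `%252F` on the second pass; the check on the two following characters is exactly what makes the stacking safe.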
(0001162) oiaohm (reporter) 2012-02-27 02:40 |
I missed one thing: I do have a Unicode check as well. If the hex value is above 127, check how many bytes the UTF-8 sequence it refers to should have; that is another way to detect a percent error, because the right number of following %-escapes will not be there. The filter needed to detect errors with URI encoding is not that bad, though of course it would be simpler if there were no stray % characters to hunt down. At implementation time we have to presume there will be a few programs out there still using % in ways that are not right, and do the best we can to cope. It is all about error management: fewer errors overall will exist after my change than exist now. Of course there will be a few unhappy people who have to correct their programs. |
(0001163) oiaohm (reporter) 2012-02-29 02:42 |
The big thing here is that once we have an encoding that is standard, then as long as the filesystem does not forbid %, 0-9, a-f and A-F, we can write files encoded. So a case like NTFS restricting filenames is not a problem, other than that the names written to disc will be longer because of the restrictions placed on us. This way we can make more characters portable between platforms without worrying so much about the filesystem underneath. Even when a collision happens, URI encoding will not make the files unreadable; it might confuse an application, but it does not become a magic place to hide data. It might also be useful to add a couple of functions: one to ask which filename characters a given directory supports natively, and one to tell you whether a filename is an acceptable size for the filesystem. Both would be useful now, for example if you are stuck with an 8.3 filesystem namespace for some reason. Once we have the encoding we can bed this in very well and expand the list of portable characters for POSIX usage. |
(0001164) markh (reporter) 2012-02-29 08:30 |
Do not underestimate the security holes that could be introduced by accepting multiple byte strings to refer to the same file. This was the subject of a number of UTF-8-related security vulnerabilities when UTF-8 first became popular 10-20 years ago. Early UTF-8 decoders accepted overlong sequences, so they were happy to convert C0 AE into U+002E (ASCII '.') for example. Perhaps the most notorious example was the Microsoft IIS web server, which checked for ".." to prevent access to the parent directory, but C0 AE C0 AE was passed through to the OS where it was treated the same as ".." allowing any web user access to the entire disk. Vulnerabilities of this sort led to the UTF-8 specifications being amended to require rejection of overlong sequences, so only the two byte sequence ".." could end up translating to that string. Despite this, vulnerabilities of this sort still pop up from time to time; similar vulnerabilities were discovered and corrected in PHP and in Java's JVM just a few years ago. I can imagine that there may be numerous POSIX applications that would have a similar vulnerability if alternate strings such as %2E%2E could be used to bypass existing security checks. Unlike the UTF-8 fix which was to simply disallow non-canonical representations, this enhancement request suggests that both representations should be allowed, which means that in this case the application has to know to check for both representations. Even in an application that was specifically modified to also check for %2E%2E, it could easily have been neglected to make the check case insensitive or to check for all of the combinations ".%2e" and "%2e.". Unlike %-encoding in URLs, the application cannot simply arrange to perform such checks after all decoding, since the implementation performs the decoding at a lower level. Although the ".." vulnerability could potentially be avoided by requiring the implementation to not accept %-escapes to refer to "." 
or "..", these are not the only strings that applications look for. For example some applications store files containing user-defined content in a directory which also contains internal files. The internal files may have particular names, or a certain prefix (e.g. ! or _) or suffix, which the application prohibits in names specified by the user. However if the user is able to specify a file name such as %21private_key to bypass the application's check on names beginning with '!', new security vulnerabilities could be introduced. |
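markh's overlong-sequence example is easy to reproduce (a small illustration of the point, not code from any proposal): a modern strict UTF-8 decoder rejects C0 AE, whereas the early permissive decoders turned it into '.', letting C0 AE C0 AE slip past a byte-level check for "..".

```python
# C0 AE is an overlong (two-byte) encoding of U+002E '.'.
overlong_dotdot = b'\xc0\xae\xc0\xae'

# A naive security check on the raw bytes never sees "..":
assert b'..' not in overlong_dotdot

# A strict decoder, as required by the amended UTF-8 specification,
# rejects the overlong sequence outright:
try:
    overlong_dotdot.decode('utf-8')
    rejected = False
except UnicodeDecodeError:
    rejected = True
assert rejected

# A permissive pre-fix decoder would have produced ".." -- the exact
# string the check was looking for -- only after the check had run.
```

The same shape of bug is what the %2E%2E concern describes: two byte strings naming one file, with the security check seeing only one of them.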
(0001165) oiaohm (reporter) 2012-02-29 09:38 |
The . and .. cases I had forgotten to cover. The question is whether . and .. are real entries or generated ones; that is how I split things, and the answer is that they are generated, the same as <slash>. Yes, another way to create a file you cannot see on a filesystem is to name it . or .. with direct disc editing. Could a .. file exist on a filesystem as a real file? Yes. Can we represent it in an accessible form at the moment? No, we cannot. The encoding is for the real on-disc name, not for items like <slash>, . and .., since by the time you come to decoding, <slash>, . and .. should already have been processed out. So "%2e." would signal a real file or directory named .. on the disc itself, and the same with the reverse; it is not a directory change. I had forgotten that . and .. must be in the same class as <slash>: when encoded, they refer to a real file on disc. This is where the problems come in: at the moment there can be real files on disc that POSIX gives us no way to access, and that is not good for security. Let me make this clear: there would be no limit on which characters can appear as %[hex][hex], but when you use %[hex][hex] you are referring to on-disc storage, not to anything that does not exist in the on-disc storage. So there is no need to prevent %2E%2E or %2E from existing, since . and .. do not exist on disc in most filesystems; on filesystems where they do exist as real entries pointing to the . and .. structs, the lookup should fail, because an encoded . or .. must never act as a directory change unless it is a user-created directory. markh, this brings us to a serious point: "example some applications store files containing user-defined content in a directory which also contains internal files. The internal files may have particular names, or a certain prefix (e.g. ! or _) or suffix, which the application prohibits in names specified by the user." Is that ever safe? The answer, scarily, is never. It is poor design that opens Pandora's box: if whether a file is secret depends on the start of its filename, you are asking to lose the data. 
User data and application data should not be in the same directory in the first place; that way, if something does go wrong and the user can write a file, they cannot ruin things badly. Say %21private_key opened a hole: that means the directory had to be writable or readable by that user under OS security when it should not have been, so given any bug the user could have overwritten or accessed !private_key anyway. These are Pandora's boxes we need found and exterminated. If I break this kind of thing, I don't give much of a rats, markh: it is bad and it should not be done. During migration there might be grounds for a Pandora's-box allowance, not something I am particularly happy with, since in my eyes the applications should be fixed: restrict what gets encoded, so that old applications using stupid things like a ! prefix or a suffix instead of directories to sort files have time to fix themselves -- say, an environment variable listing which characters are to be encoded and may be encoded. A percent clash I cannot solve this way, though it might cure other issues temporarily. Long term, applications will need to get used to the fact that %stuff will be there forever more. Yes, the issue you are referring to is a hidden Pandora's box, very often created by bad web coders. A formal layout of directories is required to reduce possible damage when something goes wrong, so security can be set on directories correctly; sorting by directory for security is simpler than sorting by file. Net-writable directories and net-application directories should be two different directories. |
(0001166) oiaohm (reporter) 2012-02-29 09:58 |
markh, I need to go and look at the UTF-8 encoding spec. If C0 no longer down-codes, that solves my percent-collision problem: C0 25 would not equal 25 in a normal string. C0 could be used with everything except 0-31, or possibly C0 could be used instead of the percent completely, with C0[hex][hex] covering 0-1F. I had not realised C0 was usable. Of course, this is most likely not portable to systems not using UTF-8 in any way, shape or form. |
(0001167) msbrown (manager) 2012-03-08 16:52 |
The underlying issue is the same as that being addressed in 0000251 and this is being closed as a duplicate of that report. |
(0001169) Don Cragun (manager) 2012-03-08 17:12 |
During the Austin Group call today, the participants agreed that the points noted by Konrad Schwartz are important considerations against the proposed change. It should also be noted that by the definition of pathname resolution, any file that has its name changed by editing a filesystem to create a filename that contains a <slash> character can never be accessed (including by open(), stat(), and unlink()) on any conforming implementation. If a user corrupted a filesystem by editing the underlying device, they can correct it the same way (and the standard doesn't need to provide a way for applications or users to deal with this situation). For easy future reference, Konrad's comments were: If any sort of escaping mechanism is to be introduced, it must be one already defined by POSIX, e.g., the C escape mechanism or, since we are at the shell level, $'...'. However, it seems to me that the proposed -q flag would also require escaping of characters and words special to the shell, to prevent inadvertent interpretation when using eval. The quoting mechanism would have to be simplistic, quoting every filename with $'...', to allow for future expansion of the Shell. Furthermore, I think it needs to be restated that the problem doesn't actually exist for "find": find ... -exec ... {} + has no problems with filenames with special characters. In general, none of the proposals deal with character encoding issues. Extending the Kernel to allow file names . or .. or with / is wrong, since then there is no convenient transfer syntax available to specify the current directory, the parent directory, or file paths, or alternatively would require all programs to implement $'...' quoting. Finally, I agree with David Holland that the whole thing seems seriously misguided. |
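Konrad's point about find -exec {} + can be demonstrated directly (an illustration with made-up filenames, driven from Python for reproducibility): find hands each pathname to the invoked utility as a single, unsplit argument, so spaces and leading dashes cause no trouble.

```python
import os
import subprocess
import tempfile

# Create filenames containing a space and a leading dash.
d = tempfile.mkdtemp()
for name in ('dog cat', '-rf', 'plain'):
    open(os.path.join(d, name), 'w').close()

# find ... -exec utility ... {} + appends the pathnames as separate
# argv entries; no shell word splitting is involved at any point.
out = subprocess.run(
    ['find', d, '-type', 'f', '-exec', 'printf', '[%s]\\n', '{}', '+'],
    capture_output=True, text=True, check=True)

lines = out.stdout.splitlines()
assert '[%s]' % os.path.join(d, 'dog cat') in lines
assert '[%s]' % os.path.join(d, '-rf') in lines
```

The breakage oiaohm describes below only reappears when the exec'd utility is itself a shell script that re-splits its arguments.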
(0001171) oiaohm (reporter) 2012-03-08 22:00 |
Don Cragun, "Forbid newline, or even bytes 1 through 31 (inclusive), in filenames" is also seriously misguided, more seriously misguided than this proposal. As for "If a user corrupted a filesystem by editing the underlying device, they can correct it the same way (and the standard doesn't need to provide a way for applications or users to deal with this situation)": the real problem is that the user might never have done it. Let me give a nightmare scenario. Say Windows or the Mac decided to allow <slash> in their filenames, and such names then ended up burned onto a pressed CD or something similar. A POSIX system effectively cannot open it. This is why David A. Wheeler's solution of banning things is seriously misguided: if I want to make media that normal end users of a POSIX system cannot open, I just use whatever your filename rules forbid, and that prevents a POSIX system from opening the media. On "Extending the Kernel to allow file names . or .. or with / is wrong, since then there is no convenient transfer syntax available to specify the current directory the parent directory, or file paths, or alternatively would require all programs to implement $'...' quoting": in fact this is wrong. This is exactly why, in my scheme, .. and an encoded .. do not match, so the convenient transfer syntax is still there. I am really talking about the layers from the kernel filesystem driver up, where . and .. and / are not files; since they are not files, they are not encoded. The same point again: if I want to make media for a games console or anything else that POSIX systems cannot handle, whatever you forbid I can use to make that media unreadable on a POSIX system. As for find ... -exec ... {} +: if the exec'd program is a shell script and it tokenises somewhere on spaces or newlines, it breaks again. That is not a solution to the problem; it is a hack that covers one case and does not address the shell's mishandling. Yes, an rm $1 in a shell script that find calls still malfunctions on a file like "dog cat". 
It deletes the files dog and cat but does not delete the file "dog cat". A rule of secure coding: do not tokenise on anything that can appear inside an input that should stay in one piece. The current filesystem supplies content containing exactly the characters the shell tokenises on, and we cannot forbid space in filenames. There is a major tokenising issue with the shell: filenames can contain the spaces the shell splits on, which creates errors that people miss. msbrown, if this is not the solution, I know 251 is not the solution either; close both and open a new issue listing what we have to address. The issue is that the current POSIX design breaks what you have to do for secure coding. POSIX design has a built-in weird machine that needs to be fixed; the interaction between shell and filesystem has the makings of a full-blown weird machine. www.youtube.com/watch?v=3kEfedtQVOY 251 does not cover space or the other characters that can cause the shell to malfunction. Yes, we have a security issue designed into the POSIX spec. How are we going to remove it? 251 does not address it; my solution does, and you don't like it. |
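The rm $1 failure mode described here is easy to show (an illustration with hypothetical filenames, driven from Python so the output can be checked): an unquoted $1 undergoes field splitting, while "$1" does not.

```python
import subprocess

def sh(script: str, arg: str) -> str:
    """Run a small sh script with arg bound to $1 and return its stdout."""
    return subprocess.run(['sh', '-c', script, 'sh', arg],
                          capture_output=True, text=True).stdout

# Unquoted $1 is field-split on whitespace: "dog cat" becomes two
# words, so a script doing `rm $1` would try to remove dog and cat.
assert sh('for w in $1; do echo "$w"; done', 'dog cat') == 'dog\ncat\n'

# Quoted "$1" stays one word, as the script writer intended.
assert sh('for w in "$1"; do echo "$w"; done', 'dog cat') == 'dog cat\n'
```

This is the behaviour both sides of the thread agree on; the disagreement is over whether the fix belongs in the script, the shell, or the filename space.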
(0001172) oiaohm (reporter) 2012-03-08 22:24 |
msbrown, you have it wrong: this is not the same issue as 251. The two issues overlap, yes, but I have seen that the problem is far broader than 251, and once you start banning every filename character the shell tokenises on or otherwise treats specially -- <space> ! # ${} and so on -- the remaining filenames are unworkable. A long list of characters would have to be forbidden along the 251 path to make the shell safe. Don Cragun, the C escape mechanism is not usable, since the shell will process it out of existence; the filesystem and the shell together are the makings of this weird machine. To go the C escape route, every shell would have to be altered. If that is the path you want to take, fine: as long as we kill the weird machine dead, I am happy. While the weird machine exists we have a problem that needs a solution, and since 251 does not address the weird machine, it is a different bug. Sorry, this is a broad-view problem: the shell and the filesystem are both linked to it, and one or both has to be changed to prevent these weird machines. Don Cragun, please try to think about how to address the weird machine POSIX has part-built. This is a serious problem, and addressing it is going to cause disruption that must be managed. |
(0001173) oiaohm (reporter) 2012-03-08 22:34 |
The problem I have is that whatever encoding cures this, the shell must not decode it, because a shell can call another shell inside itself. Since C escapes are currently decoded by the shell, to use C escapes I would have to introduce a rule forbidding shells from decoding them any more. I had believed that making such a radical alteration to POSIX would be rejected. If I am wrong, I will change this proposal to use C escapes and put forward a bigger proposal to alter the rules for being a POSIX shell as well. |
(0001179) Don Cragun (manager) 2012-03-29 23:10 edited on: 2012-03-29 23:19 |
Peter, After discussing your comments during the conference call today, we believe that there may be a language barrier that is causing some of our comments to be misunderstood. We would be happy to have you join us on our next conference call so that we can more easily discuss the issue if the comments below do not make clear why we are rejecting this proposal. The meeting announcement giving the time and phone numbers is sent to the austin-group-l alias before every meeting. 1. We agree that the suggested fix proposed by the submitter of bug 251 is not appropriate for the standard. Nonetheless, we believe that it raises a valid concern (especially with allowing <newline> in filenames) and we are going to use that bug to make changes to the standard to address the issues raised in 0000251 and this bug. Therefore, we are closing this bug AGAIN as a duplicate of 0000251. 2. Any proposal that allows a "/" (<slash>) character or a null byte in a filename will be rejected. The use of "/" as a filename separator in a pathname is inherent in the definition of pathname resolution and will not be changed. 3. Any proposal that includes encoding of pathnames that must be performed by users interacting with the system (whether by providing input requested by a shell script, a C program, or a graphical user interface) will not be accepted. Naive users won't know what to do. Even experienced users would need to know what type of application was being used to determine what, if any, encoding is required. Furthermore, if the encoding is not idempotent (and the proposed encoding is not) there is too much chance for a user (or the operating system) to be confused as to when encoding is needed and when it has already been performed. 4. A lot of what the submitter suggested as the fix for bug 251 is to make it easier for careless shell script writers to avoid problems. 
Almost everything proposed in the suggested fix provided by the submitter of that bug can be avoided if the script writer uses existing standard shell features to quote pathnames that may contain special characters. If you look at the notes in 0000251 you will see that the direction that we are taking is that "/" and null have to remain as illegal characters in a filename AND <newline> needs to be added to the list of characters that cannot appear in a filename. We don't care if a filename may contain a character that is special to the shell because shell scripts should quote any shell variables that could contain problematic characters. If a shell script blindly uses $1 instead of "$1" when processing a command line argument supplied by the user, that is a shell script bug, not a bug in the standard. 5. In the context of www.youtube.com/watch?v=3kEfedtQVOY, the definition of a pathname in the POSIX standard does not create a "weird machine". A pathname is a string containing filenames separated by "/" characters terminated by a null byte. Some implementations further restrict filenames by disallowing other characters, but that doesn't make the parser for a pathname "weird" (it just makes filenames that contain characters that are not in the portable filename character set non-portable). This has been true since the first POSIX standard well over thirty years ago. We see no need to change it now. 6. Allowing <space> (and other characters in a filename that have special meaning to the shell) does not cause problems for C program input nor for correctly written shell scripts that use appropriate quoting mechanisms. (However, allowing <newline> in a filename does cause ambiguities that will be resolved in the next revision of the standard by fixes that will be supplied in response to bug 251.) Careless and lazy shell script writers (as well as careless and lazy programmers using any other language) often create insecure applications. 
Having a standard doesn't (and never will) prevent that. 7. Yes, users can create media that is not readable or usable on POSIX systems. Yes, privileged users can scribble on a filesystem and make it unreadable or unusable on POSIX systems. Yes, users could create a filesystem containing one or more files that contain "/" characters in them. If that happens, those files will not be accessible on a POSIX system. This falls into the case we call "Weirdnix". We do not feel that the standard needs to be changed to accept all possible weirdnix objects that can be mounted onto a POSIX system. Yes, Microsoft could decide to make "/" a valid character in a filename. The likelihood of that is so small that we don't care. When and if that happens, we can reopen this issue. (Note, Apple's OS X can't make "/" a valid character in a filename if Apple wants to keep the UNIX brand it has for conformance to the Single UNIX Specification.) |
(0001180) oiaohm (reporter) 2012-03-30 13:03 |
"A lot of what the submitter suggested as the fix for bug 251 is to make it easier for careless shell script writers to avoid problems." But it is not the careless shell script writer who gets bitten worst: it is the new user of the system, who runs something like rm * and expects something very particular to happen. A new user is not going to know that a given path will cause problems. Splitting $1 and the like into pieces really should not be automatic: if I write rm $1, it should be passed to rm as one argument unless I have specifically told the shell to split it. What makes this a weird machine is that the shell is doing things the user has not directly asked it to do; the user instead has to ask the shell not to do things. Removing \n from the filesystem is not a solution, it is avoidance, and saying the person could have written their shell script better is a bad answer: a new user with no shell skill, for whom the shell does not do what they expect, is bad. There is a design mistake in the shell. "correctly written shell scripts that use appropriate quoting mechanisms" -- note the wording: the user has to know to use the appropriate quoting mechanisms, or surprise, the shell did something different from what they asked. The user-friendly behaviour would be to act as if quoted unless specifically asked otherwise. The \n problem is a related problem: it is still token processing on characters that are not safe to tokenise on, and C programs do have issues with spaces at times. A weird machine is built by making code that appears to do one thing but in fact does something different from what you expect: seeing rm $1, your first thought is that it means rm "$1", not that it is going to be split. Early on, POSIX got something backwards; the problem is I don't have a clue how to fix it correctly. If the default were rm "$1", with splitting only when requested, the problem would not exist in many cases. Note that \n links to end-of-line separator processing; the space token is a different tokenising process in the shell. 
This is why there should be two bugs: one about IFS and one about the generic space-token issue and shell special characters. IFS could be given the option of being set to \0, curing the \n problems at the cost of a lot of programs having to alter their code; IFS means the end-of-line separator should be able to be anything, so any program assuming end of line is \n is technically broken by the spec -- that is 251's territory. The space token is far harder, because there is no control interface to change it: you are stuck with it. |
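The split being drawn here is visible in practice (a small illustration, run through sh from Python): field splitting of an unquoted expansion follows whatever IFS is set to, which is the one tunable part of the shell's tokenising.

```python
import subprocess

def fields(script: str) -> list:
    """Run a small sh script and return its stdout split into lines."""
    return subprocess.run(['sh', '-c', script],
                          capture_output=True, text=True).stdout.splitlines()

# Default IFS splits an unquoted expansion on space/tab/newline.
assert fields('v="a b:c"; set -- $v; printf "%s\\n" "$@"') == ['a', 'b:c']

# Setting IFS changes which characters field splitting uses.
assert fields('v="a b:c"; IFS=:; set -- $v; printf "%s\\n" "$@"') == ['a b', 'c']
```

There is no comparable knob for the shell's other special characters (globbing, option-looking words, quoting), which is the half of the problem the post says 251 leaves untouched.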
(0001182) Don Cragun (manager) 2012-04-01 21:44 |
In response to Note: 0001180: This bug requests making all byte values legal in a filename, and encoding some byte values in some circumstances. This bug is not about the shell's methods of command line parsing. If you don't like the way the standard's shell performs command line parsing, please open a different bug to address that issue. Note, however, that this standard is intended to document existing practice of systems based on the UNIX Operating System. If you want to change the fundamental way the shell performs field splitting, word expansions, command substitutions, etc. in ways that are incompatible with the way the shell has worked for the last 40 years, those changes will not be accepted. If you want to create a new command line interpreter, you need to have it deployed on a commercial system first, be able to provide copyright release to documentation on exactly how your new interpreter works, and get at least one of the three sponsoring organizations of the Austin Group to support inclusion of it in a future revision of the standard. As has already been noted, any attempt to make <slash> and NULL valid characters in a filename will never be accepted as a change to the POSIX standards and the Single UNIX Specifications. That part of this bug is rejected. The other part of this bug is the handling of some characters in filenames; that part of this bug will be resolved by changes that will be made in response to bug number 251. This bug is, therefore, closed again as a duplicate of 0000251. |
(0001183) oiaohm (reporter) 2012-04-02 08:42 |
Don Cragun: "The other part of this bug is the handling of some characters in filenames; that part of this bug will be resolved by changes that will be made in response to bug number 251." That is the problem: 545 can deal with all the characters that 251 addresses, but 251 does not address all the characters 545 does. So they are not duplicates; related, yes. 251 does not attempt to address shell token characters like <space> being in filenames. These are two closely related bugs: one is mostly about the end-of-line character, the other about how arguments are split from the entered line. 251 is upset about the end-of-line character; that is not the main issue of this bug. What I am after here is predictability for the end user, so that rm * does what it appears to suggest, not barfing on a deletion because a name contains a space. rm -i * particularly: as a user you expect it to ask interactively about each file you wish to delete. Now if the first file happens to be named -rf -- still an acceptable filename under 251 -- the rest of the files in the directory get deleted without the user being asked, and -rf remains as a land mine for the next person who tries to delete a file from that directory. These are the kinds of shell characters I am referring to as problems; 251 does not cover what 545 does. If you wish to close 545, a new bug needs to be opened listing the interface issue. The duplicate link is an error, because they are not duplicates; if anything, the duplication runs the other way: 251 could be closed as a duplicate of 545, because 545 covers everything in 251, but the reverse is not true. Basically, stop trying to close this as 251; go read 251 and take clear note that it does not cover space, -, or the other shell and command-line characters that can appear in filenames and cause applications under POSIX to take a wrong turn and do what the user is not expecting. 251 does not address the 545 issue at all. Items like applications not supporting -- also come under the issue I am referring to. This is shell predictability. 
"This bug requests making all byte values legal in a filename, and encoding some byte values in some circumstances." That is because this is the only way I can see to stop the shell and programs reacting to characters they should not be reacting to: an encoded - in a filename prevents a file named -rf doing anything to rm, and so on. You don't like the idea of making all bytes safe; my point is that having a method to make all bytes safe allows shell-malfunction events to be neutralised. Encode this filename and the shell will never split it; it has to be decoded before rm or any other POSIX application can malfunction on filename input. Support for all characters in filenames is a side effect of the solution I worked out. Frankly, I find the idea that supporting all characters is forbidden a poor one, since supporting all characters might be the only way to fix this problem in all cases: a filename passed through PHP, Python or whatever interpreter might require different characters encoded to avoid conflicts. Go back and read the first two lines of 545's description: "This is to solve the problem once and for all of file-names containing chars that cause shell and other programs todo bad and unexpected things. Like rm -i * and a file called -rf in that directory changing this to rm -i -rf <everything else * found>" Don Cragun, "This bug is not about the shell's methods of command line parsing" is flatly untrue: 545 has been about command-line parsing and its issues the entire time, and I made that clear in the first two lines of the description. Even if my proposed solution is wrong, that does not change what the core of this bug is about. This is why this bug is not 251; had you read both descriptions, it would be very clear they are two different bugs. |
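The rm -i * land mine from the description can be reproduced harmlessly in a scratch directory (an illustration only; the ./* and -- forms shown are the usual existing-practice mitigations, not part of this proposal, and the behaviour assumes a rm that parses -rf as options wherever it appears, as GNU rm does):

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()
for name in ('-rf', 'a.txt', 'b.txt'):
    open(os.path.join(d, name), 'w').close()

env = dict(os.environ, LC_ALL='C')  # deterministic glob sort order

# With C collation the glob sorts "-rf" first, so `rm *` becomes
# `rm -rf a.txt b.txt`: the other files are force-removed and the
# land mine itself survives for next time.
subprocess.run('rm *', shell=True, cwd=d, env=env)
assert sorted(os.listdir(d)) == ['-rf']

# Prefixing with ./ (or using `rm -- *`) stops "-rf" being parsed
# as an option, so everything is removed as the user intended.
open(os.path.join(d, 'a.txt'), 'w').close()
subprocess.run('rm ./*', shell=True, cwd=d, env=env)
assert os.listdir(d) == []
```

Both sides of the argument accept this behaviour as real; the dispute is whether the fix is user education, shell changes, or encoding.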
(0001185) agadmin (administrator) 2012-04-02 15:07 |
Closing and locking - Administrator (Mark Brown). Please feel free to attend one of the open/public issue discussion meetings if you wish to argue this further. These meetings are regularly announced on the Austin Group mailing list. http://www.opengroup.org/austin/lists.html [^] . |
Mantis 1.1.6[^] Copyright © 2000 - 2008 Mantis Group |