Dataset schema (field : type : observed lengths)

  content             stringlengths (86 – 88.9k)
  title               stringlengths (0 – 150)
  question            stringlengths (1 – 35.8k)
  answers             list
  answers_scores      list
  non_answers         list
  non_answers_scores  list
  tags                list
  name                stringlengths (30 – 130)
Q: Application to Stress Test in a Windows .NET Application I am developing a Windows .NET application (WinForms) and I need to simulate a stress test of the database and the application (more than 100 connections). What tools do you recommend? A: Tools like AutomatedQA TestComplete allow you to make a script which simulates a user controlling your application. Running multiple scripts at the same time could be your stress test.
Application to Stress Test in a Windows .NET Application
I am developing a Windows .NET application (WinForms) and I need to simulate a stress test of the database and the application (more than 100 connections). What tools do you recommend?
[ "Tools like AutomatedQA TestComplete allow you to make a script which simulates a user controlling your application. Running multiple scripts at the same time could be your stress test.\n" ]
[ 0 ]
[]
[]
[ ".net", "stress_testing" ]
stackoverflow_0000055035_.net_stress_testing.txt
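If a full scripting tool like TestComplete is overkill for the database half of the test, a minimal C# sketch along these lines can open 100+ concurrent connections. The connection string, query, and thread count below are placeholders, not recommendations from the thread above:

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class DbStressSketch
    {
        static void Main()
        {
            for (int i = 0; i < 120; i++) // simulate 100+ concurrent connections
            {
                new Thread(delegate()
                {
                    // Placeholder connection string -- point it at a test database.
                    // Pooling is disabled so each thread holds a real physical connection.
                    using (SqlConnection conn = new SqlConnection(
                        "Server=.;Database=MyTestDb;Integrated Security=true;Pooling=false"))
                    {
                        conn.Open();
                        SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM MyTable", conn);
                        cmd.ExecuteScalar();                    // one representative query
                        Thread.Sleep(TimeSpan.FromSeconds(30)); // hold the connection open
                    }
                }).Start();
            }
        }
    }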
Q: Where does RegexBuddy store its working data between uses? Ok, so I'm an idiot. So I was working on a regex that took way too long to craft. After perfecting it, I upgraded my work machine with a blazing fast hard drive and realized that I never saved the regex anywhere and simply used RegexBuddy's autosave to store it. Dumb dumb dumb. I sent a copy of the regex to a coworker but now he can't find it (or the record of our communication). My best hope of finding the regex is to find it in RegexBuddy on the old hard drive. RegexBuddy automatically saves whatever you were working on each time you close it. I've done some preliminary searches to try to determine where it actually saves that working data but I'm having no success. This question is the result of my dumb behavior but I thought it was a good chance to finally ask a question here. A: On my XP box, it was in the registry here: HKEY_CURRENT_USER\Software\JGsoft\RegexBuddy3\History There were two REG_BINARY keys called Action0 and Action1 that had hex data containing my two regexes from the history. The test data that I was testing the regex against was here: C:\Documents and Settings\<username>\Application Data\JGsoft\RegexBuddy 3 A: It depends on the OS, of course, but on Windows I would guess the application data directory. I can't remember the path on XP but on Vista it's something like this: C:\Users\<username>\AppData\ And then it would probably be here: C:\Users\<username>\AppData\Roaming
Where does RegexBuddy store its working data between uses?
Ok, so I'm an idiot. So I was working on a regex that took way too long to craft. After perfecting it, I upgraded my work machine with a blazing fast hard drive and realized that I never saved the regex anywhere and simply used RegexBuddy's autosave to store it. Dumb dumb dumb. I sent a copy of the regex to a coworker but now he can't find it (or the record of our communication). My best hope of finding the regex is to find it in RegexBuddy on the old hard drive. RegexBuddy automatically saves whatever you were working on each time you close it. I've done some preliminary searches to try to determine where it actually saves that working data but I'm having no success. This question is the result of my dumb behavior but I thought it was a good chance to finally ask a question here.
[ "On my XP box, it was in the registry here:\nHKEY_CURRENT_USER\\Software\\JGsoft\\RegexBuddy3\\History\n\nThere were two REG_BINARY keys called Action0 and Action1 that had hex data containing my two regexes from the history.\n\nThe test data that I was testing the regex against was here:\nC:\\Documents and Settings\\<username>\\Application Data\\JGsoft\\RegexBuddy 3\n\n", "It depends on the OS, of cause, but on Windows I would guess the application data directory. I can't remember the path on xp but on vista it's something like this:\nC:\\Users\\ user name \\AppData\\\nAnd then it would probably be here:\nC:\\Users\\ user name \\AppData\\roaming\n" ]
[ 9, 0 ]
[]
[]
[ "regexbuddy" ]
stackoverflow_0000055114_regexbuddy.txt
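If you need to pull the history out the same way, here is a short C# sketch that dumps those values. It assumes the RegexBuddy 3 key layout described in the first answer, and since the Action blobs are an undocumented binary format it simply prints them as ASCII so you can spot the regex text by eye:

    using System;
    using System.Text;
    using Microsoft.Win32;

    class RecoverRegexHistory
    {
        static void Main()
        {
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
                @"Software\JGsoft\RegexBuddy3\History"))
            {
                if (key == null) return; // no RegexBuddy 3 history on this machine
                foreach (string name in key.GetValueNames()) // e.g. Action0, Action1
                {
                    byte[] blob = key.GetValue(name) as byte[];
                    if (blob == null) continue; // skip non-binary values
                    // Undocumented format: the regex text is usually readable in the raw bytes.
                    Console.WriteLine("{0}: {1}", name, Encoding.ASCII.GetString(blob));
                }
            }
        }
    }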
Q: Best way to parse Space Separated Text I have a string like this /c SomeText\MoreText "Some Text\More Text\Lol" SomeText I want to tokenize it, however I can't just split on the spaces. I've come up with a somewhat ugly parser that works, but I'm wondering if anyone has a more elegant design. This is in C# btw. EDIT: My ugly version, while ugly, is O(N) and may actually be faster than using a RegEx. private string[] tokenize(string input) { string[] tokens = input.Split(' '); List<String> output = new List<String>(); for (int i = 0; i < tokens.Length; i++) { if (tokens[i].StartsWith("\"")) { string temp = tokens[i]; int k = 0; for (k = i + 1; k < tokens.Length; k++) { if (tokens[k].EndsWith("\"")) { temp += " " + tokens[k]; break; } else { temp += " " + tokens[k]; } } output.Add(temp); i = k + 1; } else { output.Add(tokens[i]); } } return output.ToArray(); } A: The computer term for what you're doing is lexical analysis; read that for a good summary of this common task. Based on your example, I'm guessing that you want whitespace to separate your words, but stuff in quotation marks should be treated as a "word" without the quotes. The simplest way to do this is to define a word as a regular expression: ([^"^\s]+)\s*|"([^"]+)"\s* This expression states that a "word" is either (1) non-quote, non-whitespace text surrounded by whitespace, or (2) non-quote text surrounded by quotes (followed by some whitespace). Note the use of capturing parentheses to highlight the desired text. Armed with that regex, your algorithm is simple: search your text for the next "word" as defined by the capturing parentheses, and return it. Repeat that until you run out of "words". Here's the simplest bit of working code I could come up with, in VB.NET. Note that we have to check both groups for data since there are two sets of capturing parentheses. Dim token As String Dim r As Regex = New Regex("([^""^\s]+)\s*|""([^""]+)""\s*") Dim m As Match = r.Match("this is a ""test string""") While m.Success token = m.Groups(1).ToString If token.length = 0 And m.Groups.Count > 1 Then token = m.Groups(2).ToString End If m = m.NextMatch End While Note 1: Will's answer, above, is the same idea as this one. Hopefully this answer explains the details behind the scene a little better :) A: The Microsoft.VisualBasic.FileIO namespace (in Microsoft.VisualBasic.dll) has a TextFieldParser you can use to split on space-delimited text. It handles strings within quotes (i.e., "this is one token" thisistokentwo) well. Note, just because the DLL says VisualBasic doesn't mean you can only use it in a VB project. It's part of the entire Framework. A: There is the state machine approach. private enum State { None = 0, InToken, InQuote } private static IEnumerable<string> Tokenize(string input) { input += ' '; // ensure we end on whitespace State state = State.None; State? next = null; // setting the next state implies that we have found a token StringBuilder sb = new StringBuilder(); foreach (char c in input) { switch (state) { default: case State.None: if (char.IsWhiteSpace(c)) continue; else if (c == '"') { state = State.InQuote; continue; } else state = State.InToken; break; case State.InToken: if (char.IsWhiteSpace(c)) next = State.None; else if (c == '"') next = State.InQuote; break; case State.InQuote: if (c == '"') next = State.None; break; } if (next.HasValue) { yield return sb.ToString(); sb = new StringBuilder(); state = next.Value; next = null; } else sb.Append(c); } } It can easily be extended for things like nested quotes and escaping. Returning as IEnumerable<string> allows your code to only parse as much as you need. There aren't any real downsides to that kind of lazy approach as strings are immutable so you know that input isn't going to change before you have parsed the whole thing. See: http://en.wikipedia.org/wiki/Automata-Based_Programming A: You also might want to look into regular expressions. That might help you out. Here is a sample ripped off from MSDN... using System; using System.Text.RegularExpressions; public class Test { public static void Main () { // Define a regular expression for repeated words. Regex rx = new Regex(@"\b(?<word>\w+)\s+(\k<word>)\b", RegexOptions.Compiled | RegexOptions.IgnoreCase); // Define a test string. string text = "The the quick brown fox fox jumped over the lazy dog dog."; // Find matches. MatchCollection matches = rx.Matches(text); // Report the number of matches found. Console.WriteLine("{0} matches found in:\n {1}", matches.Count, text); // Report on each match. foreach (Match match in matches) { GroupCollection groups = match.Groups; Console.WriteLine("'{0}' repeated at positions {1} and {2}", groups["word"].Value, groups[0].Index, groups[1].Index); } } } // The example produces the following output to the console: // 3 matches found in: // The the quick brown fox fox jumped over the lazy dog dog. // 'The' repeated at positions 0 and 4 // 'fox' repeated at positions 20 and 25 // 'dog' repeated at positions 50 and 54
Best way to parse Space Separated Text
I have a string like this /c SomeText\MoreText "Some Text\More Text\Lol" SomeText I want to tokenize it, however I can't just split on the spaces. I've come up with a somewhat ugly parser that works, but I'm wondering if anyone has a more elegant design. This is in C# btw. EDIT: My ugly version, while ugly, is O(N) and may actually be faster than using a RegEx. private string[] tokenize(string input) { string[] tokens = input.Split(' '); List<String> output = new List<String>(); for (int i = 0; i < tokens.Length; i++) { if (tokens[i].StartsWith("\"")) { string temp = tokens[i]; int k = 0; for (k = i + 1; k < tokens.Length; k++) { if (tokens[k].EndsWith("\"")) { temp += " " + tokens[k]; break; } else { temp += " " + tokens[k]; } } output.Add(temp); i = k + 1; } else { output.Add(tokens[i]); } } return output.ToArray(); }
[ "The computer term for what you're doing is lexical analysis; read that for a good summary of this common task.\nBased on your example, I'm guessing that you want whitespace to separate your words, but stuff in quotation marks should be treated as a \"word\" without the quotes.\nThe simplest way to do this is to define a word as a regular expression:\n([^\"^\\s]+)\\s*|\"([^\"]+)\"\\s*\n\nThis expression states that a \"word\" is either (1) non-quote, non-whitespace text surrounded by whitespace, or (2) non-quote text surrounded by quotes (followed by some whitespace). Note the use of capturing parentheses to highlight the desired text.\nArmed with that regex, your algorithm is simple: search your text for the next \"word\" as defined by the capturing parentheses, and return it. Repeat that until you run out of \"words\".\nHere's the simplest bit of working code I could come up with, in VB.NET. Note that we have to check both groups for data since there are two sets of capturing parentheses.\nDim token As String\nDim r As Regex = New Regex(\"([^\"\"^\\s]+)\\s*|\"\"([^\"\"]+)\"\"\\s*\")\nDim m As Match = r.Match(\"this is a \"\"test string\"\"\")\n\nWhile m.Success\n token = m.Groups(1).ToString\n If token.length = 0 And m.Groups.Count > 1 Then\n token = m.Groups(2).ToString\n End If\n m = m.NextMatch\nEnd While\n\nNote 1: Will's answer, above, is the same idea as this one. Hopefully this answer explains the details behind the scene a little better :)\n", "The Microsoft.VisualBasic.FileIO namespace (in Microsoft.VisualBasic.dll) has a TextFieldParser you can use to split on space delimeted text. It handles strings within quotes (i.e., \"this is one token\" thisistokentwo) well.\nNote, just because the DLL says VisualBasic doesn't mean you can only use it in a VB project. Its part of the entire Framework.\n", "There is the state machine approach.\n private enum State\n {\n None = 0,\n InTokin,\n InQuote\n }\n\n private static IEnumerable<string> Tokinize(string input)\n {\n input += ' '; // ensure we end on whitespace\n State state = State.None;\n State? next = null; // setting the next state implies that we have found a tokin\n StringBuilder sb = new StringBuilder();\n foreach (char c in input)\n {\n switch (state)\n {\n default:\n case State.None:\n if (char.IsWhiteSpace(c))\n continue;\n else if (c == '\"')\n {\n state = State.InQuote;\n continue;\n }\n else\n state = State.InTokin;\n break;\n case State.InTokin:\n if (char.IsWhiteSpace(c))\n next = State.None;\n else if (c == '\"')\n next = State.InQuote;\n break;\n case State.InQuote:\n if (c == '\"')\n next = State.None;\n break;\n }\n if (next.HasValue)\n {\n yield return sb.ToString();\n sb = new StringBuilder();\n state = next.Value;\n next = null;\n }\n else\n sb.Append(c);\n }\n }\n\nIt can easily be extended for things like nested quotes and escaping. Returning as IEnumerable<string> allows your code to only parse as much as you need. There aren't any real downsides to that kind of lazy approach as strings are immutable so you know that input isn't going to change before you have parsed the whole thing.\nSee: http://en.wikipedia.org/wiki/Automata-Based_Programming\n", "You also might want to look into regular expressions. That might help you out. 
Here is a sample ripped off from MSDN...\nusing System;\nusing System.Text.RegularExpressions;\n\npublic class Test\n{\n\n public static void Main ()\n {\n\n // Define a regular expression for repeated words.\n Regex rx = new Regex(@\"\\b(?<word>\\w+)\\s+(\\k<word>)\\b\",\n RegexOptions.Compiled | RegexOptions.IgnoreCase);\n\n // Define a test string. \n string text = \"The the quick brown fox fox jumped over the lazy dog dog.\";\n\n // Find matches.\n MatchCollection matches = rx.Matches(text);\n\n // Report the number of matches found.\n Console.WriteLine(\"{0} matches found in:\\n {1}\", \n matches.Count, \n text);\n\n // Report on each match.\n foreach (Match match in matches)\n {\n GroupCollection groups = match.Groups;\n Console.WriteLine(\"'{0}' repeated at positions {1} and {2}\", \n groups[\"word\"].Value, \n groups[0].Index, \n groups[1].Index);\n }\n\n }\n\n}\n// The example produces the following output to the console:\n// 3 matches found in:\n// The the quick brown fox fox jumped over the lazy dog dog.\n// 'The' repeated at positions 0 and 4\n// 'fox' repeated at positions 20 and 25\n// 'dog' repeated at positions 50 and 54\n\n" ]
[ 16, 7, 3, 0 ]
[ "Craig is right — use regular expressions. Regex.Split may be more concise for your needs.\n", "\n[^\\t]+\\t|\"[^\"]+\"\\t\n\nusing the Regex definitely looks like the best bet, however this one just returns the whole string. I'm trying to tweak it, but not much luck so far.\nstring[] tokens = System.Text.RegularExpressions.Regex.Split(this.BuildArgs, @\"[^\\t]+\\t|\"\"[^\"\"]+\"\"\\t\");\n\n" ]
[ -1, -1 ]
[ "c#", "string", "tokenize" ]
stackoverflow_0000054866_c#_string_tokenize.txt
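Since the question is C#, the accepted regex idea translates directly. A sketch (note it drops the stray ^ inside the original pattern's character class, which only excluded literal carets from tokens):

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    class Tokenizer
    {
        static string[] Tokenize(string input)
        {
            List<string> output = new List<string>();
            // Group 1 captures a bare word; group 2 captures a quoted phrase without its quotes.
            Regex r = new Regex(@"([^""\s]+)\s*|""([^""]+)""\s*");
            for (Match m = r.Match(input); m.Success; m = m.NextMatch())
            {
                output.Add(m.Groups[1].Success ? m.Groups[1].Value : m.Groups[2].Value);
            }
            return output.ToArray();
        }

        static void Main()
        {
            string input = "/c SomeText\\MoreText \"Some Text\\More Text\\Lol\" SomeText";
            foreach (string token in Tokenize(input))
                Console.WriteLine(token);
        }
    }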
Q: How do I insert text into a textbox after popping up another window to request information? I have an asp.net web page written in C#. Using some JavaScript I pop up another .aspx page which has a few controls that are filled in and from which I create a small snippet of text. When the user clicks OK on that dialog box I want to insert that piece of text into a textbox on the page that initially "popped up" the dialog/popup page. I'm guessing that this will involve JavaScript, which is not a strong point of mine. How do I do this? A: You will have to do something like: parent.opener.document.getElementById('ParentTextBox').value = "New Text"; A: What you could do is create an ajax modal pop-up instead of a new window. The semantic and aesthetic value is greater, not to mention the data-passing is much easier. http://www.asp.net/ajax/ajaxcontroltoolkit/samples/modalpopup/modalpopup.aspx
How do I insert text into a textbox after popping up another window to request information?
I have an asp.net web page written in C#. Using some JavaScript I pop up another .aspx page which has a few controls that are filled in and from which I create a small snippet of text. When the user clicks OK on that dialog box I want to insert that piece of text into a textbox on the page that initially "popped up" the dialog/popup page. I'm guessing that this will involve JavaScript, which is not a strong point of mine. How do I do this?
[ "You will have to do something like:\nparent.opener.document.getElemenyById('ParentTextBox').value = \"New Text\";\n\n", "What you could do is create an ajax modal pop-up instead of a new window. The semantic and aesthetic value is greater not to mention the data-passing is much easier.\nhttp://www.asp.net/ajax/ajaxcontroltoolkit/samples/modalpopup/modalpopup.aspx\n" ]
[ 6, 2 ]
[]
[]
[ "asp.net", "c#", "javascript" ]
stackoverflow_0000055203_asp.net_c#_javascript.txt
Q: How can I return an anonymous type from a method? I have a LINQ query that I want to call from multiple places: var myData = from a in db.MyTable where a.MyValue == "A" select new { a.Key, a.MyValue }; How can I create a method, put this code in it, and then call it? public ??? GetSomeData() { // my LINQ query } A: IQueryable and IEnumerable both work. But you want to use a type-specific version, IQueryable<T> or IEnumerable<T>. So you'll want to create a type to keep the data. var myData = from a in db.MyTable where a.MyValue == "A" select new MyType { Key = a.Key, Value = a.MyValue }; A: IQueryable So your method declaration would look like public IQueryable GetSomeData() A: A generic method should give you intellisense: public class MyType {Key{get;set;} Value{get;set}} public IQueryable<T> GetSomeData<T>() where T : MyType, new() { return from a in db.MyTable where a.MyValue == "A" select new T {Key=a.Key,Value=a.MyValue}; } A: If you want to return, you need a type. Instead of var, declare using IEnumerable<> and return that variable. Iterating through it actually executes the query.
How can I return an anonymous type from a method?
I have a LINQ query that I want to call from multiple places: var myData = from a in db.MyTable where a.MyValue == "A" select new { a.Key, a.MyValue }; How can I create a method, put this code in it, and then call it? public ??? GetSomeData() { // my LINQ query }
[ "IQueryable and IEnumerable both work. But you want to use a type specific version, IQueryable<T> or IEnumerable <T>.\nSo you'll want to create a type to keep the data.\nvar myData = from a in db.MyTable\n where a.MyValue == \"A\"\n select new MyType\n {\n Key = a.Key,\n Value = a.MyValue\n };\n\n", "IQueryable\nSo your method declaration would look like\npublic IQueryable GetSomeData()\n\n", "A generic method should give you intellisense:\npublic class MyType {Key{get;set;} Value{get;set}}\n\npublic IQueryable<T> GetSomeData<T>() where T : MyType, new() \n { return from a in db.MyTable\n where a.MyValue == \"A\" \n select new T {Key=a.Key,Value=a.MyValue};\n }\n\n", "If you want to return, you need a type.\nInstead of var, declare using IEnumerable<> and return that variable. Iterating through it actually executes the query.\n" ]
[ 10, 8, 3, 2 ]
[]
[]
[ "c#", "data_structures", "linq", "parameter_passing" ]
stackoverflow_0000055101_c#_data_structures_linq_parameter_passing.txt
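Pulling the accepted advice into one self-contained sketch: MyType is the named type the answers recommend, and the Row class stands in for whatever your data context's MyTable actually exposes (the property types here are assumptions):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class MyType
    {
        public int Key { get; set; }      // assumed types -- match them
        public string Value { get; set; } // to your table's actual columns
    }

    public class Row
    {
        public int Key;
        public string MyValue;
    }

    class Program
    {
        // In real code the parameter would be db.MyTable from your data context.
        static IEnumerable<MyType> GetSomeData(IEnumerable<Row> myTable)
        {
            return from a in myTable
                   where a.MyValue == "A"
                   select new MyType { Key = a.Key, Value = a.MyValue };
        }

        static void Main()
        {
            Row[] table = { new Row { Key = 1, MyValue = "A" },
                            new Row { Key = 2, MyValue = "B" } };
            foreach (MyType item in GetSomeData(table))
                Console.WriteLine("{0}: {1}", item.Key, item.Value);
        }
    }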
Q: Files on Windows and Contiguous Sectors Is there a way to guarantee that a file on Windows (using the NTFS file system) will use contiguous sectors on the hard disk? In other words, the first chunk of the file will be stored in a certain sector, the second chunk of the file will be stored in the next sector, and so on. I should add that I want to be able to create this file programmatically, so I'd rather not just ask the user to defrag their hard drive after creating this file. If there is a way to programmatically defrag just the file that I create, then that would be OK too. A: I would start here: http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx and follow Mark's documentation of the defrag stuff: http://technet.microsoft.com/en-us/sysinternals/bb897427.aspx A: I know of no such guarantees. But also keep in mind that NTFS "files" are comprised of multiple data streams. So you are actually looking for a way to guarantee that a stream is contiguous. A: I believe there's no way to achieve that. You can only defragment the file after it's been written.
Files on Windows and Contiguous Sectors
Is there a way to guarantee that a file on Windows (using the NTFS file system) will use contiguous sectors on the hard disk? In other words, the first chunk of the file will be stored in a certain sector, the second chunk of the file will be stored in the next sector, and so on. I should add that I want to be able to create this file programmatically, so I'd rather not just ask the user to defrag their hard drive after creating this file. If there is a way to programmatically defrag just the file that I create, then that would be OK too.
[ "I would start here:\nhttp://technet.microsoft.com/en-us/sysinternals/bb897428.aspx\nand follow Mark's documentation of the defrag stuff:\nhttp://technet.microsoft.com/en-us/sysinternals/bb897427.aspx\n", "I know of no such guarantees.\nBut also keep in mind that NTFS \"files\" are comprised of multiple data streams. So you are actually looking for a way to guarantee that a stream is contiguous.\n", "I believe there's no way to achieve that. You can only defragment the file after it's been written.\n" ]
[ 7, 1, 0 ]
[]
[]
[ "filesystems", "windows" ]
stackoverflow_0000055256_filesystems_windows.txt
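One cheap mitigation worth mentioning: reserving the file's final size before writing gives NTFS the chance to allocate one large free run. It improves the odds of contiguity without guaranteeing it, as the answers note. A minimal C# sketch:

    using System.IO;

    class Preallocate
    {
        static void Main()
        {
            using (FileStream fs = new FileStream("big.dat", FileMode.Create, FileAccess.Write))
            {
                fs.SetLength(100L * 1024 * 1024); // reserve 100 MB up front
                // ... then seek back and write the real data into the reserved space ...
            }
        }
    }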
Q: Find checkout history for SVN working folder We have an intranet site backed by SVN, such that the site is a checked-out copy of the repository (working folder used only by IIS). Something on the site has been causing problems today, and I want to know how to find out what was checked out to that working folder in the last 48 hours. Update: If there's an option I need to turn on to enable this in the future, what is it? Also, as a corollary question, if I have to use the file creation time, how can I do that quickly in a recursive manner for a large folder? If I have to check creation times, then this question will be helpful to the solution as well. A: You could use creation-dates on the local files. You can't use modification-dates because Subversion sets those to last-changed upon checkout. Also Subversion can log checkouts, but that's server-side A: All the code in the web folder should be backed by SVN commits, shouldn't it? If this is the case you should easily be able to track the problem down just by looking through your SVN logs at the last few changes that got committed. svn info will tell you which revision the working copy currently is at, so you know where to start looking Once you track down the commit with the bug in it, you can use svn blame to find the person that did it, and explain to them what they overlooked and how they caused the bug. Then you can make them buy everyone lunch for screwing up the site. If you have locally modified/added any files which aren't in SVN, then svn stat and svn diff will show you what those changes are, so you can figure out if they are causing the problem too. You should then revert those changes so your working copy is a clean checkout, or commit the changes into the repository. There's nothing worse than trying to track down a bug in your code only to find out 3 hours later that the bug is not actually in any of your code, but in some stupid local tweak someone made in the working copy that never got committed :-( A: Depending how you access your SVN repo - if you're accessing it as file:// URLs, I think you're out of luck. But if you're using svnserve, or one of the HTTP gateways, you should be able to check your server logs for access to the SVN urls. A: I would run a svn st in the web folder (to find any files that are changed since the checkout) and compare that to the repository.
Find checkout history for SVN working folder
We have an intranet site backed by SVN, such that the site is a checked-out copy of the repository (working folder used only by IIS). Something on the site has been causing problems today, and I want to know how to find out what was checked out to that working folder in the last 48 hours. Update: If there's an option I need to turn on to enable this in the future, what is it? Also, as a corollary question, if I have to use the file creation time, how can I do that quickly in a recursive manner for a large folder? If I have to check creation times, then this question will be helpful to the solution as well.
[ "You could use creation-dates on the local files. You can't use modification-dates because Subversion sets those to last-changed upon checkout.\nAlso Subversion can log checkouts, but that's server-side\n", "All the code in the web folder should be backed by SVN commits, shouldn't it?\nIf this is the case you should easily be able to track the problem down just by looking through your SVN logs at the last few changes that got committed.\nsvn info will tell you which revision the working copy currently is at, so you know where to start looking\nOnce you track down the commit with the bug in it, you can use svn blame to find the person that did it, and explain to them what they overlooked and how they caused the bug. Then you can make them buy everyone lunch for screwing up the site.\nIf you have locally modified/added any files which aren't in SVN, then svn stat and svn diff will show you what those changes are, so you can figure out if they are causing the problem too. You should then revert those changes so your working copy is a clean checkout, or commit the changes into the repository. \nThere's nothing worse than trying to track down a bug in your code only to find out 3 hours later that the bug is not actually in any of your code, but in some stupid local tweak someone made in the working copy that never got committed :-(\n", "Depending how you access your SVN repo - if you're accessing it as file:// URLs, I think you're out of luck. But if you're using svnserve, or one of the HTTP gateways, you should be able to check your server logs for access to the SVN urls.\n", "I would run a svn st in the web folder (to find any files that are changed since the checkout) and compare that to the repository.\n" ]
[ 2, 2, 1, 0 ]
[]
[]
[ "svn", "tortoisesvn" ]
stackoverflow_0000055099_svn_tortoisesvn.txt
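For the corollary question, a quick C# sketch that recursively lists files created in the last 48 hours, skipping the .svn administrative folders:

    using System;
    using System.IO;

    class RecentFiles
    {
        static void Main(string[] args)
        {
            DateTime cutoff = DateTime.Now.AddHours(-48);
            DirectoryInfo root = new DirectoryInfo(args.Length > 0 ? args[0] : ".");
            foreach (FileInfo f in root.GetFiles("*", SearchOption.AllDirectories))
            {
                if (f.CreationTime >= cutoff && f.FullName.IndexOf(@"\.svn\") < 0)
                    Console.WriteLine("{0:u}  {1}", f.CreationTime, f.FullName);
            }
        }
    }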
Q: How to deal with arrays (declared on the stack) in C++? I have a class to parse a matrix that keeps the result in an array member: class Parser { ... double matrix_[4][4]; }; The user of this class needs to call an API function (as in, a function I have no control over, so I can't just change its interface to make things work more easily) that looks like this: void api_func(const double matrix[4][4]); The only way I have come up with for the caller to pass the array result to the function is by making the member public: void myfunc() { Parser parser; ... api_func(parser.matrix_); } Is this the only way to do things? I'm astounded by how inflexible multidimensional arrays declared like this are. I thought matrix_ would essentially be the same as a double** and I could cast (safely) between the two. As it turns out, I can't even find an unsafe way to cast between the things. Say I add an accessor to the Parser class: void* Parser::getMatrix() { return (void*)matrix_; } This will compile, but I can't use it, because there doesn't seem to be a way to cast back to the weirdo array type: // A smorgasbord of syntax errors... api_func((double[][])parser.getMatrix()); api_func((double[4][4])parser.getMatrix()); api_func((double**)parser.getMatrix()); // cast works but it's to the wrong type The error is: error C2440: 'type cast' : cannot convert from 'void *' to 'const double [4][4]' ...with an intriguing addendum: There are no conversions to array types, although there are conversions to references or pointers to arrays I can't determine how to cast to a reference or pointer to array either, albeit that it probably won't help me here. To be sure, at this point the matter is purely academic, as the void* casts are hardly cleaner than a single class member left public! A: Here's a nice, clean way: class Parser { public: typedef double matrix[4][4]; // ... const matrix& getMatrix() const { return matrix_; } // ... private: matrix matrix_; }; Now you're working with a descriptive type name rather than an array, but since it's a typedef the compiler will still allow passing it to the unchangeable API function that takes the base type. A: Try this. It compiles cleanly on gcc 4.1.3: typedef double FourSquare[4][4]; class Parser { private: double matrix_[4][4]; public: Parser() { for(int i=0; i<4; i++) for(int j=0; j<4; j++) matrix_[i][j] = i*j; } public: const FourSquare& GetMatrix() { return matrix_; } }; void api_func( const double matrix[4][4] ) { } int main( int argc, char** argv ) { Parser parser; api_func( parser.GetMatrix() ); return 0; } A: I've used a union like this to pass around matrices in the past: union matrix { double dflat[16]; double dmatr[4][4]; }; Then pass a pointer in to your setter and copy the data into the matrix in your class. There are ways of handling this otherwise (that are more generic), but this solution tends to be the cleanest in the end, in my experience. A: I thought matrix_ would essentially be the same as a double** In C there are true multi-dimensional arrays, not arrays of pointers to arrays, so a double[4][4] is a contiguous array of four double[4] arrays, equivalent to a double[16], not a (double*)[4]. There are no conversions to array types, although there are conversions to references or pointers to arrays Casting a value to a double[4][4] would attempt to construct one on the stack - equivalent to std::string(parser.getMatrix()) - except that the array doesn't supply a suitable constructor. You probably didn't want to do that, even if you could. Since the type encodes the stride, you need a full type (double[][] won't do). You can reinterpret cast the void* to ((double[4][4])*), and then take the reference. But it's easiest to typedef the matrix and return a reference of the correct type in the first place: typedef double matrix_t[4][4]; class Parser { double matrix_[4][4]; public: void* get_matrix () { return static_cast<void*>(matrix_); } const matrix_t& get_matrix_ref () const { return matrix_; } }; int main () { Parser p; matrix_t& data1 = *reinterpret_cast<matrix_t*>(p.get_matrix()); const matrix_t& data2 = p.get_matrix_ref(); } A: To elaborate on the selected answer, observe this line const matrix& getMatrix() const This is great, you don't have to worry about pointers and casting. You're returning a reference to the underlying matrix object. IMHO references are one of the best features of C++, which I miss when coding in straight C. If you're not familiar with the difference between references and pointers in C++, read this At any rate, you do have to be aware that if the Parser object which actually owns the underlying matrix object goes out of scope, any code which tries to access the matrix via that reference will now be referencing an out-of-scope object, and you'll crash.
How to deal with arrays (declared on the stack) in C++?
I have a class to parse a matrix that keeps the result in an array member: class Parser { ... double matrix_[4][4]; }; The user of this class needs to call an API function (as in, a function I have no control over, so I can't just change its interface to make things work more easily) that looks like this: void api_func(const double matrix[4][4]); The only way I have come up with for the caller to pass the array result to the function is by making the member public: void myfunc() { Parser parser; ... api_func(parser.matrix_); } Is this the only way to do things? I'm astounded by how inflexible multidimensional arrays declared like this are. I thought matrix_ would essentially be the same as a double** and I could cast (safely) between the two. As it turns out, I can't even find an unsafe way to cast between the things. Say I add an accessor to the Parser class: void* Parser::getMatrix() { return (void*)matrix_; } This will compile, but I can't use it, because there doesn't seem to be a way to cast back to the weirdo array type: // A smorgasbord of syntax errors... api_func((double[][])parser.getMatrix()); api_func((double[4][4])parser.getMatrix()); api_func((double**)parser.getMatrix()); // cast works but it's to the wrong type The error is: error C2440: 'type cast' : cannot convert from 'void *' to 'const double [4][4]' ...with an intriguing addendum: There are no conversions to array types, although there are conversions to references or pointers to arrays I can't determine how to cast to a reference or pointer to array either, albeit that it probably won't help me here. To be sure, at this point the matter is purely academic, as the void* casts are hardly cleaner than a single class member left public!
[ "Here's a nice, clean way:\nclass Parser\n{\npublic:\n typedef double matrix[4][4];\n\n // ...\n\n const matrix& getMatrix() const\n {\n return matrix_;\n }\n\n // ...\n\nprivate:\n matrix matrix_;\n};\n\nNow you're working with a descriptive type name rather than an array, but since it's a typedef the compiler will still allow passing it to the unchangeable API function that takes the base type.\n", "Try this. It compiles cleanly on gcc 4.1.3:\ntypedef double FourSquare[4][4];\n\nclass Parser\n{\n private:\n double matrix_[4][4];\n\n public:\n Parser()\n {\n for(int i=0; i<4; i++)\n for(int j=0; j<4; j++)\n matrix_[i][j] = i*j;\n }\n\n public:\n const FourSquare& GetMatrix()\n {\n return matrix_;\n }\n};\n\nvoid api_func( const double matrix[4][4] )\n{\n}\n\nint main( int argc, char** argv )\n{\n Parser parser;\n api_func( parser.GetMatrix() );\n return 0;\n}\n\n", "I've used a union like this to pass around matrices in the past:\nunion matrix {\n double dflat[16];\n double dmatr[4][4];\n};\n\nThen pass a pointer in to your setter and copy the data into the matrix in your class.\nThere are ways of handling this otherwise (that are more generic), but this solution tends to be the cleanest in the end, in my experience.\n", "\nI thought matrix_ would essentially be the same as a double**\n\nIn C there are true multi-dimensional arrays, not arrays of pointers to arrays, so a double[4][4] is a contiguous array of four double[4] arrays, equivalent to a double[16], not a (double*)[4]. \n\nThere are no conversions to array types, although there are conversions to references or pointers to arrays\n Casting a value to a double[4][4] would attempt to construct one on the stack - equivalent to std::string(parser.getMatrix()) - except that the array doesn't supply a suitable constructor. You probably did't want to do that, even if you could.\n\nSince the type encodes the stride, you need a full type (double[][] won't do). You can reinterpret cast the void* to ((double[4][4])*), and then take the reference. But it's easiest to typedef the matrix and return a reference of the correct type in the first place:\ntypedef double matrix_t[4][4];\n\nclass Parser\n{\n double matrix_[4][4];\npublic:\n void* get_matrix () { return static_cast<void*>(matrix_); }\n\n const matrix_t& get_matrix_ref () const { return matrix_; }\n};\n\nint main ()\n{\n Parser p;\n\n matrix_t& data1 = *reinterpret_cast<matrix_t*>(p.get_matrix());\n\n const matrix_t& data2 = p.get_matrix_ref();\n}\n\n", "To elaborate on the selected answer, observe this line\nconst matrix& getMatrix() const\n\nThis is great, you don't have to worry about pointers and casting. You're returning a reference to the underlying matrix object. IMHO references are one of the best features of C++, which I miss when coding in straight C.\nIf you're not familiar with the difference between references and pointers in C++, read this\nAt any rate, you do have to be aware that if the Parser object which actually owns the underlying matrix object goes out of scope, any code which tries to access the matrix via that reference will now be referencing an out-of-scope object, and you'll crash.\n" ]
[ 16, 6, 4, 4, 2 ]
[]
[]
[ "arrays", "c++" ]
stackoverflow_0000055093_arrays_c++.txt
Q: Are there best practices for testing security in an Agile development shop? Regarding Agile development, what are the best practices for testing security per release? If it is a monthly release, are there shops doing pen-tests every month? A: What's your application domain? It depends. Since you used the word "Agile", I'm guessing it's a web app. I have a nice easy answer for you. Go buy a copy of Burp Suite (it's the #1 Google result for "burp" --- a sure endorsement!); it'll cost you 99EU, or ~$180USD, or $98 Obama Dollars if you wait until November. Burp works as a web proxy. You browse through your web app using Firefox or IE or whatever, and it collects all the hits you generate. These hits get fed to a feature called "Intruder", which is a web fuzzer. Intruder will figure out all the parameters you provide to each one of your query handlers. It will then try crazy values for each parameter, including SQL, filesystem, and HTML metacharacters. On a typical complex form post, this is going to generate about 1500 hits, which you'll look through to identify scary --- or, more importantly in an Agile context, new --- error responses. Fuzzing every query handler in your web app at each release iteration is the #1 thing you can do to improve application security without instituting a formal "SDLC" and adding headcount. Beyond that, review your code for the major web app security hot spots: Use only parameterized prepared SQL statements; don't ever simply concatenate strings and feed them to your database handle. Filter all inputs to a white list of known good characters (alnum, basic punctuation), and, more importantly, output filter data from your query results to "neutralize" HTML metacharacters to HTML entities (quot, lt, gt, etc). Use long random hard-to-guess identifiers anywhere you're currently using simple integer row IDs in query parameters, and make sure user X can't see user Y's data just by guessing those identifiers. Test every query handler in your application to ensure that they function only when a valid, logged-on session cookie is presented. Turn on the XSRF protection in your web stack, which will generate hidden form token parameters on all your rendered forms, to prevent attackers from creating malicious links that will submit forms for unsuspecting users. Use bcrypt --- and nothing else --- to store hashed passwords. A: I'm no expert on Agile development, but I would imagine that integrating some basic automated pen-test software into your build cycle would be a good start. I have seen several software packages out there that will do basic testing and are well suited for automation. A: I'm not a security expert, but I think the most important fact you should be aware of, before testing security, is what you are trying to protect. Only if you know what you are trying to protect can you do a proper analysis of your security measures, and only then can you start testing those implemented measures. Very abstract, I know. However, I think it should be the first step of every security audit. A: Unit testing, Defense Programming and lots of logs Unit testing Make sure you unit test as early as possible (e.g. the password should be encrypted before sending, the SSL tunnel is working, etc). This would prevent your programmers from accidentally making the program insecure. Defense Programming I personally call this the Paranoid Programming but Wikipedia is never wrong (sarcasm). Basically, you add tests to your functions that check all the inputs: are the user's cookies valid? is he still currently logged in? are the function's parameters protected against SQL injection? (even though you know that the inputs are generated by your own functions, you will test anyway) Logging Log everything like crazy. It's easier to remove logs than to add them. A user has logged in? Log it. A user found a 404? Log it. The admin edited/deleted a post? Log it. Someone was able to access a restricted page? Log it. Don't be surprised if your log file reaches 15+ MB during your development phase. During beta, you can decide which logs to remove. If you want, you can add a flag to decide when a certain event is logged.
Are there best practices for testing security in an Agile development shop?
Regarding Agile development, what are the best practices for testing security per release? If it is a monthly release, are there shops doing pen-tests every month?
[ "What's your application domain? It depends. \nSince you used the word \"Agile\", I'm guessing it's a web app. I have a nice easy answer for you. \nGo buy a copy of Burp Suite (it's the #1 Google result for \"burp\" --- a sure endorsement!); it'll cost you 99EU, or ~$180USD, or $98 Obama Dollars if you wait until November. \nBurp works as a web proxy. You browse through your web app using Firefox or IE or whatever, and it collects all the hits you generate. These hits get fed to a feature called \"Intruder\", which is a web fuzzer. Intruder will figure out all the parameters you provide to each one of your query handlers. It will then try crazy values for each parameter, including SQL, filesystem, and HTML metacharacters. On a typical complex form post, this is going to generate about 1500 hits, which you'll look through to identify scary --- or, more importantly in an Agile context, new --- error responses.\nFuzzing every query handler in your web app at each release iteration is the #1 thing you can do to improve application security without instituting a formal \"SDLC\" and adding headcount. Beyond that, review your code for the major web app security hot spots:\n\nUse only parameterized prepared SQL statements; don't ever simply concatenate strings and feed them to your database handle.\nFilter all inputs to a white list of known good characters (alnum, basic punctuation), and, more importantly, output filter data from your query results to \"neutralize\" HTML metacharacters to HTML entities (quot, lt, gt, etc). \nUse long random hard-to-guess identifiers anywhere you're currently using simple integer row IDs in query parameters, and make sure user X can't see user Y's data just by guessing those identifiers.\nTest every query handler in your application to ensure that they function only when a valid, logged-on session cookie is presented.\nTurn on the XSRF protection in your web stack, which will generate hidden form token parameters on all your rendered forms, to prevent attackers from creating malicious links that will submit forms for unsuspecting users.\nUse bcrypt --- and nothing else --- to store hashed passwords.\n\n", "I'm no expert on Agile development, but I would imagine that integrating some basic automated pen-test software into your build cycle would be a good start. I have seen several software packages out there that will do basic testing and are well suited for automation.\n", "I'm not a security expert, but I think the most important fact you should be aware of, before testing security, is what you are trying to protect. Only if you know what you are trying to protect, you can do a proper analysis of your security measures and only then you can start testing those implemented measures.\nVery abstract, I know. However, I think it should be the first step of every security audit.\n", "Unit testing, Defense Programming and lots of logs\nUnit testing\nMake sure you unit test as early as possible (e.g. the password should be encrypted before sending, the SSL tunnel is working, etc). This would prevent your programmers from accidentally making the program insecure.\nDefense Programming\nI personally call this the Paranoid Programming but Wikipedia is never wrong (sarcasm). Basically, you add tests to your functions that checks all the inputs:\n\nis the user's cookies valid?\nis he still currently logged in?\nare the function's parameters protected against SQL injection? 
(even though you know that the input are generated by your own functions, you will test anyway)\n\nLogging\nLog everything like crazy. Its easier to remove logs then to add them. A user have logged in? Log it. A user found a 404? Log it. The admin edited/deleted a post? Log it. Someone was able to access a restricted page? Log it.\nDon't be surprised if your log file reaches 15+ Mb during your development phase. During beta, you can decide which logs to remove. If you want, you can add a flag to decide when a certain event is logged.\n" ]
[ 2, 1, 1, 1 ]
[]
[]
[ "agile", "security" ]
stackoverflow_0000002447_agile_security.txt
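To make the unit-testing point concrete, a hedged NUnit sketch; PasswordHasher is a hypothetical wrapper around whatever hashing your application actually uses, not a real library class:

    using NUnit.Framework;

    [TestFixture]
    public class PasswordStorageTests
    {
        [Test]
        public void StoredPasswordIsNeverPlaintext()
        {
            string stored = PasswordHasher.Hash("s3cret"); // hypothetical helper
            Assert.AreNotEqual("s3cret", stored);
        }

        [Test]
        public void HashesAreSalted()
        {
            // Two hashes of the same password should differ if a per-user salt is in play.
            Assert.AreNotEqual(PasswordHasher.Hash("s3cret"), PasswordHasher.Hash("s3cret"));
        }
    }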
Q: Kill a specific PHP script running on FastCGI / IIS? I'm a PHP developer, but honestly my knowledge of server management is somewhat lacking. I fired off a script today that took a regrettably long time to run, and because it had an embedded call to ignore_user_abort(), pressing "stop" in the browser was obviously futile. There was a time limit of 15 minutes enforced in the FastCGI settings, but this was still incessantly long since I really had to just wait it out before I could continue with anything else. Is there some way to manage/kill whatever PHP scripts are being executed by FastCGI at any given moment? A: Does the PHP process appear in Task Manager? I wonder what happens if you kill it there. Will IIS start another one to handle the next request?
Kill a specific PHP script running on FastCGI / IIS?
I'm a PHP developer, but honestly my knowledge of server management is somewhat lacking. I fired off a script today that took a regrettably long time to run, and because it had an embedded call to ignore_user_abort(), pressing "stop" in the browser was obviously futile. There was a time limit of 15 minutes enforced in the FastCGI settings, but this was still incessantly long since I really had to just wait it out before I could continue with anything else. Is there some way to manage/kill whatever PHP scripts are being executed by FastCGI at any given moment?
[ "Does the php process appear in the taskmanager?I wonder what happens if you kill it there. Will the IIS start another one to handle the next request?\n" ]
[ 1 ]
[]
[]
[ "fastcgi", "iis", "php" ]
stackoverflow_0000055279_fastcgi_iis_php.txt
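Killing the worker process does work, with the caveat that it takes down every request that worker is currently serving, not just the runaway script. A C# sketch of the programmatic version; the image name php-cgi is an assumption, so check Task Manager for the exact name first:

    using System;
    using System.Diagnostics;

    class KillPhpWorkers
    {
        static void Main()
        {
            foreach (Process p in Process.GetProcessesByName("php-cgi"))
            {
                Console.WriteLine("Killing PID {0}", p.Id);
                p.Kill(); // the FastCGI handler should spawn a fresh worker for the next request
            }
        }
    }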
Q: What common web exploits should I know about? I'm pretty green still when it comes to web programming; I've spent most of my time on client applications. So I'm curious about the common exploits I should fear/test for in my site. A: I'm posting the OWASP Top 10 2007 abbreviated list here so people don't have to look through to another link and in case the source goes down. Cross Site Scripting (XSS) XSS flaws occur whenever an application takes user supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser which can hijack user sessions, deface web sites, possibly introduce worms, etc. Injection Flaws Injection flaws, particularly SQL injection, are common in web applications. Injection occurs when user-supplied data is sent to an interpreter as part of a command or query. The attacker's hostile data tricks the interpreter into executing unintended commands or changing data. Malicious File Execution Code vulnerable to remote file inclusion (RFI) allows attackers to include hostile code and data, resulting in devastating attacks, such as total server compromise. Malicious file execution attacks affect PHP, XML and any framework which accepts filenames or files from users. Insecure Direct Object Reference A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, database record, or key, as a URL or form parameter. Attackers can manipulate those references to access other objects without authorization. Cross Site Request Forgery (CSRF) A CSRF attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks. Information Leakage and Improper Error Handling Applications can unintentionally leak information about their configuration, internal workings, or violate privacy through a variety of application problems. Attackers use this weakness to steal sensitive data, or conduct more serious attacks. Broken Authentication and Session Management Account credentials and session tokens are often not properly protected. Attackers compromise passwords, keys, or authentication tokens to assume other users' identities. Insecure Cryptographic Storage Web applications rarely use cryptographic functions properly to protect data and credentials. Attackers use weakly protected data to conduct identity theft and other crimes, such as credit card fraud. Insecure Communications Applications frequently fail to encrypt network traffic when it is necessary to protect sensitive communications. Failure to Restrict URL Access Frequently, an application only protects sensitive functionality by preventing the display of links or URLs to unauthorized users. Attackers can use this weakness to access and perform unauthorized operations by accessing those URLs directly. The Open Web Application Security Project -Adam A: OWASP keeps a list of the Top 10 web attacks to watch out for, in addition to a ton of other useful security information for web development. A: These three are the most important: Cross Site Request Forgery Cross Site Scripting SQL injection A: bool UserCredentialsOK(User user) { if (user.Name == "modesty") return false; else // perform other checks } A: Everyone's going to say "SQL Injection", because it's the scariest-sounding vulnerability and the easiest one to get your head around. Cross-Site Scripting (XSS) is going to come in second place, because it's also easy to understand. "Poor input validation" isn't a vulnerability, but rather an evaluation of a security best practice. Let's try this from a different perspective. Here are features that, when implemented in a web application, are likely to mess you up: Dynamic SQL (for instance, UI query builders). By now, you probably know that the only reliably safe way to use SQL in a web app is to use parameterized queries, where you explicitly bind each parameter in the query to a variable. The places where I see web apps most frequently break this rule is when the malicious input isn't an obvious parameter (like a name), but rather a query attribute. An obvious example is the iTunes-like "Smart Playlist" query builders you see on search sites, where things like where-clause operators are passed directly to the backend. Another great rock to turn over are table column sorts, where you'll see things like DESC exposed in HTTP parameters. File upload. File upload messes people up because file pathnames look suspiciously like URL pathnames, and because web servers make it easy to implement the "download" part just by aiming URLs at directories on the filesystem. 7 out of 10 upload handlers we test allow attackers to access arbitrary files on the server, because the app developers assumed the same permissions were applied to the filesystem "open()" call as are applied to queries. Password storage. If your application can mail me back my raw password when I lose it, you fail. There's a single safe reliable answer for password storage, which is bcrypt; if you're using PHP, you probably want PHPpass. Random number generation. A classic attack on web apps: reset another user's password, and, because the app is using the system's "rand()" function, which is not crypto-strong, the password is predictable. This also applies anywhere you're doing cryptography. Which, by the way, you shouldn't be doing: if you're relying on crypto anywhere, you're very likely vulnerable. Dynamic output. People put too much faith in input validation. Your chances of scrubbing user inputs of all possible metacharacters, especially in the real world, where metacharacters are necessary parts of user input, are low. A much better approach is to have a consistent regime of filtering database outputs and transforming them into HTML entities, like quot, gt, and lt. Rails will do this for you automatically. Email. Plenty of applications implement some sort of outbound mail capability that enable an attacker to either create an anonymous account, or use no account at all, to send attacker-controlled email to arbitrary email addresses. Beyond these features, the #1 mistake you are likely to make in your application is to expose a database row ID somewhere, so that user X can see data for user Y simply by changing a number from "5" to "6". A: SQL INJECTION ATTACKS. They are easy to avoid but all too common. NEVER EVER EVER EVER (did I mention "ever"?) trust user information passed to you from form elements. If your data is not vetted before being passed into other logical layers of your application, you might as well give the keys to your site to a stranger on the street. You do not mention what platform you are on but if on ASP.NET get a start with good ol' Scott Guthrie and his article "Tip/Trick: Guard Against SQL Injection Attacks". After that you need to consider what type of data you will permit users to submit into and eventually out of your database. If you permit HTML to be inserted and then later presented you are wide-open for Cross Site Scripting attacks (known as XSS). Those are the two that come to mind for me, but our very own Jeff Atwood had a good article at Coding Horror with a review of the book "19 Deadly Sins of Software Security". A: Most people here have mentioned SQL Injection and XSS, which is kind of correct, but don't be fooled - the most important thing you need to worry about as a web developer is INPUT VALIDATION, which is where XSS and SQL Injection stem from. For instance, if you have a form field that will only ever accept integers, make sure you're implementing something at both the client-side AND the server-side to sanitise the data. Check and double check any input data especially if it's going to end up in an SQL query. I suggest building an escaper function and wrap it around anything going into a query. For instance: $query = "SELECT field1, field2 FROM table1 WHERE field1 = '" . myescapefunc($userinput) . "'"; Likewise, if you're going to display any user-inputted information onto a webpage, make sure you've stripped any <script> tags or anything else that might result in Javascript execution (such as onLoad= onMouseOver= etc. attributes on tags). A: This is also a short little presentation on security by one of wordpress's core developers. Security in wordpress it covers all of the basic security problems in web apps. A: The most common are probably database injection attacks and cross-site scripting attacks; mainly because those are the easiest to accomplish (that's likely because those are the ones programmers are laziest about). A: You can see even on this site that the most damaging things you'll be looking after involve code injection into your application, so XSS (Cross Site Scripting) and SQL injection (@Patrick's suggestions) are your biggest concerns. Basically you're going to want to make sure that if your application allows for a user to inject any code whatsoever, it's regulated and tested to be sure that only things you're sure you want to allow (an html link, image, etc) are passed, and nothing else is executed. A: SQL Injection. Cross Site Scripting. A: Using stored procedures and/or parameterized queries will go a long way in protecting you from sql injection. Also do NOT have your web app access the database as sa or dbo - set up a standard user account and set the permissions. As for XSS (cross site scripting) ASP.NET has some built in protections. The best thing is to filter input using validation controls and Regex. A: I'm no expert, but from what I learned so far the golden rule is not to trust any user data (GET, POST, COOKIE). Common attack types and how to save yourself: SQL Injection Attack: Use prepared queries Cross Site Scripting: Send no user data to browser without filtering/escaping first. This also includes user data stored in the database, which originally came from users.
What common web exploits should I know about?
I'm pretty green still when it comes to web programming; I've spent most of my time on client applications. So I'm curious about the common exploits I should fear/test for in my site.
[ "I'm posting the OWASP Top 2007 abbreviated list here so people don't have to look through to another link and in case the source goes down.\nCross Site Scripting (XSS)\n\nXSS flaws occur whenever an application takes user supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser which can hijack user sessions, deface web sites, possibly introduce worms, etc.\n\nInjection Flaws\n\nInjection flaws, particularly SQL injection, are common in web applications. Injection occurs when user-supplied data is sent to an interpreter as part of a command or query. The attacker's hostile data tricks the interpreter into executing unintended commands or changing data.\n\nMalicious File Execution\n\nCode vulnerable to remote file inclusion (RFI) allows attackers to include hostile code and data, resulting in devastating attacks, such as total server compromise. Malicious file execution attacks affect PHP, XML and any framework which accepts filenames or files from users.\n\nInsecure Direct Object Reference\n\nA direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, database record, or key, as a URL or form parameter. Attackers can manipulate those references to access other objects without authorization.\n\nCross Site Request Forgery (CSRF)\n\nA CSRF attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks.\n\nInformation Leakage and Improper Error Handling\n\nApplications can unintentionally leak information about their configuration, internal workings, or violate privacy through a variety of application problems. Attackers use this weakness to steal sensitive data, or conduct more serious attacks.\n\nBroken Authentication and Session Management\n\nAccount credentials and session tokens are often not properly protected. Attackers compromise passwords, keys, or authentication tokens to assume other users' identities.\n\nInsecure Cryptographic Storage\n\nWeb applications rarely use cryptographic functions properly to protect data and credentials. Attackers use weakly protected data to conduct identity theft and other crimes, such as credit card fraud.\n\nInsecure Communications\n\nApplications frequently fail to encrypt network traffic when it is necessary to protect sensitive communications.\n\nFailure to Restrict URL Access\n\nFrequently, an application only protects sensitive functionality by preventing the display of links or URLs to unauthorized users. Attackers can use this weakness to access and perform unauthorized operations by accessing those URLs directly.\n\nThe Open Web Application Security Project\n-Adam\n", "OWASP keeps a list of the Top 10 web attacks to watch our for, in addition to a ton of other useful security information for web development.\n", "These three are the most important:\n\nCross Site Request Forgery\nCross Site Scripting\nSQL injection\n\n", "bool UserCredentialsOK(User user)\n{\n\n if (user.Name == \"modesty\")\n return false;\n else\n // perform other checks\n} \n\n", "Everyone's going to say \"SQL Injection\", because it's the scariest-sounding vulnerability and the easiest one to get your head around. 
Cross-Site Scripting (XSS) is going to come in second place, because it's also easy to understand. \"Poor input validation\" isn't a vulnerability, but rather an evaluation of a security best practice.\nLet's try this from a different perspective. Here are features that, when implemented in a web application, are likely to mess you up:\n\nDynamic SQL (for instance, UI query builders). By now, you probably know that the only reliably safe way to use SQL in a web app is to use parameterized queries, where you explicitly bind each parameter in the query to a variable. The place where I see web apps most frequently break this rule is when the malicious input isn't an obvious parameter (like a name), but rather a query attribute. An obvious example is the iTunes-like \"Smart Playlist\" query builders you see on search sites, where things like where-clause operators are passed directly to the backend. Another great rock to turn over is table column sorts, where you'll see things like DESC exposed in HTTP parameters.\nFile upload. File upload messes people up because file pathnames look suspiciously like URL pathnames, and because web servers make it easy to implement the \"download\" part just by aiming URLs at directories on the filesystem. 7 out of 10 upload handlers we test allow attackers to access arbitrary files on the server, because the app developers assumed the same permissions were applied to the filesystem \"open()\" call as are applied to queries.\nPassword storage. If your application can mail me back my raw password when I lose it, you fail. There's a single safe, reliable answer for password storage, which is bcrypt; if you're using PHP, you probably want phpass.\nRandom number generation. A classic attack on web apps: reset another user's password, and, because the app is using the system's \"rand()\" function, which is not crypto-strong, the password is predictable. This also applies anywhere you're doing cryptography. Which, by the way, you shouldn't be doing: if you're relying on crypto anywhere, you're very likely vulnerable.\nDynamic output. People put too much faith in input validation. Your chances of scrubbing user inputs of all possible metacharacters, especially in the real world, where metacharacters are necessary parts of user input, are low. A much better approach is to have a consistent regime of filtering database outputs and transforming them into HTML entities, like quot, gt, and lt. Rails will do this for you automatically.\nEmail. Plenty of applications implement some sort of outbound mail capability that enables an attacker to either create an anonymous account, or use no account at all, to send attacker-controlled email to arbitrary email addresses. \n\nBeyond these features, the #1 mistake you are likely to make in your application is to expose a database row ID somewhere, so that user X can see data for user Y simply by changing a number from \"5\" to \"6\". \n", "SQL INJECTION ATTACKS. They are easy to avoid but all too common.\nNEVER EVER EVER EVER (did I mention \"ever\"?) trust user information passed to you from form elements. 
If your data is not vetted before being passed into other logical layers of your application, you might as well give the keys to your site to a stranger on the street.\nYou do not mention what platform you are on, but if you are on ASP.NET, get a start with good ol' Scott Guthrie and his article \"Tip/Trick: Guard Against SQL Injection Attacks\".\nAfter that you need to consider what type of data you will permit users to submit into, and eventually out of, your database. If you permit HTML to be inserted and then later presented you are wide open to Cross Site Scripting attacks (known as XSS).\nThose are the two that come to mind for me, but our very own Jeff Atwood had a good article at Coding Horror with a review of the book \"19 Deadly Sins of Software Security\".\n", "Most people here have mentioned SQL Injection and XSS, which is kind of correct, but don't be fooled - the most important thing you need to worry about as a web developer is INPUT VALIDATION, which is where XSS and SQL Injection stem from.\nFor instance, if you have a form field that will only ever accept integers, make sure you're implementing something at both the client-side AND the server-side to sanitise the data. \nCheck and double-check any input data, especially if it's going to end up in an SQL query. I suggest building an escaper function and wrapping it around anything going into a query. For instance:\n$query = \"SELECT field1, field2 FROM table1 WHERE field1 = '\" . myescapefunc($userinput) . \"'\";\n\nLikewise, if you're going to display any user-inputted information on a webpage, make sure you've stripped any <script> tags or anything else that might result in JavaScript execution (such as onLoad= onMouseOver= etc. attributes on tags).\n", "Here is a short presentation on security by one of WordPress's core developers.\nSecurity in WordPress\nIt covers all of the basic security problems in web apps.\n", "The most common are probably database injection attacks and cross-site scripting attacks, mainly because those are the easiest to accomplish (that's likely because those are the ones programmers are laziest about).\n", "You can see even on this site that the most damaging things you'll be looking out for involve code injection into your application, so XSS (Cross Site Scripting) and SQL injection (@Patrick's suggestions) are your biggest concerns.\nBasically you're going to want to make sure that if your application allows for a user to inject any code whatsoever, it's regulated and tested to be sure that only things you're sure you want to allow (an HTML link, image, etc.) are passed, and nothing else is executed.\n", "SQL Injection. Cross Site Scripting. \n", "Using stored procedures and/or parameterized queries will go a long way in protecting you from SQL injection. Also do NOT have your web app access the database as sa or dbo - set up a standard user account and set the permissions. \nAs for XSS (cross site scripting), ASP.NET has some built-in protections. The best thing is to filter input using validation controls and Regex. \n", "I'm no expert, but from what I've learned so far the golden rule is not to trust any user data (GET, POST, COOKIE). Common attack types and how to save yourself:\n\nSQL Injection Attack: Use prepared queries\nCross Site Scripting: Send no user data to the browser without filtering/escaping it first. This also includes user data stored in the database that originally came from users.\n\n" ]
[ 35, 29, 28, 10, 10, 3, 3, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "security", "testing" ]
stackoverflow_0000023102_security_testing.txt
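A minimal C#/ADO.NET sketch of the parameterized-query defense the answers above keep recommending. The connection string, table, column and userInput names are placeholders for the example, not taken from the thread:

    using System.Data.SqlClient;

    // The user-supplied value is bound as a parameter, never concatenated into the
    // SQL text, so characters like ' in userInput cannot change the statement's structure.
    string sql = "SELECT UserId, UserName FROM Users WHERE UserName = @name";
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@name", userInput);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* consume rows */ }
        }
    }

The driver sends the query text and the value separately, which is what makes this safe regardless of what the user types.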
Q: Reading VC++ CArchive Binary Format (or Java reading (CObArray)) Is there any clear documentation on the binary formats used to serialize the various MFC data structures? I've been able to view some of my own classes in a hex editor and use Java's ByteBuffer class to read them in (with automatic endianness conversions, etc). However, I am currently running into issues while trying to bring over the CObArray data, as there seems to be a rather large header that is opaque to me, and it is unclear how it is persisting object type information. Is there a set of online documentation that would be helpful for this? Or some sample Java code from someone that has dealt with this in the past? A: Since MFC ships with source code I would create a test MFC application that serializes a CObArray and step through the serialization code. This should give you all the information you need. A: I agree with jmatthias: use the MFC source code. There's also this page on MSDN that may be useful.
Reading VC++ CArchive Binary Format (or Java reading (CObArray))
Is there any clear documentation on the binary formats used to serialize the various MFC data structures? I've been able to view some of my own classes in a hex editor and use Java's ByteBuffer class to read them in (with automatic endianness conversions, etc). However, I am currently running into issues while trying to bring over the CObArray data, as there seems to be a rather large header that is opaque to me, and it is unclear how it is persisting object type information. Is there a set of online documentation that would be helpful for this? Or some sample Java code from someone that has dealt with this in the past?
[ "Since MFC ships with source code I would create a test MFC application that serializes a CObArray and step through the serialization code. This should give you all the information you need.\n", "I agree with jmatthias: use the MFC source code.\nThere's also this page on MSDN that may be useful.\n" ]
[ 3, 2 ]
[]
[]
[ "carchive", "java", "mfc", "serialization", "visual_c++" ]
stackoverflow_0000055369_carchive_java_mfc_serialization_visual_c++.txt
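A small sketch of the low-level reading involved, in C# for brevity (the same logic ports to Java's ByteBuffer with ByteOrder.LITTLE_ENDIAN, since CArchive data is little-endian). Classic MFC writes collection counts via CArchive::WriteCount as a 16-bit WORD that escapes to a 32-bit DWORD when the WORD is 0xFFFF; treat this, and especially the class-tag header layout, as something to verify by stepping through arccore.cpp as the answers suggest:

    using System.IO;

    // Mirrors CArchive::ReadCount from classic MFC (verify against your MFC version).
    static long ReadMfcCount(BinaryReader reader) // BinaryReader reads little-endian, like CArchive
    {
        ushort w = reader.ReadUInt16();  // WORD count
        if (w != 0xFFFF)
            return w;
        return reader.ReadUInt32();      // 0xFFFF escapes to a DWORD count
    }

The opaque header before the CObArray elements is the runtime-class record written by CArchive::WriteObject/WriteClass (class tags, schema number, class name); its exact layout is best confirmed by stepping through CArchive::ReadObject/ReadClass in the debugger, as suggested above.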
Q: Which C# project type would you use to redevelop an MFC C++ ActiveX control? Looking at the C# project templates in VS2008, the offerings are WPF User Control Library, WPF Custom Control Library and Windows Forms Control Library. Which of these would you use if you wanted to move a legacy ActiveX control written in C++ into the world of C# and .NET? A: It sounds like you are trying to do several different things all at once: Migrate your code to building in a newer version of visual studio. Migrate your use of technology to a newer technology (ActiveX to .net) Migrate your language (c++ to c#). If you have a small codebase you are probably as well to start from scratch and port functionality into the new codebase as required. For a larger codebase you need to realize that this is an expensive task both in effort and defect rate. An order might be: Import your code into the newer version of visual studio. Get it compiling. Review the project settings for each project. Refactor your code to isolate the mfc and activex code as much as possible. Follow good refactoring practices especially if you don't have many unit tests before you start. Consider replacing your ActiveX layer with .net. Consider which GUI toolkit is best for replacing MFC. Language - consider moving first to managed c++. Consider moving from managed c++ to c#. Most importantly be able to justify doing all of the above! A: There is no project template that will do this for you. You might as well read up and start with a usercontrol. A: You would have to consider the target application that will host the control. If it is a line-of-business application, I've heard that WPF doesn't offer great advantages over Forms. According to this blog entry however, the author believes that the killer WPF app is a LOB application that leverages the graphical power afforded by WPF for data visualisation. In the end I guess it is a cost/benefit analysis. Do you go down the WPF route and pay the cost of the learning curve for the future benefit of graphical data visualisation or do you stick with the tried and true method and risk developing an outdated application.
Which C# project type would you use to redevelop an MFC C++ ActiveX control?
Looking at the C# project templates in VS2008, the offerings are WPF User Control Library, WPF Custom Control Library and Windows Forms Control Library. Which of these would you use if you wanted to move a legacy ActiveX control written in C++ into the world of C# and .NET?
[ "It sounds like you are trying to do several different things all at once:\n\nMigrate your code to building in a newer version of visual studio.\nMigrate your use of technology to a newer technology (ActiveX to .net)\nMigrate your language (c++ to c#).\n\nIf you have a small codebase you are probably as well to start from scratch and port functionality into the new codebase as required.\nFor a larger codebase you need to realize that this is an expensive task both in effort and defect rate.\nAn order might be:\n\nImport your code into the newer version of visual studio. Get it compiling. Review the project settings for each project.\nRefactor your code to isolate the mfc and activex code as much as possible. Follow good refactoring practices especially if don't have many unit tests before you start.\nConsider replacing your ActiveX layer with .net.\nConsider which GUI toolkit is best for replacing MFC.\nLanguage - consider moving first to managed c++.\nConsider moving from managed c++ to c#.\n\nMost importantly be able to justify doing all of the above!\n", "There is no project template that will do this for you. You might as well read up and start with a usercontrol.\n", "You would have to consider the target application that will host the control. If it is a line of business application I've heard that WPF doesn't offer great advantages over Forms. According to this blog entry however, the author believes that the killer WPF is a LOB application that leverages the graphical power afforded by WPF for data visualisation.\nIn the end I guess it is a cost/benefit analysis. Do you go down the WPF route and pay the cost of the learning curve for the future benefit of graphical data visualisation or do you stick with the tried and true method and risk developing an outdated application.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "c#", "c++", "code_migration", "visual_studio" ]
stackoverflow_0000055451_c#_c++_code_migration_visual_studio.txt
Q: How to parse relative time? This question is the other side of the question asking, "How do I calculate relative time?". Given some human input for a relative time, how can you parse it? By default you would offset from DateTime.Now(), but could optionally offset from another DateTime. (Prefer answers in C#) Example input: "in 20 minutes" "5 hours ago" "3h 2m" "next week" Edit: Let's suppose we can define some limits on the input. This sort of code would be a useful thing to have out on the web. A: A Google search turns up the parsedatetime library (associated with the Chandler project), which is designed to do exactly this. It's open source (Apache License) and written in Python. It seems to be quite sophisticated -- from the homepage: parsedatetime is able to parse, for example, the following: * Aug 25 5pm * 5pm August 25 * next saturday ... * tomorrow * next thursday at 4pm * at 4pm * eod * in 5 minutes * 5 minutes from now * 5 hours before now * 2 days from tomorrow Since it's implemented in pure Python and doesn't use anything fancy, there's a good chance it's compatible with IronPython, so you could use it with .net. If you want specifically a C# solution, you could write something based on the algorithms they use... It also comes with a whole bunch of unit tests. A: That's building a DSL (domain-specific language) for date handling. I don't know if somebody has done one for .NET, but the construction of a DSL is fairly straightforward: Define the language precisely, which input forms you will accept and what will you do with ambiguities Construct the grammar for the language Build the finite state machine that parses your language into an actionable AST You can do all that by yourself (with the help of the Dragon Book, for instance) or with the help of tools to that effect, as shown in this link. Just by thinking hard about the possibilities you have a good chance, with the help of good UI examples, of covering more than half of the actual inputs your application will receive. If you aim to accept everything a human could possibly type, you can record the inputs determined to be ambiguous and then add them to the grammar whenever they can be interpreted, as there are things that will be inherently ambiguous. A: The ruby folks have attempted to tackle this with a parser called Chronic. Chronic RDocs Chronic on GitHub I watched an informative video presentation recently on how the author went about solving this problem. Chronic Presentation (San Diego Ruby Brigade) A: This is likely not all that helpful since you're talking c# but since no one's mentioned it yet you can try to take a look at php's excellent and utterly insane native strtotime function
How to parse relative time?
This question is the other side of the question asking, "How do I calculate relative time?". Given some human input for a relative time, how can you parse it? By default you would offset from DateTime.Now(), but could optionally offset from another DateTime. (Prefer answers in C#) Example input: "in 20 minutes" "5 hours ago" "3h 2m" "next week" Edit: Let's suppose we can define some limits on the input. This sort of code would be a useful thing to have out on the web.
[ "A Google search turns up the parsedatetime library (associated with the Chandler project), which is designed to do exactly this. It's open source (Apache License) and written in Python. It seems to be quite sophisticated -- from the homepage:\n\nparsedatetime is able to parse, for\n example, the following:\n* Aug 25 5pm\n* 5pm August 25\n* next saturday\n...\n* tomorrow\n* next thursday at 4pm\n* at 4pm\n* eod\n* in 5 minutes\n* 5 minutes from now\n* 5 hours before now\n* 2 days from tomorrow\n\n\nSince it's implemented in pure Python and doesn't use anything fancy, there's a good chance it's compatible with IronPython, so you could use it with .net. If you want specifically a C# solution, you could write something based on the algorithms they use...\nIt also comes with a whole bunch of unit tests.\n", "That's building a DSL (Domain specific language) for date handling. I don't know if somebody has done one for .NET but the construction of a DSL is fairly straightforward:\n\nDefine the language precisely, which input forms you will accept and what will you do with ambiguities\nConstruct the grammar for the language\nBuild the finite state machine that parses your language into an actionable AST\n\nYou can do all that by yourself (with the help of the Dragon Book, for instance) or with the help of tools to the effect, as shown in this link.\nJust by thinking hard about the possibilities you have a good chance, with the help of good UI examples, of covering more than half of the actual inputs your application will receive. If you aim to accept everything a human could possibly type, you can record the input determined as ambiguous and then add them to the grammar, whenever they can be interpreted, as there are things that will be inherently ambiguous.\n", "The ruby folks have attempted to tackle this with a parser called Chronic.\n\nChronic RDocs\nChronic on GitHub\n\nI watched an informative video presentation recently on how the author went about solving this problem. \n\nChronic Presentation (San Diego Ruby Brigade)\n\n", "This is likely not all that helpful since you're talking c# but since no one's mentioned it yet you can try to take a look at php's excellent and utterly insane native strtotime function\n" ]
[ 7, 3, 0, 0 ]
[ "This: http://www.codeproject.com/KB/edit/dateparser.aspx\nIs fairly close to what you are trying to accomplish. Not the most elegant solution, but certainly might save you some work.\n" ]
[ -1 ]
[ "c#", "language_agnostic", "parsing", "time" ]
stackoverflow_0000055434_c#_language_agnostic_parsing_time.txt
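For the simple offset cases in the question ("in 20 minutes", "5 hours ago"), a small regex-based C# sketch shows the shape of a hand-rolled parser; compound inputs like "3h 2m" and calendar words like "next week" need the fuller grammar/DSL treatment the answers describe. The method and group names here are illustrative, not from the thread:

    using System;
    using System.Text.RegularExpressions;

    static DateTime? ParseRelative(string input, DateTime origin)
    {
        // Matches "in 20 minutes", "5 hours ago", "2 days", etc.
        Match m = Regex.Match(input.Trim(),
            @"^(?:in\s+)?(?<n>\d+)\s*(?<unit>minute|min|hour|hr|h|day|d|week|w)s?(?:\s+(?<ago>ago))?$",
            RegexOptions.IgnoreCase);
        if (!m.Success)
            return null; // hand off to a fuller grammar for "next week", "3h 2m", ...

        int n = int.Parse(m.Groups["n"].Value);
        if (m.Groups["ago"].Success)
            n = -n;

        switch (char.ToLowerInvariant(m.Groups["unit"].Value[0]))
        {
            case 'm': return origin.AddMinutes(n);
            case 'h': return origin.AddHours(n);
            case 'd': return origin.AddDays(n);
            case 'w': return origin.AddDays(7 * n);
            default: return null;
        }
    }

Usage: ParseRelative("5 hours ago", DateTime.Now) returns the offset DateTime, or null when the input needs the richer parser.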
Q: What’s the best approach when migrating legacy projects across versions of visual studio? I've been thinking about the number of projects we have in-house that are still being developed using visual studio 6 and how best to migrate them forward onto visual studio 2008. The projects range in flavours of C/C++ and VB. Is it better to let VS2008 convert the workspaces into solutions, fix any compile errors and be on your merry way? Or, is it better to start with a clean solution and migrate code across project by project discarding dead code along the way? A: The Microsoft p&p team has recommended some strategies that answer this. Basically they recommend something like the project by project approach you mention. Of course, they're assuming a neatly architected application that has no nasty, dark corners from which late nights of coding and copious amounts of coffee spring. It doesn't hurt to let VS2008 convert the project for you and see how much effort is required to fix the errors. A: When I had to convert a VB6 app to VS2003 several years ago, I ran the converter and it produced something that basically compiled, but wasn't very good at all. I ended up having to modify a big chunk of the code it generated. I would start with a clean solution, then run the converter on a project and copy over only the code you need. One of the big differences I noticed between a VB6 project and the converted VB.NET project (WinForm) was with the built-in controls. The converter would try to preserve the type of controls you were using, even if they were old and outdated. So you might be better served by creating new forms with modern controls (text boxes, tab controls, etc), then copy in the code that you need.
What’s the best approach when migrating legacy projects across versions of visual studio?
I've been thinking about the number of projects we have in-house that are still being developed using visual studio 6 and how best to migrate them forward onto visual studio 2008. The projects range in flavours of C/C++ and VB. Is it better to let VS2008 convert the workspaces into solutions, fix any compile errors and be on your merry way? Or, is it better to start with a clean solution and migrate code across project by project discarding dead code along the way?
[ "The Microsoft p&p team has recommended some strategies that answers this. Basically they recommend something like the project by project approach you mention. Of course, they're assuming a neatly architected application that has no nasty, dark corners from which late nights of coding and copious amounts of coffee spring from.\nIt doesn't hurt to let VS2008 convert the project for you and see how much effort is required to fix the errors.\n", "When I had to convert a VB6 app to VS2003 several years ago, I ran the converter and it produced something that basically compiled, but wasn't very good at all. I ended up having to modify a big chunk of the code it generated.\nI would start with a clean solution, then run the converter on a project and copy over only the code you need. One of the big differences I noticed between a VB6 project and the converted VB.NET project (WinForm) was with the built-in controls. The converter would try to preserve the type of controls you were using, even if they were old and outdated. So you might be better served by creating new forms with modern controls (text boxes, tab controls, etc), then copy in the code that you need.\n" ]
[ 3, 2 ]
[]
[]
[ "legacy", "migration", "visual_studio" ]
stackoverflow_0000055448_legacy_migration_visual_studio.txt
Q: Accessing Datasource from Outside A Web Container (through JNDI) I'm trying to access a data source that is defined within a web container (JBoss) from a fat client outside the container. I've decided to look up the data source through JNDI. Actually, my persistence framework (Ibatis) does this. When performing queries I always end up getting this error: java.lang.IllegalAccessException: Method=public abstract java.sql.Connection java.sql.Statement.getConnection() throws java.sql.SQLException does not return Serializable Stacktrace: org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.doStatementMethod(WrapperDataSourceS ervice.java:411), org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.invoke(WrapperDataSourceService.java :223), sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source), sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25), java.lang.reflect.Method.invoke(Method.java:585), org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155), org.jboss.mx.server.Invocation.dispatch(Invocation.java:94), org.jboss.mx.server.Invocation.invoke(Invocation.java:86), org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264), org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659), My Datasource: <?xml version="1.0" encoding="UTF-8"?> <datasources> <local-tx-datasource> <jndi-name>jdbc/xxxxxDS</jndi-name> <connection-url>jdbc:oracle:thin:@xxxxxxxxx:1521:xxxxxxx</connection-url> <use-java-context>false</use-java-context> <driver-class>oracle.jdbc.driver.OracleDriver</driver-class> <user-name>xxxxxxxx</user-name> <password>xxxxxx</password> <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name> <min-pool-size>5</min-pool-size> <max-pool-size>20</max-pool-size> </local-tx-datasource> </datasources> Does anyone have a clue where this could come from? Maybe someone even knows a better way how to achieve this. Any hints are much appreciated! Cheers, Michael A: Not sure if this is the same issue? JBoss DataSource config DataSource wrappers are not usable outside of the server VM A: @Michael Well, java.sql.Connection is an Interface - it might technically be possible for the concrete implementation you're getting from JBoss to be Serializable - but I don't think you're really going to have any options you can use. If it was possible, it would probably be easy :) I think @toolkit might have said the right words with useable outside the VM - the JDBC drivers will be talking to native driver code running in the underlying OS I guess, so that might explain why you can't just pass a connection over the network elsewhere. My advice, (if you don't get any better advice!) would be to find a different approach - if you have access to locate the resource on the JBoss directory, maybe implement a proxy object that you can locate and obtain from the directory that allows you to use the connection remotely from your fat client. That's a design pattern called data transfer object I think Wikipedia entry A: I think the exception indicates that the SQLConnection object you're trying to retrieve doesn't implement the Serializable interface, so it can't be passed to you the way you asked for it. From the limited work I've done with JDNI, if you're asking for an object via JNDI it must be serializable. As far as I know, there's no way round that - if I think of a better way I'll post it up... 
OK, one obvious option is to provide a serializable object local to the datasource that uses it but doesn't have the datasource as part of its serializable object graph. The fat client could then look up that object and query it instead. Or create a (web?) service through which access to the datasource is governed - again your fat client would hit the service - this would probably be a better-encapsulated and more reusable approach if those are concerns for you. A: @toolkit: Well, not exactly. Since I can access the data source over JNDI, it is actually visible and thus usable. Or am I getting something totally wrong? @Brabster: I think you're on the right track. Isn't there a way to make the connection serializable? Maybe it's just a configuration issue... A: I've read up on Ibatis now - maybe you can make your implementations of Dao etc. Serializable, post them into your directory and so retrieve them and use them in your fat client? You'd get reuse benefits out of that too. Here's an example of something that looks similar for Wicket A: JBoss wraps up all DataSources with its own ones. That lets it play tricks with autocommit to get the specified J2EE behaviour out of a JDBC connection. They are mostly serializable. But you needn't trust them. I'd look carefully at its wrappers. I've written a surrogate for JBoss's J2EE wrapper for JDBC that works with OOCJNDI to get my DAO code unit-testable standalone. You just wrap java.sql.Driver, point OOCJNDI at your class, and run in JUnit. The Driver wrapper can just directly create a SQL Driver and delegate to it. Return a java.sql.Connection wrapper of your own devising on Connect. A ConnectionWrapper can just wrap the Connection your Oracle driver gives you, and all it does specially is set autocommit to true. Don't forget Eclipse can write delegates for you. Add a member you need to delegate to, then select it and right click, Source -> Add Delegate Methods. This is great when you get paid by the line ;-) Bada-bing, Bada-boom, JUnit out of the box J2EE testing. Your problem is probably amenable to the same thing, with JUnit crossed out and FatClient written in crayon. My FatClient uses RMI generated with xdoclet to talk to the J2EE server, so I don't have your problem.
Accessing Datasource from Outside A Web Container (through JNDI)
I'm trying to access a data source that is defined within a web container (JBoss) from a fat client outside the container. I've decided to look up the data source through JNDI. Actually, my persistence framework (Ibatis) does this. When performing queries I always end up getting this error: java.lang.IllegalAccessException: Method=public abstract java.sql.Connection java.sql.Statement.getConnection() throws java.sql.SQLException does not return Serializable Stacktrace: org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.doStatementMethod(WrapperDataSourceS ervice.java:411), org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.invoke(WrapperDataSourceService.java :223), sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source), sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25), java.lang.reflect.Method.invoke(Method.java:585), org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155), org.jboss.mx.server.Invocation.dispatch(Invocation.java:94), org.jboss.mx.server.Invocation.invoke(Invocation.java:86), org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264), org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659), My Datasource: <?xml version="1.0" encoding="UTF-8"?> <datasources> <local-tx-datasource> <jndi-name>jdbc/xxxxxDS</jndi-name> <connection-url>jdbc:oracle:thin:@xxxxxxxxx:1521:xxxxxxx</connection-url> <use-java-context>false</use-java-context> <driver-class>oracle.jdbc.driver.OracleDriver</driver-class> <user-name>xxxxxxxx</user-name> <password>xxxxxx</password> <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name> <min-pool-size>5</min-pool-size> <max-pool-size>20</max-pool-size> </local-tx-datasource> </datasources> Does anyone have a clue where this could come from? Maybe someone even knows a better way how to achieve this. Any hints are much appreciated! Cheers, Michael
[ "Not sure if this is the same issue?\nJBoss DataSource config\n\nDataSource wrappers are not usable outside of the server VM\n\n", "@Michael Well, java.sql.Connection is an Interface - it might technically be possible for the concrete implementation you're getting from JBoss to be Serializable - but I don't think you're really going to have any options you can use. If it was possible, it would probably be easy :)\nI think @toolkit might have said the right words with useable outside the VM - the JDBC drivers will be talking to native driver code running in the underlying OS I guess, so that might explain why you can't just pass a connection over the network elsewhere.\nMy advice, (if you don't get any better advice!) would be to find a different approach - if you have access to locate the resource on the JBoss directory, maybe implement a proxy object that you can locate and obtain from the directory that allows you to use the connection remotely from your fat client. That's a design pattern called data transfer object I think Wikipedia entry\n", "I think the exception indicates that the SQLConnection object you're trying to retrieve doesn't implement the Serializable interface, so it can't be passed to you the way you asked for it.\nFrom the limited work I've done with JDNI, if you're asking for an object via JNDI it must be serializable. As far as I know, there's no way round that - if I think of a better way I'll post it up...\nOK, one obvious option is to provide a serializable object local to the datasource that uses it but doesn't have the datasource as part of its serializable object graph. The fat client could then look up that object and query it instead.\nOr create a (web?) service through which to access the datasource is governed - again your fat client would hit the service - this would probably be better encapsulated and more reuseable approach if those are concerns for you.\n", "@toolkit:\nWell, not exactly. Since I can access the data source over JNDI, it is actually visible and thus usable. \nOr am I getting something totally wrong?\n@Brabster:\nI think you're on the right track. Isn't there a way to make the connection serializable? Maybe it's just a configuration issue...\n", "I've read up on Ibatis now - maybe you can make your implementations of Dao etc. Serializable, post them into your directory and so retrieve them and use them in your fat client? You'd get reuse benefits out of that too.\nHere's an example of something looks similar for Wicket\n", "JBoss wraps up all DataSources with it's own ones.\nThat lets it play tricks with autocommit to get the specified J2EE behaviour out of a JDBC connection. They are mostly serailizable. But you needn't trust them.\nI'd look carefully at it's wrappers. I've written a surrogate for JBoss's J2EE wrappers wrapper for JDBC that works with OOCJNDI to get my DAO code unit test-able standalone.\n\nYou just wrap java.sql.Driver, point OOCJNDI at your class, and run in JUnit.\nThe Driver wrapper can just directly create a SQL Driver and delegate to it.\nReturn a java.sql.Connection wrapper of your own devising on Connect.\nA ConnectionWrapper can just wrap the Connection your Oracle driver gives you,\nand all it does special is set Autocommit true.\nDon't forget Eclipse can wrt delgates for you. Add a member you need to delegate to , then select it and right click, source -=>add delgage methods. 
\nThis is great when you get paid by the line ;-)\nBada-bing, Bada-boom, JUnit out of the box J2EE testing.\nYour problem is probably amenable to the same thing, with JUnit crossed out and FatClient written in crayon.\nMy FatClient uses RMI generated with xdoclet to talk to the J2EE server, so I don't have your problem.\n" ]
[ 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "datasource", "ibatis", "jboss", "jndi" ]
stackoverflow_0000039053_datasource_ibatis_jboss_jndi.txt
Q: HelpInsight documentation in Delphi 2007 I am using D2007 and am trying to document my source code, using the HelpInsight feature (provided since D2005). I am mainly interested in getting the HelpInsight tool-tips working. From various Web-surfing and experimentation I have found the following: Using the triple slash (///) comment style works more often than the other documented comment styles. i.e.: {*! comment *} and {! comment } The comments must precede the declaration that they are for. For most cases this will mean placing them in the interface section of the code. (The obvious exception is for types and functions that are not accessible from outside the current unit and are therefore declared in the implementation block.) The first comment cannot be for a function. (i.e. it must be for a type - or at least it appears the parser must have seen the "type" keyword before the HelpInsight feature works) Despite following these "rules", sometimes HelpInsight just doesn't find the comments I've written. One file does not produce the correct HelpInsight tool-tips, but if I include this file in a different dummy project, it works properly. Does anyone have any other pointers / tricks for getting HelpInsight to work? A: I have discovered another caveat (which in my case was what was "wrong"). It appears that the unit with the HelpInsight comments must be explicitly added to the project. It is not sufficient to simply have the unit in a path that is searched when compiling the project. In other words, the unit must be included in the Project's .dpr / .dproj file. (Using the Project | "Add to Project" menu option)
HelpInsight documentation in Delphi 2007
I am using D2007 and am trying to document my source code, using the HelpInsight feature (provided since D2005). I am mainly interested in getting the HelpInsight tool-tips working. From various Web-surfing and experimentation I have found the following: Using the triple slash (///) comment style works more often than the other documented comment styles. i.e.: {*! comment *} and {! comment } The comments must precede the declaration that they are for. For most cases this will mean placing them in the interface section of the code. (The obvious exception is for types and functions that are not accessible from outside the current unit and are therefore declared in the implementation block.) The first comment cannot be for a function. (i.e. it must be for a type - or at least it appears the parser must have seen the "type" keyword before the HelpInsight feature works) Despite following these "rules", sometimes HelpInsight just doesn't find the comments I've written. One file does not produce the correct HelpInsight tool-tips, but if I include this file in a different dummy project, it works properly. Does anyone have any other pointers / tricks for getting HelpInsight to work?
[ "I have discovered another caveat (which in my case was what was \"wrong\")\nIt appears that the unit with the HelpInsight comments must be explicitly added to the project. It is not sufficient to simply have the unit in a path that is searched when compiling the project.\nIn other words, the unit must be included in the Project's .dpr / .dproj file. (Using the Project | \"Add to Project\" menu option)\n" ]
[ 4 ]
[]
[]
[ "delphi", "documentation", "ndoc" ]
stackoverflow_0000053198_delphi_documentation_ndoc.txt
Q: Dynamic linq:Creating an extension method that produces JSON result I'm stuck trying to create a dynamic linq extension method that returns a string in JSON format - I'm using System.Linq.Dynamic and Newtonsoft.Json and I can't get the Linq.Dynamic to parse the "cell=new object[]" part. Perhaps too complex? Any ideas? : My Main method: static void Main(string[] args) { NorthwindDataContext db = new NorthwindDataContext(); var query = db.Customers; string json = JSonify<Customer> .GetJsonTable( query, 2, 10, "CustomerID" , new string[] { "CustomerID", "CompanyName", "City", "Country", "Orders.Count" }); Console.WriteLine(json); } JSonify class public static class JSonify<T> { public static string GetJsonTable( this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames) { string selectItems = String.Format(@" new { {{0}} as ID, cell = new object[]{{{1}}} }", IDColumnName, String.Join(",", columnNames)); var items = new { page = pageNumber, total = query.Count(), rows = query .Select(selectItems) .Skip(pageNumber * pageSize) .Take(pageSize) }; return JavaScriptConvert.SerializeObject(items); // Should produce this result: // { // "page":2, // "total":91, // "rows": // [ // {"ID":"FAMIA","cell":["FAMIA","Familia Arquibaldo","Sao Paulo","Brazil",7]}, // {"ID":"FISSA","cell":["FISSA","FISSA Fabrica Inter. Salchichas S.A.","Madrid","Spain",0]}, // {"ID":"FOLIG","cell":["FOLIG","Folies gourmandes","Lille","France",5]}, // {"ID":"FOLKO","cell":["FOLKO","Folk och fä HB","Bräcke","Sweden",19]}, // {"ID":"FRANK","cell":["FRANK","Frankenversand","München","Germany",15]}, // {"ID":"FRANR","cell":["FRANR","France restauration","Nantes","France",3]}, // {"ID":"FRANS","cell":["FRANS","Franchi S.p.A.","Torino","Italy",6]}, // {"ID":"FURIB","cell":["FURIB","Furia Bacalhau e Frutos do Mar","Lisboa","Portugal",8]}, // {"ID":"GALED","cell":["GALED","Galería del gastrónomo","Barcelona","Spain",5]}, // {"ID":"GODOS","cell":["GODOS","Godos Cocina Típica","Sevilla","Spain",10]} // ] // } } } A: This is really ugly and there may be some issues with the string replacement, but it produces the expected results: public static class JSonify { public static string GetJsonTable<T>( this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames) { string select = string.Format("new ({0} as ID, \"CELLSTART\" as CELLSTART, {1}, \"CELLEND\" as CELLEND)", IDColumnName, string.Join(",", columnNames)); var items = new { page = pageNumber, total = query.Count(), rows = query.Select(select).Skip((pageNumber - 1) * pageSize).Take(pageSize) }; string json = JavaScriptConvert.SerializeObject(items); json = json.Replace("\"CELLSTART\":\"CELLSTART\",", "\"cell\":["); json = json.Replace(",\"CELLEND\":\"CELLEND\"", "]"); foreach (string column in columnNames) { json = json.Replace("\"" + column + "\":", ""); } return json; } } A: Thanks for the quick response. However, note the required output does not have property names in the "cell" array ( that's why I was using object[]): "cell":["FAMIA","Familia Arquibaldo",... vs. "cell":{"CustomerID":"FAMIA","CompanyName","Familia Arquibaldo",... The result is meant to be used with a JQuery grid called "flexify" which requires the output in this format. 
A: static void Main(string[] args) { NorthwindDataContext db = new NorthwindDataContext(); var query = db.Customers; string json = query.GetJsonTable<Customer>(2, 10, "CustomerID", new string[] {"CustomerID", "CompanyName", "City", "Country", "Orders.Count" }); } public static class JSonify { public static string GetJsonTable<T>( this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames) { string select = string.Format("new ({0} as ID, new ({1}) as cell)", IDColumnName, string.Join(",", columnNames)); var items = new { page = pageNumber, total = query.Count(), rows = query.Select(select).Skip((pageNumber - 1) * pageSize).Take(pageSize) }; return JavaScriptConvert.SerializeObject(items); } }
Dynamic linq:Creating an extension method that produces JSON result
I'm stuck trying to create a dynamic linq extension method that returns a string in JSON format - I'm using System.Linq.Dynamic and Newtonsoft.Json and I can't get the Linq.Dynamic to parse the "cell=new object[]" part. Perhaps too complex? Any ideas? : My Main method: static void Main(string[] args) { NorthwindDataContext db = new NorthwindDataContext(); var query = db.Customers; string json = JSonify<Customer> .GetJsonTable( query, 2, 10, "CustomerID" , new string[] { "CustomerID", "CompanyName", "City", "Country", "Orders.Count" }); Console.WriteLine(json); } JSonify class public static class JSonify<T> { public static string GetJsonTable( this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames) { string selectItems = String.Format(@" new { {{0}} as ID, cell = new object[]{{{1}}} }", IDColumnName, String.Join(",", columnNames)); var items = new { page = pageNumber, total = query.Count(), rows = query .Select(selectItems) .Skip(pageNumber * pageSize) .Take(pageSize) }; return JavaScriptConvert.SerializeObject(items); // Should produce this result: // { // "page":2, // "total":91, // "rows": // [ // {"ID":"FAMIA","cell":["FAMIA","Familia Arquibaldo","Sao Paulo","Brazil",7]}, // {"ID":"FISSA","cell":["FISSA","FISSA Fabrica Inter. Salchichas S.A.","Madrid","Spain",0]}, // {"ID":"FOLIG","cell":["FOLIG","Folies gourmandes","Lille","France",5]}, // {"ID":"FOLKO","cell":["FOLKO","Folk och fä HB","Bräcke","Sweden",19]}, // {"ID":"FRANK","cell":["FRANK","Frankenversand","München","Germany",15]}, // {"ID":"FRANR","cell":["FRANR","France restauration","Nantes","France",3]}, // {"ID":"FRANS","cell":["FRANS","Franchi S.p.A.","Torino","Italy",6]}, // {"ID":"FURIB","cell":["FURIB","Furia Bacalhau e Frutos do Mar","Lisboa","Portugal",8]}, // {"ID":"GALED","cell":["GALED","Galería del gastrónomo","Barcelona","Spain",5]}, // {"ID":"GODOS","cell":["GODOS","Godos Cocina Típica","Sevilla","Spain",10]} // ] // } } }
[ "This is really ugly and there may be some issues with the string replacement, but it produces the expected results:\npublic static class JSonify\n{\n public static string GetJsonTable<T>(\n this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames)\n {\n string select = string.Format(\"new ({0} as ID, \\\"CELLSTART\\\" as CELLSTART, {1}, \\\"CELLEND\\\" as CELLEND)\", IDColumnName, string.Join(\",\", columnNames));\n var items = new\n {\n page = pageNumber,\n total = query.Count(),\n rows = query.Select(select).Skip((pageNumber - 1) * pageSize).Take(pageSize)\n };\n string json = JavaScriptConvert.SerializeObject(items);\n json = json.Replace(\"\\\"CELLSTART\\\":\\\"CELLSTART\\\",\", \"\\\"cell\\\":[\");\n json = json.Replace(\",\\\"CELLEND\\\":\\\"CELLEND\\\"\", \"]\");\n foreach (string column in columnNames)\n {\n json = json.Replace(\"\\\"\" + column + \"\\\":\", \"\");\n }\n return json;\n }\n} \n\n", "Thanks for the quick response.\nHowever, note the required output does not have property names in the \"cell\" array ( that's why I was using object[]):\n\"cell\":[\"FAMIA\",\"Familia Arquibaldo\",... \nvs. \n\"cell\":{\"CustomerID\":\"FAMIA\",\"CompanyName\",\"Familia Arquibaldo\",...\nThe result is meant to be used with a JQuery grid called \"flexify\" which requires the output in this format. \n", "static void Main(string[] args)\n{\n NorthwindDataContext db = new NorthwindDataContext();\n var query = db.Customers;\n string json = query.GetJsonTable<Customer>(2, 10, \"CustomerID\", new string[] {\"CustomerID\", \"CompanyName\", \"City\", \"Country\", \"Orders.Count\" });\n } \n\npublic static class JSonify\n{\n public static string GetJsonTable<T>(\n this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames)\n {\n string select = string.Format(\"new ({0} as ID, new ({1}) as cell)\", IDColumnName, string.Join(\",\", columnNames));\n var items = new\n {\n page = pageNumber,\n total = query.Count(),\n rows = query.Select(select).Skip((pageNumber - 1) * pageSize).Take(pageSize)\n };\n return JavaScriptConvert.SerializeObject(items);\n }\n} \n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "c#", "json", "linq" ]
stackoverflow_0000050251_c#_json_linq.txt
Q: Formatting Stored Procedures I currently work with an Oracle database and we use stored procedures for all our SQL queries. The problem I have is that we do not really have a coding standard for our packages. So what happens is that every developer has a different style (or in some cases no sense of style) in how they format their packages, making them difficult to read and work on without first reformatting. We all pretty much just use Notepad or Notepad2 to write our packages. I am unfortunately not in a position to mandate a coding standard and enforce it (just a code monkey at this point) so I was hoping to find a free SQL code formatter that I can use myself, and possibly suggest to others on the team to use, to make my life easier. I have considered writing a small application that would essentially take a file as input and reformat everything, but before I did this I figured I would ask if anyone knew of such a tool that is already available and is free. So does anyone know of any such tools available? A: There is a free one online sqlformatter, also SQLinForm, personally I use TOAD and have done since before it was bought by Quest (10 years?) A: VIM script Aqua Data studio $ I use this one all the time. A: I like TOAD for Oracle. It has a format feature that's decent. I see there's a freeware version, though I have not used it. A: Toad for Oracle nicest, most mature $$$ http://www.toadsoft.com Toad for Oracle, free version free this will do what you want limitations are related to number of connections, size of data mods, etc. http://www.toadsoft.com Oracle SQL Developer (up and coming, free!) free from Oracle cross platform http://www.oracle.com/technology/products/database/sql_developer A: I had the exact same experience from Day One working with Oracle stored procedures - "I have to use NOTEPAD?! Oh HELL no." So I hopped on the internets and what I found were people saying "Hey, I have to create stored procedures in Oracle, isn't there anything better than NOTEPAD?!" And the canonical answer was: "Download TOAD, you'll be glad you did". So I followed their advice, was very happy with it, and I'm pleased (if a bit amazed) to see it is still a popular answer.
Formatting Stored Procedures
I currently work with an Oracle database and we use stored procedures for all our SQL queries. The problem I have is that we do not really have a coding standard for our packages. So what happens is that every developer has a different style (or in some cases no sense of style) in how they format their packages, making them difficult to read and work on without first reformatting. We all pretty much just use Notepad or Notepad2 to write our packages. I am unfortunately not in a position to mandate a coding standard and enforce it (just a code monkey at this point) so I was hoping to find a free SQL code formatter that I can use myself, and possibly suggest to others on the team to use, to make my life easier. I have considered writing a small application that would essentially take a file as input and reformat everything, but before I did this I figured I would ask if anyone knew of such a tool that is already available and is free. So does anyone know of any such tools available?
[ "There is a free one online sqlformatter, also SQLinForm, personally i use TOAD and have done since before it was bought by Quest (10 years?)\n", "\nVIM script \nAqua Data studio $ I use this one all the time.\n\n", "I like TOAD for Oracle. It has a format feature that's decent. I see there's a freeware version, though I have not used it.\n", "Toad for Oracle\n\nnicest, most mature\n$$$\nhttp://www.toadsoft.com\n\nToad for Oracle, free version\n\nfree\nthis will do what you want\nlimitations are related to number of connections, size of data mods, etc.\nhttp://www.toadsoft.com\n\nOracle SQL Developer (up and coming, free!)\n\nfree\nfrom Oracle\ncross platform\nhttp://www.oracle.com/technology/products/database/sql_developer\n\n", "I had the exact same experience from Day One working with Oracle stored procedures - \"I have to use NOTEPAD?! Oh HELL no.\"\nSo I hopped on the internets and what I found were people saying \"Hey, I have to create stored procedures in Oracle, isn't there anything better than NOTEPAD?!\"\nAnd the canonical answer was: \"Download TOAD, you'll be glad you did\". So I followed their advice, was very happy with it, and I'm pleased (if a bit amazed) to see it is still a popular answer.\n" ]
[ 2, 1, 1, 1, 1 ]
[]
[]
[ "oracle", "plsql" ]
stackoverflow_0000033903_oracle_plsql.txt
Q: What is the possible mimetype hierarchy of an email message? I'm working with a snippet of code that recursively calls itself and tries to pull out a MIME Type part of text/html from an email (if it exists) for further processing. The "text/html" could exist inside other content such as multipart/alternative, so I'm trying to find out if there is a defined hierarchy for email MIME Types. Anybody know if there is and what it is? i.e. what types can parent other types? A: In theory, only multipart/ and message/ can parent other types (per RFC2046). A: Your question assumes that mail clients follow the RFC standards for MIME encoding, which they don't. I'd advise you collect a bunch of mail from sources and try and process it as-it-exists. The problem you are facing is extremely difficult (perhaps impossible) to solve 100%.
What is the possible mimetype hierarchy of an email message?
I'm working with a snippet of code that recursively calls itself and tries to pull out a MIME Type part of text/html from an email (if it exists) for further processing. The "text/html" could exist inside other content such as multipart/alternative, so I'm trying to find out if there is a defined hierarchy for email MIME Types. Anybody know if there is and what it is? i.e. what types can parent other types?
[ "In theory, only multipart/ and message/ can parent other types (per RFC2046).\n", "Your question assumes that mail clients follow the RFC standards for MIME encoding, which they don't. I'd advise you collect a bunch of mail from sources and try and process it as-it-exists. The problem you are facing is extremely difficult (perhaps impossible) to solve 100%.\n" ]
[ 1, 0 ]
[]
[]
[ "email", "email_validation", "html_email", "mime_types" ]
stackoverflow_0000055460_email_email_validation_html_email_mime_types.txt
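To make the RFC 2046 rule above concrete, a typical multipart message nests like this (an illustrative layout, not taken from the thread):

    multipart/mixed
      multipart/alternative
        text/plain
        text/html
      application/pdf (attachment)

So a recursive search for text/html only ever needs to descend into multipart/* parts (and message/rfc822 for forwarded messages); every other type is a leaf - though, as the second answer warns, real-world mail does not always follow the RFCs.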
Q: Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)? I want to link to a bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe having a red border). Naturally, there would be multiple items bookmarked, so that if someone clicked on #bookmark2 then that other area would be highlighted. I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: - I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it obvious I'd like to actually highlight the page being linked to. A: In your css you need to define a.highlight {border:1px solid red;} or something similar Then using jQuery, $(document).ready ( function () { //Work as soon as the DOM is ready for parsing var id = location.hash.substr(1); //Get the word after the hash from the url if (id) $('#'+id).addClass('highlight'); // add class highlight to element whose id is the word after the hash }); To highlight the targets on mouse over also add: $("a[href^='#']") .mouseover(function() { var id = $(this).attr('href').substr(1); $('#'+id).addClass('highlight'); }) .mouseout(function() { var id = $(this).attr('href').substr(1); $('#'+id).removeClass('highlight'); }); A: You can also use the target pseudo-class in CSS: <html> <head> <style type="text/css"> div#test:target { background-color: yellow; } </style> </head> <body> <p><b><a href="#test">Link</a></b></p> <div id="test"> Target </div> </body> </html> Unfortunately the target pseudo-class isn't supported by IE or Opera, so if you're looking for a universal solution here this might not be sufficient. A: Use your favorite JS toolkit to add a "highlight" (or whatever) class to the item containing (or contained in) the anchor. Something like: jQuery(location.hash).addClass('highlight'); Of course, you'd need to call that onready or click if you want it triggered by other links on the page, and you'll want to have the .highlight class defined. You could also make your jQuery selector traverse up or down depending on the container you want highlighted. A: I guess you could store this information with JavaScript and cookies to remember the bookmarks, and even add a splash of Ajax if you wanted to interact with a database. CSS would only be able to do styling. You would have to give the bookmarked anchor a class found in your CSS file. CSS also has the a:visited selector which is used for styling links found in the browser's history.
Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)?
I want to link to a bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe having a red border). Naturally, there would be multiple items bookmarked, so that if someone clicked on #bookmark2 then that other area would be highlighted. I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: - I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it obvious I'd like to actually highlight the page being linked to.
[ "In your css you need to define \na.highlight {border:1px solid red;}\n\nor something similar\nThen using jQuery, \n$(document).ready ( function () { //Work as soon as the DOM is ready for parsing\n var id = location.hash.substr(1); //Get the word after the hash from the url\n if (id) $('#'+id).addClass('highlight'); // add class highlight to element whose id is the word after the hash\n});\n\nTo highlight the targets on mouse over also add:\n$(\"a[href^='#']\")\n .mouseover(function() {\n var id = $(this).attr('href').substr(1);\n $('#'+id).addClass('highlight');\n })\n .mouseout(function() {\n var id = $(this).attr('href').substr(1);\n $('#'+id).removeClass('highlight');\n });\n\n", "You can also use the target pseudo-class in CSS:\n<html>\n<head>\n\n<style type=\"text/css\">\ndiv#test:target {\n background-color: yellow;\n}\n</style>\n\n</head>\n<body>\n\n<p><b><a href=\"#test\">Link</a></b></p>\n\n<div id=\"test\">\nTarget\n</div>\n\n</body>\n</html>\n\nUnfortunately the target pseudo class isn't supported by IE or Opera, so if you're looking for a universal solution here this might not be sufficient.\n", "Use your favorite JS toolkit to add a \"highlight\" (or whatever) class to the item containing (or contained in) the anchor.\nSomething like:\njQuery(location.hash).addClass('highlight');\n\nOf course, you'd need to call that onready or click if you want it triggered by other links on the page, and you'll want to have the .highlight class defined. You could also make your jQuery selector traverse up or down depending on the container you want highlighted.\n", "I guess if you could store this information with JavaScript and cookies for the functionality of remembering the bookmarks and even add a splash of Ajax if you wanted to interact with a database.\nCSS would only be able to do styling. You would have to give the bookmarked anchor a class found in your CSS file.\nCSS also has the a:visited selector which is used for styling links found in the browser's history.\n" ]
[ 10, 6, 2, 0 ]
[]
[]
[ "css", "javascript", "jquery" ]
stackoverflow_0000054237_css_javascript_jquery.txt
Q: How can I indicate that multiple versions of a dependent assembly are okay? Assemblies A and B are privately deployed and strongly named. Assembly A contains references to Assembly B. There are two versions of Assembly B: B1 and B2. I want to be able to indicate for Assembly A that it may bind to either B1 or B2 -- ideally, by incorporating this information into the assembly itself. What are my options? I'm somewhat familiar with versioning policy and the way it applies to the GAC, but I don't want to be dependent on these assemblies being in the GAC. A: There are several places you can indicate to the .Net Framework that a specific version of a strongly named assembly should be preferred over another. These are: Publisher Policy file machine.config file app.config file All these methods utilise the "<bindingRedirect>" element which can instruct the .Net Framework to bind a version or range of versions of an assembly to a specific version. Here is a short example of the tag in use to bind all versions of an assembly up until version 2.0 to version 2.5: <assemblyBinding> <dependentAssembly> <assemblyIdentity name="foo" publicKeyToken="00000000000" culture="neutral" /> <bindingRedirect oldVersion="0.0.0.0 - 2.0.0.0" newVersion="2.5.0.0" /> </dependentAssembly> </assemblyBinding> There are lots of details so it's best if you read about Redirecting Assembly Versions on MSDN to decide which method is best for your case. A: You can set version policy in your app.config file. Alternatively you can manually load these assemblies with a call to Assembly.LoadFrom(); when this is done, the assembly version is not considered.
How can I indicate that multiple versions of a dependent assembly are okay?
Assemblies A and B are privately deployed and strongly named. Assembly A contains references to Assembly B. There are two versions of Assembly B: B1 and B2. I want to be able to indicate for Assembly A that it may bind to either B1 or B2 -- ideally, by incorporating this information into the assembly itself. What are my options? I'm somewhat familiar with versioning policy and the way it applies to the GAC, but I don't want to be dependent on these assemblies being in the GAC.
[ "There are several places you can indicate to the .Net Framework that a specific version of a strongly typed library should be preferred over another. These are:\n\nPublisher Policy file\nmachine.config file\napp.config file\n\nAll these methods utilise the \"<bindingRedirect>\" element which can instruct the .Net Framework to bind a version or range of versions of an assembly to a specific version.\nHere is a short example of the tag in use to bind all versions of an assembly up until version 2.0 to version 2.5:\n<assemblyBinding>\n <dependantAssembly>\n <assemblyIdentity name=\"foo\" publicKeyToken=\"00000000000\" culture=\"neutral\" />\n <bindingRedirect oldVersion=\"0.0.0.0 - 2.0.0.0\" newVersion=\"2.5.0.0\" />\n </dependantAssembly>\n</assemblyBinding>\n\nThere are lots of details so it's best if you read about Redirecting Assembly Versions on MSDN to decide which method is best for your case.\n", "You can set version policy in your app.config file. Alternatively you can manually load these assemblies with a call to Assembly.LoadFrom() when this is done assembly version is not considered.\n" ]
[ 2, 1 ]
[]
[]
[ ".net", "versioning" ]
stackoverflow_0000054546_.net_versioning.txt
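A hedged C# sketch of the Assembly.LoadFrom route from the second answer above: hooking AppDomain.AssemblyResolve redirects a failed bind for B (e.g. A was compiled against B1 but only B2 is deployed) to whatever copy is actually present. The assembly name "AssemblyB" and the path are illustrative assumptions, not from the original thread.

using System;
using System.IO;
using System.Reflection;

static class BindRedirectSketch
{
    public static void Install()
    {
        // AssemblyResolve only fires after normal binding fails,
        // which is exactly the B1-vs-B2 version-mismatch case.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);
            if (requested.Name != "AssemblyB")      // hypothetical assembly name
                return null;                        // let other binds proceed normally

            // LoadFrom ignores the version mismatch that triggered the event.
            string path = Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, "AssemblyB.dll");
            return File.Exists(path) ? Assembly.LoadFrom(path) : null;
        };
    }
}

Call Install() early, before any type from B is touched; the config-based <bindingRedirect> remains the cleaner option when shipping a config file is acceptable.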
Q: What's a clean/simple way to ensure the security of a page? Supposing you have a form that collects and submits sensitive information and you want to ensure it is never accessed via insecure (non-HTTPS) means, how might you best go about enforcing that policy? A: If you're running Apache, you can put a RewriteRule in your .htaccess, like so: RewriteCond %{HTTPS} "off" RewriteRule /mypage.html https://example.com/mypage.html A: I think the most bullet-proof solution is to keep the code inside your SSL document root only. This will ensure that you (or another developer in the future) can't accidentally link to a non-secure version of the form. If you have the form on both HTTP and HTTPS, you might not even notice if the wrong one gets used inadvertently. If this isn't doable, then I would take at least two precautions. Do the Apache URL rewriting, and have a check in your code to make sure the session is encrypted - check the HTTP headers. A: Take a look at this: http://www.dotnetmonster.com/Uwe/Forum.aspx/asp-net/75369/Enforcing-https Edit: This shows solutions from an IIS point of view, but you should be able to configure just about any web server for this. A: In IIS? Go to security settings and hit "Require secure connection". Alternately, you can check the server variables in page load and redirect to the secure page. A: I'd suggest looking at the request in the code that renders the form, and if it is not using SSL, issue a redirect to the https URL. You could also use a rewrite rule in Apache to redirect the user. Or, you could just not serve up the page via HTTP, and keep it only in the document root of your HTTPS site.
What's a clean/simple way to ensure the security of a page?
Supposing you have a form that collects and submits sensitive information and you want to ensure it is never accessed via insecure (non-HTTPS) means, how might you best go about enforcing that policy?
[ "If you're running Apache, you can put a RewriteRule in your .htaccess, like so:\nRewriteCond %{HTTPS} \"off\"\nRewriteRule /mypage.html https://example.com/mypage.html\n\n", "I think the most bullet-proof solution is to keep the code inside your SSL document root only. This will ensure that you (or another developer in the future) can't accidentally link to a non-secure version of the form. If you have the form on both HTTP and HTTPS, you might not even notice if the wrong one gets used inadvertently.\nIf this isn't doable, then I would take at least two precautions. Do the Apache URL rewriting, and have a check in your code to make sure the session is encrypted - check the HTTP headers.\n", "Take a look at this: http://www.dotnetmonster.com/Uwe/Forum.aspx/asp-net/75369/Enforcing-https\nEdit: This shows solutions from an IIS point of view, but you should be able to configure about any web server for this.\n", "In IIS? Go to security settings and hit \"Require secure connection\". Alternately, you can check the server variables in page load and redirect to the secure page.\n", "I'd suggest looking at the request in the code that renders the form, and if it is not using SSL, issue a redirect to the https URL.\nYou could also use a rewite rule in Apache to redirect the user.\nOr, you could just not serve up the page via HTTP, and keep it only in the document root of your HTTPS site.\n" ]
[ 5, 3, 1, 1, 1 ]
[]
[]
[ "forms", "https", "security", "ssl" ]
stackoverflow_0000055010_forms_https_security_ssl.txt
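For the ASP.NET variant hinted at in the last two answers ("check the server variables in page load and redirect"), a minimal C# sketch; the SecurePage class name is an assumption, and any page that must never be served over plain HTTP would inherit from it.

using System;
using System.Web.UI;

// Hypothetical base class for HTTPS-only pages.
public class SecurePage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        if (!Request.IsSecureConnection)
        {
            // Rebuild the current URL with the https scheme and bounce.
            string secureUrl = "https://" + Request.Url.Host + Request.RawUrl;
            Response.Redirect(secureUrl, true); // true ends the current response
        }
    }
}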
Q: Find the settings JNDI is using for error reporting I've got a J2SE application that I am maintaining that uses JNDI. (It uses JNDI to find its J2EE application server.) It has pretty poor error reporting of failure to find the JNDI server. I've been looking around for a way to display which server the InitialContext is trying to talk to. Has anyone got a neat way to do this? A: Reporting the value for InitialContext.getEnvironment().get(Context.PROVIDER_URL) might be helpful.
Find the settings JNDI is using for error reporting
I've got a J2SE application that I am maintaining that uses JNDI. (It uses JNDI to find its J2EE application server.) It has pretty poor error reporting of failure to find the JNDI server. I've been looking around for a way to display which server the InitialContext is trying to talk to. Has anyone got a neat way to do this?
[ "Reporting the value for InitialContext.getEnvironment().get(Context.PROVIDER_URL) might be helpful.\n" ]
[ 1 ]
[]
[]
[ "java", "jndi" ]
stackoverflow_0000055487_java_jndi.txt
Q: Regex's For Developers I've been trying to figure out a regex to allow me to search for a particular string while automatically skipping comments. Anyone have an RE like this or know of one? It doesn't even need to be sophisticated enough to skip #if 0 blocks; I just want it to skip over // and /* blocks. The converse, that is only search inside comment blocks, would be very useful too. Environment: VS 2003 A: This is a harder problem than it might at first appear, since you need to consider comment tokens inside strings, comment tokens that are themselves commented out etc. I wrote a string and comment parser for C#, let me see if I can dig out something that will help... I'll update if I find anything. EDIT: ... ok, so I found my old 'codemasker' project. Turns out that I did this in stages, not with a single regex. Basically I inch through a source file looking for start tokens, when I find one I then look for an end-token and mask everything in between. This takes into account the context of the start token... if you find a token for "string start" then you can safely ignore comment tokens until you find the end of the string, and vice versa. Once the code is masked (I used guids as masks, and a hashtable to keep track) then you can safely do your search and replace, then finally restore the masked code. Hope that helps. A: Be especially careful with strings. Strings often have escape sequences which you also have to respect while you're finding the end of them. So e.g. "This is \"a test\"". You cannot blindly look for a double-quote to terminate. Also beware of "This is \\", which shows that you cannot just say "unless double-quote is preceded by a backslash." In summary, make some brutal unit tests! A: A regexp is not the best tool for the job. Perl FAQ: C comments: #!/usr/bin/perl $/ = undef; $_ = <>; s#/\*[^*]*\*+([^/*][^*]*\*+)*/|([^/"']*("[^"\\]*(\\[\d\D][^"\\]*)*"[^/"']*|'[^'\\]*(\\[\d\D][^'\\]*)*'[^/"']*|/+[^*/][^/"']*)*)#$2#g; print; C++ comments: #!/usr/local/bin/perl $/ = undef; $_ = <>; s#//(.*)|/\*[^*]*\*+([^/*][^*]*\*+)*/|"(\\.|[^"\\])*"|'(\\.|[^'\\])*'|[^/"']+# $1 ? "/*$1 */" : $& #ge; print; A: I would make a copy and strip out the comments first, then search the string the regular way.
Regex's For Developers
I've been trying to figure out a regex to allow me to search for a particular string while automatically skipping comments. Anyone have an RE like this or know of one? It doesn't even need to be sophisticated enough to skip #if 0 blocks; I just want it to skip over // and /* blocks. The converse, that is only search inside comment blocks, would be very useful too. Environment: VS 2003
[ "This is a harder problem than it might at first appear, since you need to consider comment tokens inside strings, comment tokens that are themselves commented out etc.\nI wrote a string and comment parser for C#, let me see if I can dig out something that will help... I'll update if I find anything.\nEDIT:\n... ok, so I found my old 'codemasker' project. Turns out that I did this in stages, not with a single regex. Basically I inch through a source file looking for start tokens, when I find one I then look for an end-token and mask everything in between. This takes into account the context of the start token... if you find a token for \"string start\" then you can safely ignore comment tokens until you find the end of the string, and vice versa. Once the code is masked (I used guids as masks, and a hashtable to keep track) then you can safely do your search and replace, then finally restore the masked code.\nHope that helps.\n", "Be especially careful with strings. Strings often have escape sequences which you also have to respect while you're finding the end of them.\nSo e.g. \"This is \\\"a test\\\"\". You cannot blindly look for a double-quote to terminate. Also beware of ``\"This is \\\"`, which shows that you cannot just say \"unless double-quote is preceded by a backslash.\"\nIn summary, make some brutal unit tests!\n", "A regexp is not the best tool for the job.\nPerl FAQ:\nC comments:\n#!/usr/bin/perl\n$/ = undef;\n$_ = <>; \n\ns#/\\*[^*]*\\*+([^/*][^*]*\\*+)*/|([^/\"']*(\"[^\"\\\\]*(\\\\[\\d\\D][^\"\\\\]*)*\"[^/\"']*|'[^'\\\\]*(\\\\[\\d\\D][^'\\\\]*)*'[^/\"']*|/+[^*/][^/\"']*)*)#$2#g;\nprint; \n\nC++ comments:\n#!/usr/local/bin/perl\n$/ = undef;\n$_ = <>;\n\ns#//(.*)|/\\*[^*]*\\*+([^/*][^*]*\\*+)*/|\"(\\\\.|[^\"\\\\])*\"|'(\\\\.|[^'\\\\])*'|[^/\"']+# $1 ? \"/*$1 */\" : $& #ge;\nprint;\n\n", "I would make a copy and strip out the comments first, then search the string the regular way.\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "c++", "regex", "utilities", "visual_c++" ]
stackoverflow_0000054047_c++_regex_utilities_visual_c++.txt
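A rough C# counterpart to the last answer's "make a copy and strip out the comments first, then search" approach. Like the Perl one-liners above, it matches string literals first so comment tokens inside them survive; it is a sketch, not a full C/C++ lexer (no trigraphs, line continuations, etc.).

using System.Text.RegularExpressions;

static class CommentStripper
{
    // Alternation order matters: literals are matched (and kept) before
    // the comment alternatives can fire inside them.
    static readonly Regex Pattern = new Regex(
        @"(""(\\.|[^""\\])*""|'(\\.|[^'\\])*')" +   // string/char literal -> keep
        @"|(/\*[\s\S]*?\*/|//[^\r\n]*)",            // block or line comment -> drop
        RegexOptions.Compiled);

    public static string Strip(string source)
    {
        // Keep group 1 (a literal); replace comments with a space so
        // token boundaries survive for the later search.
        return Pattern.Replace(source,
            m => m.Groups[1].Success ? m.Groups[1].Value : " ");
    }
}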
Q: live asp.net web.config settings I've only recently started working with asp.net and c#. Is there a standard practice set of web.config settings for a live final website? There seem to be a ton of options available and I'm looking to streamline performance, close possible security holes and other unnecessary options. A: Tip/Trick: Automating Dev, QA, Staging, and Production Web.Config Settings with VS 2005 A: An empty web.config (or at least an absent <system.web> element) would mean that all of the framework's recommended defaults would take effect. You would then just need to be concerned with the host (e.g., IIS) set-up. A: Start with a clean web.config and only add the sections you need. For security, all you really can do is make sure you flag <compilation debug="false"> for your production box and set customErrors mode to "On". A: Secure all folders containing any sensitive info with the location tag. Encrypt any connection strings with DPAPI.
live asp.net web.config settings
I've only recently started working with asp.net and c#. Is there a standard practice set of web.config settings for a live final website? There seem to be a ton of options available and I'm looking to streamline performance, close possible security holes and other unnecessary options.
[ "Tip/Trick: Automating Dev, QA, Staging, and Production Web.Config Settings with VS 2005 \n", "An empty web.config (or at least an absent <system.web> element) would mean that all of the framework's recommended defaults would take effect. You would then just need to be concerned with the host (e.g., IIS) set-up.\n", "Start with a clean web.config and only add the sections you need. \nFor security, all you really can do is make sure you flag \n <compelation debug=\"false\">\n for your production box and set custom errors to true.\n", "Secure all folders containing any sensitive info with the location tag. Encrypt any connection strings with DPAPI.\n" ]
[ 3, 1, 1, 1 ]
[]
[]
[ "asp.net", "security", "web_config" ]
stackoverflow_0000055572_asp.net_security_web_config.txt
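The <compilation debug="false"> advice in the third answer is easy to forget at deploy time, so one hedged option is to assert it at startup; a sketch for Global.asax.cs, with the trace call standing in for whatever logging the site really uses.

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // IsDebuggingEnabled reflects <compilation debug="..."> in web.config.
        if (HttpContext.Current != null && HttpContext.Current.IsDebuggingEnabled)
        {
            System.Diagnostics.Trace.TraceWarning(
                "web.config still has compilation debug=\"true\" - not a production setting.");
        }
    }
}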
Q: How do you communicate between Windows Vista Session 0 and Desktop? In prior versions of Windows before Vista you could have a Windows Service interact with the current logged in desktop user to easily display information on the screen from the service. In Windows Vista Session 0 was added for security to isolate the services from the desktop. What is an easy way to communicate between a service and an application running outside of Session 0? So far I have gotten around this by using TCP/IP to communicate between the two but it seems to be kind of a sloppy way to do it. A: You can use shared memory or a named pipe to facilitate IPC as well. Conceptually this is similar to TCP/IP, but you don't have to worry about finding an unused port. You have to make sure that the named objects you create are prefixed with "Global\" to allow them to be accessed by all sessions as described here. AFAIK there is no way for a service to directly interact with the desktop any more. A: Indeed, for security reasons it is no longer possible to communicate directly with the "desktop". What exactly is the "desktop" anyway, when you live in a machine with multiple active users + remote sessions? The general way to solve the problem is to use service apps which communicate via some RPC mechanism (TCP/IP, IPC, .Net Remoting Channels over one of those, etc). It's kind of a pain, but I think the benefits are worth the change. A: For the service to talk to the desktop, you're pretty much stuck with one of the RPC mechanisms. The .NET remoting mechanism (IpcServerChannel) isn't too hard to implement for this purpose. Also with .NET a desktop application can send messages directly to the service with the ServiceController.ExecuteCommand. These commands are received by the service via ServiceBase.OnCustomCommand. This is even easier to do, and would be all you need if controlling the service is your only requirement.
How do you communicate between Windows Vista Session 0 and Desktop?
In prior versions of Windows before Vista you could have a Windows Service interact with the current logged in desktop user to easily display information on the screen from the service. In Windows Vista Session 0 was added for security to isolate the services from the desktop. What is an easy way to communicate between a service and an application running outside of Session 0? So far I have gotten around this by using TCP/IP to communicate between the two but it seems to be kind of a sloppy way to do it.
[ "You can use shared memory or named pipe to facilitate IPC as well. Conceptually this is similar to TCP/IP, but you don't have to worry about finding an unused port.\nYou have to make sure that the named objects you create are prefixed with \"Global\\\" to allow them to be accessed by all sessions as described here.\nAFAIK there is no way for a service to directly interact with the desktop any more.\n", "Indeed, for security reasons it is no longer possible to communicate directly with the \"desktop\". What exactly is the \"desktop\" anyway, when you live in a machine with multiple active users + remote sessions?\nThe general way to solve the problem is to use service apps which communicate via some RPC mechanism (TCP/IP, IPC, .Net Remoting Channels over one of those, etc). Its kind of a pain, but I think the benefits are worth the change.\n", "For the service to talk to the desktop, you're pretty much stuck with one of the RPC mechanisms. The .NET remoting mechanism (IpcServerChannel) isn't to hard to implement for this purpose. \nAlso with .NET a desktop application can send messages directly to the service with the ServiceController.ExecuteCommand. These commands are received by the service via ServiceBase.OnCustomCommand. This is even easier to do, and would be all you need if controlling the service is your only requirement.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "ipc", "windows_vista" ]
stackoverflow_0000055639_ipc_windows_vista.txt
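A minimal C# sketch of the named-pipe option from the first answer, using System.IO.Pipes (.NET 3.5+). The pipe name is an arbitrary assumption, and a real service would also attach a PipeSecurity so the logged-on user's session is allowed to connect.

using System.IO;
using System.IO.Pipes;

static class PipeSketch
{
    // Service side (runs in session 0): wait for the desktop app, send one line.
    public static void ServeOnce()
    {
        using (var server = new NamedPipeServerStream("MyServicePipe")) // hypothetical name
        {
            server.WaitForConnection();
            using (var writer = new StreamWriter(server) { AutoFlush = true })
            {
                writer.WriteLine("hello from session 0");
            }
        }
    }

    // Desktop side (user session): connect to the local server and read it.
    public static string ReadOnce()
    {
        using (var client = new NamedPipeClientStream(".", "MyServicePipe"))
        {
            client.Connect(5000); // timeout in milliseconds
            using (var reader = new StreamReader(client))
                return reader.ReadLine();
        }
    }
}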
Q: Developing addins for World of Warcraft - Getting started? As a long time World of Warcraft player, and a passionate developer I have decided that I would like to combine the two and set about developing some addins. Not only to improve my gameplay experience but as a great opportunity to learn something new. Does anyone have any advice on how to go about starting out? Is there an IDE one can use? How does one go about testing? Are there any ready made libraries available? Or would I get a better learning experience by ignoring the libraries and building from scratch? How do I oneshot Hogger? Would love to hear your advice, experiences and views. A: This article explains how to start pretty well. Your first bookmark is possibly the US Interface Forum, especially the Stickies for that: http://us.battle.net/wow/en/forum/1011693/ Then, grab some simple addons to learn how XML and Lua interact. The WoWWiki HOWTO List is a good point here as well. One important thing to keep in mind: World of Warcraft is available in many languages. If you have an EU Account, you got an excellent testing bed by simply downloading the language Packs for Spanish, German and French. If you're a US Guy, check if you can get the Latin America version. That way, you can test it against another language version. Once you made 1 or 2 really small and simple addons just to learn how to use it, have a look at the various frameworks. WowAce is a popular one, but there are others. Just keep one thing in mind: Making an Addon is work. Maintaining one is even more work. With each new Patch, there may be breaking changes, and the next big patch will surely cause a big Exodus of Addons, just like Patch 2.0.1 did. A: Another useful tool you might like is WarcraftAddOnStudio which lets you make plugins in the visual studio environment. A: I learned the art of add-ons primarily by looking at the code of Blizzard's UI. You can see that code by extracting the default UI or finding a copy of the default UI online. Add-on developers sometimes like to over-engineer their pet projects (who doesn't?), while Blizzard's code is usually pretty no-nonsense and straightforward. In addition, Programming in Lua is a pretty useful (if slightly out-of-date) reference for the actual Lua language. A: The best way to start is with the book World of Warcraft Programming. It covers Lua, XML, WarcraftAddOnStudio and the WoW API. The book also has sections on best practices and avoiding common mistakes.
Developing addins for World of Warcraft - Getting started?
As a long time World of Warcraft player, and a passionate developer I have decided that I would like to combine the two and set about developing some addins. Not only to improve my gameplay experience but as a great opportunity to learn something new. Does anyone have any advice on how to go about starting out? Is there an IDE one can use? How does one go about testing? Are there any ready made libraries available? Or would I get a better learning experience by ignoring the libraries and building from scratch? How do I oneshot Hogger? Would love to hear your advice, experiences and views.
[ "This article explains how to start pretty well.\nYour first bookmark is possibly the US Interface Forum, especially the Stickies for that:\nhttp://us.battle.net/wow/en/forum/1011693/\nThen, grab some simple addons to learn how XML and LUA interacts. The WoWWiki HOWTO List is a good point here as well.\nOne important thing to keep in mind: World of Warcraft is available in many languages. If you have a EU Account, you got an excellent testing bed by simply downloading the language Packs for Spanish, German and French. If you're an US Guy, check if you can get the Latin America version. That way, you can test it against another language version.\nOnce you made 1 or 2 really small and simple addons just to learn how to use it, have a look at the various frameworks. WowAce is a popular one, but there are others.\nJust keep one thing in mind: Making an Addon is work. Maintaining one is even more work. With each new Patch, there may be breaking changes, and the next Addon will surely cause a big Exodus of Addons, just like Patch 2.0.1 did.\n", "Another useful tools you might like is WarcraftAddOnStudio which lets you make plugins in the visual studio environment.\n", "I learned the art of add-ons primarily by looking at the code of Blizzard's UI. You can see that code by extracting the default UI or finding a copy of the default UI online. Add-on developers sometimes like to over-engineer their pet projects (who doesn't?), while Blizzard's code is usually pretty no-nonsense and straightforward. In addition, Programming in Lua is a pretty useful (if slightly out-of-date) reference for the actual Lua language.\n", "The best way to start is with the book World of Warcraft Programming. It covers LUA, XML, WarcraftAddOnStudio and the WoW API. The book also has sections on best practices and avoiding common mistakes.\n" ]
[ 28, 10, 7, 6 ]
[]
[]
[ "add_on", "lua", "plugins", "world_of_warcraft" ]
stackoverflow_0000006859_add_on_lua_plugins_world_of_warcraft.txt
Q: Modifying the AJAX Control Toolkit Dropdown extender I am using the example on the AJAX website for the DropDownExtender. I'm looking to make the target control (the label) have the DropDown image appear always, instead of just when I hover over it. Is there any way to do this? A: This can be done using the following script tag: <script> function pageLoad() { $find('TextBox1_DropDownExtender')._dropWrapperHoverBehavior_onhover(); $find('TextBox1_DropDownExtender').unhover = VisibleMe; } function VisibleMe() { $find('TextBox1_DropDownExtender')._dropWrapperHoverBehavior_onhover(); } </script> I found this and some other tips at this dot net curry example. It works but I'd also consider writing a new control based on the drop down extender exposing a property to set the behaviour you want on or off. Writing a new AJAX control isn't too hard, more fiddly than anything.
Modifying the AJAX Control Toolkit Dropdown extender
I am using the example on the AJAX website for the DropDownExtender. I'm looking to make the target control (the label) have the DropDown image appear always, instead of just when I hover over it. Is there any way to do this?
[ "This can be done using the following script tag:\n\n<script>\n function pageLoad()\n {\n $find('TextBox1_DropDownExtender')._dropWrapperHoverBehavior_onhover();\n $find('TextBox1_DropDownExtender').unhover = VisibleMe;\n } \n\n function VisibleMe()\n {\n $find('TextBox1_DropDownExtender')._dropWrapperHoverBehavior_onhover();\n }\n</script>\n\nI found this and some other tips at this dot net curry example.\nIt works but I'd also consider writing a new control based on the drop down extender exposing a property to set the behaviour you want on or off. \nWriting a new AJAX control isn't too hard, more fiddly than anything.\n" ]
[ 4 ]
[]
[]
[ "ajaxcontroltoolkit", "asp.net", "asp.net_ajax" ]
stackoverflow_0000054702_ajaxcontroltoolkit_asp.net_asp.net_ajax.txt
Q: Embed a File Chooser in a UserControl / Form I've inherited a desktop application which has a custom .NET file chooser that is embedded in a control, but it has some issues. I'd like to replace it with a non-custom File Chooser (like the OpenFileDialog). However, for a variety of reasons it needs to be embedded in the parent control not a popup dialog. Is there a control I'm missing, or does MS only provide the popup dialog out of the box? A: The .Net control is a thin wrapper for the common dialog built into windows, and that is a dialog. So there is no way to embed it as though it were a control. A: Depending on your needs, you COULD abuse the web browser control to show local files and folders. It won't match all the functionality of the OpenFileDialog, but it could work. Here's one that I remembered from way-back. The Shell Mega-Pack. It has ActiveX and .NET versions. It looks promising. Alternatively, if you want to build your own, you could start here on CodeProject: A Windows Explorer in a User Control. That looks like a good start. Here's another one: An All VB.NET Explorer Tree Control with ImageList Management.
Embed a File Chooser in a UserControl / Form
I've inherited a desktop application which has a custom .NET file chooser that is embedded in a control, but it has some issues. I'd like to replace it with a non-custom File Chooser (like the OpenFileDialog). However, for a variety of reasons it needs to be embedded in the parent control not a popup dialog. Is there a control I'm missing, or does MS only provide the popup dialog out of the box?
[ "The .Net control is a thin wrapper for the common dialog built into windows, and that is a dialog. So there is no way to embed it as though it were a control.\n", "Depending on your needs, you COULD abuse the web browser control to show local files and folders. It won't match all the functionality of the OpenFileDialog, but it could work.\nHere's one that I remembered from way-back. The Shell Mega-Pack. It has ActiveX and .NET versions. It looks promising.\nAlternatively, if you want to build your own, you could start here on CodeProject: A Windows Explorer in a User Control. That looks like a good start. Here's another one: An All VB.NET Explorer Tree Control with ImageList Management.\n" ]
[ 1, 0 ]
[]
[]
[ ".net", "c#", "winforms" ]
stackoverflow_0000055147_.net_c#_winforms.txt
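Rolling your own embeddable chooser, as the CodeProject links suggest, mostly comes down to System.IO plus standard controls; a bare-bones C# sketch (class and member names are illustrative, and a real control would add a folder tree, icons and error handling).

using System;
using System.IO;
using System.Windows.Forms;

// Minimal embeddable file chooser: call ShowFolder() from whatever
// navigation UI the form provides, read SelectedFile when done.
public class FileChooserPanel : UserControl
{
    private readonly ListBox fileList = new ListBox { Dock = DockStyle.Fill };

    public FileChooserPanel()
    {
        Controls.Add(fileList);
    }

    public void ShowFolder(string path)
    {
        fileList.Items.Clear();
        foreach (string file in Directory.GetFiles(path))
            fileList.Items.Add(Path.GetFileName(file));
        fileList.Tag = path; // remember which folder is listed
    }

    public string SelectedFile
    {
        get
        {
            return fileList.SelectedItem == null
                ? null
                : Path.Combine((string)fileList.Tag, (string)fileList.SelectedItem);
        }
    }
}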
Q: "Background" task in palm OS I'm trying to create a Palm OS app to check a web site every X minutes or hours, and provide a notification when a piece of data is available. I know that this kind of thing can be done on the new Palm's - for example, my Centro can have email or web sites download when the application isn't on top - but I don't know how to do it. Can anyone point me in the right direction? A: This is possible to do but very difficult. There are several steps you'll have to take. First off, this only works on Palm OS 5 and is sketchy on some of the early Palm OS 5 devices. The latest devices are better but not perfect. Next, you will need to create an alarm for your application using AlmSetAlarm. This is how you accomplish the "every X minutes or hours" part. When the alarm fires, your application will get a sysAppLaunchCmdAlarmTriggered launch code, even if it's not already running. If you only want to do something simple and quick, you can do it in response to the launch code and you're done. After you do your stuff in the alarm launch code, be sure to set up the next alarm so that you continue to be called. Important notes: You cannot access global variables when responding this launch code! Depending on the setup in your compiler, you probably also won't be able to access certain C++ features, like virtual functions (which internally use global variables). There is a setting you can set in Codewarrior that will help with this, but I'm not too familiar with it. You should architect your code so that it doesn't need globals; for example, you can use FtrSet and FtrGet to store bits of global data that you might need. Finally, you will only be able to access a single 64KB code segment of 68000 machine code. Inter-segment jumps don't work properly without globals set up. You can get around a lot of these restrictions by moving the majority of your code to a PNOlet, but that's an entirely different and more complicated topic. If you want to do something more complicated that could take a while (e.g. load a web page or download email), it is strongly recommended not to do it during the alarm launch code. You could do something in the sysAppLaunchCmdDisplayAlarm launch code and display a form to the user allowing them to cancel. But this is bound to get annoying quickly. Better for the user experience (but much more complicated) is to become a background application. This is a bit of black magic and is not really well supported, but it is possible. There are basically three steps to becoming a background application: Protect your application database using DmDatabaseProtect. This will ensure that your application is locked down so it can't be deleted. Lock your code segment using MemHandleLock and MemHandleSetOwner (set the owner to 0). This will ensure that your code is loaded into memory and won't be moved. Register for some notifications. For example, the sysNotifyIdleTimeEvent is a great notification to use to do some periodic background processing. Once you set this up, you can exit from the alarm launch code and then wait for your notifications to fire. You will then do all of your background processing when your notification handlers are called. Also make sure that if you allocate any system objects (memory, handles, file handles, etc.), you set their owner to 0 (system) if you expect them to persist after you return from your notification handler. Otherwise the system will clean them up. If you do this, be super careful to avoid memory and resource leaks!! 
They will never get cleaned up when the owner is set to 0! To leave background mode, simply do the reverse: unregister for notifications, unlock your code segment, and unprotect your application database. If you do any network operations in the background, be sure that you set the sockets to non-blocking mode and deal correctly with that! Otherwise you will block the foreground application and cause problems.
"Background" task in palm OS
I'm trying to create a Palm OS app to check a web site every X minutes or hours, and provide a notification when a piece of data is available. I know that this kind of thing can be done on the new Palms - for example, my Centro can have email or web sites download when the application isn't on top - but I don't know how to do it. Can anyone point me in the right direction?
[ "This is possible to do but very difficult. There are several steps you'll have to take.\nFirst off, this only works on Palm OS 5 and is sketchy on some of the early Palm OS 5 devices. The latest devices are better but not perfect.\nNext, you will need to create an alarm for your application using AlmSetAlarm. This is how you accomplish the \"every X minutes or hours\" part.\nWhen the alarm fires, your application will get a sysAppLaunchCmdAlarmTriggered launch code, even if it's not already running. If you only want to do something simple and quick, you can do it in response to the launch code and you're done.\nAfter you do your stuff in the alarm launch code, be sure to set up the next alarm so that you continue to be called.\nImportant notes: You cannot access global variables when responding this launch code! Depending on the setup in your compiler, you probably also won't be able to access certain C++ features, like virtual functions (which internally use global variables). There is a setting you can set in Codewarrior that will help with this, but I'm not too familiar with it. You should architect your code so that it doesn't need globals; for example, you can use FtrSet and FtrGet to store bits of global data that you might need. Finally, you will only be able to access a single 64KB code segment of 68000 machine code. Inter-segment jumps don't work properly without globals set up.\nYou can get around a lot of these restrictions by moving the majority of your code to a PNOlet, but that's an entirely different and more complicated topic.\nIf you want to do something more complicated that could take a while (e.g. load a web page or download email), it is strongly recommended not to do it during the alarm launch code. You could do something in the sysAppLaunchCmdDisplayAlarm launch code and display a form to the user allowing them to cancel. But this is bound to get annoying quickly.\nBetter for the user experience (but much more complicated) is to become a background application. This is a bit of black magic and is not really well supported, but it is possible. There are basically three steps to becoming a background application:\n\nProtect your application database using DmDatabaseProtect. This will ensure that your application is locked down so it can't be deleted.\nLock your code segment using MemHandleLock and MemHandleSetOwner (set the owner to 0). This will ensure that your code is loaded into memory and won't be moved.\nRegister for some notifications. For example, the sysNotifyIdleTimeEvent is a great notification to use to do some periodic background processing.\n\nOnce you set this up, you can exit from the alarm launch code and then wait for your notifications to fire. You will then do all of your background processing when your notification handlers are called.\nAlso make sure that if you allocate any system objects (memory, handles, file handles, etc.), you set their owner to 0 (system) if you expect them to persist after you return from your notification handler. Otherwise the system will clean them up. If you do this, be super careful to avoid memory and resource leaks!! They will never get cleaned up when the owner is set to 0!\nTo leave background mode, simply do the reverse: unregister for notifications, unlock your code segment, and unprotect your application database.\nIf you do any network operations in the background, be sure that you set the sockets to non-blocking mode and deal correctly with that! 
Otherwise you will block the foreground application and cause problems.\n" ]
[ 7 ]
[]
[]
[ "garnet_os", "palm_os" ]
stackoverflow_0000055350_garnet_os_palm_os.txt
Q: What is the best way to cache a menu system locally, in the browser? I have a very large cascading menu system with over 300 items in it. (I know it's large but it's a requirement.) Currently, it's written in javascript so the external file is cached by browsers. To improve search engine results I need to convert this to a css menu system. I realize the browsers will also cache external stylesheets but, is there a way to cache the menu content (<ul> and <li> tags)? If I use javascript (document.write) to write the content I could have this in an external javascript file, which would be cached locally, but, would this be search engine friendly? What is the best solution? A: The best way to accomplish what you want to do is using SiteMaps to inform Google about the urls for your web site. Basically you will want to translate your hierarchical data for the menus into a SiteMap. A: You could generate the menus beforehand into static html / javascript files, and have all the pages pull the site from the same URL on your site. That way, the client side browser will do the caching. You'll just have to have a step in your deployment that generates the html files for the menu. Try to have it generate as much plain HTML (+JS +CSS) as possible, then whatever has to be dynamic can be adjusted with javascript. A: You could do the whole thing in CSS and HTML only, and you don't need to use any JavaScript. See <http://www.netwiz.com.au/cssmenu.htmlvalue>. This page shows a tool to be used with specific documentation software, but the sample CSS and HTML shows how to use ul li elements for a CSS/HTML only menu in a large number of browsers. You still have the problem of 300 items in the menu which will add to the loading time. If this is an issue I guess you could move this code to a separate iframe to increase the chance of it being cached at a proxy (or by the browser). At the risk of offending the purists even a frame might do the job, but you will have problems with the topic pages not being able to display the menu if they are linked to directly.
What is the best way to cache a menu system locally, in the browser?
I have a very large cascading menu system with over 300 items in it. (I know it's large but it's a requirement.) Currently, it's written in javascript so the external file is cached by browsers. To improve search engine results I need to convert this to a css menu system. I realize the browsers will also cache external stylesheets but, is there a way to cache the menu content (<ul> and <li> tags)? If I use javascript (document.write) to write the content I could have this in an external javascript file, which would be cached locally, but, would this be search engine friendly? What is the best solution?
[ "The best way to accomplish what you want to do is using SiteMaps to inform Google about the urls for your web site. Basically you will want to translate your hierarchial data for the menus into a SiteMap.\n", "You could generate the menus beforehand into static html / javascript files, and have all the pages pull the site from the same URL on your site. That way, the client side browser will do the caching. You'll just have to have a step in your deployment that generates the html files for the menu.\nTry to have it generate as much plain HTML (+JS +CSS) as possible, then whatever has to be dynamic can be adjusted with javascript.\n", "You could do the whole thing in CSS and HTML only, and you don't need yo use any Java script. See < http://www.netwiz.com.au/cssmenu.htmlvalue >. This pages shows a tool to be used with a specific documentation software, but the sample CSS and HTML shows how to use ul li elements for a CSS/HTML only menu in a large number of browsers.\nYou still have the problem of 300 items in the menu which will add to the loading time. If this is an issue I guess you could move this code to a separate iframe to increase the chance of it being cached at a proxy (or by the browser). At the risk of offending the purists even a frame might do the job, but you will have problems with the topic pages not being able to display the menu if they are linked to directly.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "css", "menu" ]
stackoverflow_0000055752_css_menu.txt
Q: How can one reference a WCF service in a different Visual Studio solution? A Visual Studio 2008 project in one solution needs to reference a WCF service in another VS 2008 solution on the same development machine. Does anybody have any suggestions on how best to accomplish this? A: Host the service, and then use the URI of the hosted service in the other project to have VS create a proxy for you. Here's a step by step article on how to add a reference. And here's an article that teaches you how to host a service in VS (which is probably the simplest thing to do while developing). I'd recommend you host your service in IIS, however, even during development. A: Right click the WCF solution in the other VS, and click Debug -> Start, that should get the WCF service to show up in the system tray. Then, in the VS you want to add the service to, add the service reference. If you want to be able to step into the WCF code for debugging, in the menu open Debug -> Attach to Process. Then scroll down the list until you see the WCF service running in your other VS.
How can one reference a WCF service in a different Visual Studio solution?
A Visual Studio 2008 project in one solution needs to reference a WCF service in another VS 2008 solution on the same development machine. Does anybody have any suggestions on how best to accomplish this?
[ "Host the service, and then use the URI of the hosted service in the other project to have VS create a proxy for you.\nHere's a step by step article on how to add a reference. And here's an article that teaches you how to host a service in VS (which is probably the simplest thing to do while developing). I'd recommend you host your service in IIS, however, even during development. \n", "Right click the WCF solution in the other VS, and click Debug -> Start, that should get the WCF to show up in the system tray. Then, in the VS you want to add the service to, add the service reference. \nIf you want to be able to step-into the WCF code for debugging, in the menu open Debug -> Attach Tread. Then scroll down the list until you see the WCF service running in your other VS.\n" ]
[ 1, 1 ]
[]
[]
[ "visual_studio_2008", "wcf" ]
stackoverflow_0000055797_visual_studio_2008_wcf.txt
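When regenerating a service reference across solutions becomes a nuisance, sharing the contract assembly and building the proxy in code is another route; a hedged C# sketch, where the contract, binding and address are assumptions rather than anything from the original thread.

using System;
using System.ServiceModel;

// Lives in a contract assembly referenced by both solutions.
[ServiceContract]
public interface IQuoteService                  // hypothetical contract
{
    [OperationContract]
    string GetQuote(string symbol);
}

static class WcfClientSketch
{
    static void Main()
    {
        var binding = new BasicHttpBinding();
        var address = new EndpointAddress("http://localhost:8731/QuoteService"); // assumed URI

        var factory = new ChannelFactory<IQuoteService>(binding, address);
        IQuoteService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetQuote("MSFT"));
        factory.Close();
    }
}

No generated proxy means no stale Reference.cs to regenerate when the other solution's service changes, at the cost of keeping the contract assembly shared.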
Q: How to modify the style property of a font on Windows? Note that this question continues from Is it possible to coax Visual Studio 2008 into using italics for comments? If the long question title got you, here's the problem: How to convert the style property of the Consolas Italic font to Bold without actually modifying any of its actual glyphs? That is, we want the font to be still the same (i.e., Italic); we merely want the OS to believe that it's now a Bold font. Please just don't mention the name of a tool (Ex: fontforge), but describe the steps to achieve this or point to such a description. A: Alright, I've successfully used FontForge to create a copy of Consolas (although this should work with any font) with the bold style actually being italics. These are the steps that I followed: Install FontForge. It's a lot easier to do this on linux than on windows/cygwin. I used a Ubuntu VM ("sudo apt-get install fontforge"). Open Consola.ttf (the "normal" style font) in FontForge. Select Element -> Font Info. Change the Fontname, Family Name, and Name for Humans, all to the same thing. I used 'ConsolasVS'. Click Ok. Click 'Yes' to let FontForge generate a new GUID for the font. Select File -> Generate Fonts. Make sure you've got "TrueType" selected. Uncheck "Validate before saving". Click Save. Now open Consolai.ttf (the italic style font) in FontForge. Go back to Element -> Font Info. Change the Font names as before, and where it currently says "Italic", change that to "Bold". Go to the OS/2 tab, change the font weight to "700 Bold". Go to the Mac tab, change the style set to Bold. Click Ok. Allow a new GUID to be generated again. File -> Generate Fonts, as before. Copy your two new ttf files into your \Windows\FONTS\ folder. You can now have nice italic comments with Consolas in VS2008. Hooray! A: I did the italics-as-bold trick on Consolas back in July 2007 and posted a screenshot of it on my blog. I used FontLab which does a great job but a custom tool to copy and set the header would be the best bet as you can't modify and redistribute Consolas and FontLab costs $699. If you want to go down the FontLab route then open up the regular and italic versions and go into the File > Font Info... menu option and use the Names and Copyright section. In there set both fonts' Family Name to a new name then flip the checkboxes on the italic version to indicate bold instead of italic and select Normal from the Weight list box and Italic in the Style Name list box. Save and install :)
How to modify the style property of a font on Windows?
Note that this question continues from Is it possible to coax Visual Studio 2008 into using italics for comments? If the long question title got you, here's the problem: How to convert the style property of the Consolas Italic font to Bold without actually modifying any of its actual glyphs? That is, we want the font to be still the same (i.e., Italic); we merely want the OS to believe that it's now a Bold font. Please just don't mention the name of a tool (Ex: fontforge), but describe the steps to achieve this or point to such a description.
[ "Alright, I've successfully used FontForge to create a copy of Consolas (although this should work with any font) with the bold style actually being italics.\nThese are the steps that I followed:\n\nInstall FontForge. It's a lot easier to do this on linux than on windows/cygwin. I used a Ubuntu VM (\"sudo apt-get install fontforge\").\nOpen Consola.ttf (the \"normal\" style font) in FontForge.\nSelect Element -> Font Info.\nChange the Fontname, Family Name, and Name for Humans, all to the same thing. I used 'ConsolasVS'.\nClick Ok. Click 'Yes' to let FontForge generate a new GUID for the font.\nSelect File -> Generate Fonts. Make sure you've got \"TrueType\" selected. Uncheck \"Validate before saving\". Click Save.\nNow open Consolai.ttf (the italic style font) in FontForge.\nGo back to Element -> Font Info.\nChange the Font names as before, and where it currently says \"Italic\", change that to \"Bold\".\nGo to the OS/2 tab, change the font weight to \"700 Bold\".\nGo to the Mac tab, change the style set to Bold.\nClick Ok. Allow a new GUID to be generated again.\nFile -> Generate Fonts, as before.\n\nCopy your two new ttf files into your \\Windows\\FONTS\\ folder.\nYou can now have nice italic comments with Consolas in VS2008. Hooray!\n", "I did the italics-as-bold trick on Consolas back in July 2007 and posted a screenshot of it on my blog.\nI used FontLab which does a great job but a custom tool to copy and set the header would be the best bet as you can't modify and redistribute Consolas and FontLab costs $699.\nIf you want to go down the FontLab route the open up the regular and italic versions and go into the File > Font Info... menu option and use the Names and Copyright section.\nIn there set them both fonts Family Name to a new name then flip the checkboxes on the italic version to indicate bold instead of italic and select Normal from the Weight list box and Italic in the Style Name list box.\nSave and install :)\n" ]
[ 19, 3 ]
[]
[]
[ "fonts", "windows" ]
stackoverflow_0000017508_fonts_windows.txt
Q: Adding Items Using DataBinding from TreeView to ListBox WPF I want to add the selected item from the TreeView to the ListBox control using DataBinding (If it can work with DataBinding). <TreeView HorizontalAlignment="Left" Margin="30,32,0,83" Name="treeView1" Width="133" > </TreeView> <ListBox VerticalAlignment="Top" Margin="208,36,93,0" Name="listBox1" Height="196" > </ListBox> TreeView is populated from the code behind page with some dummy data. A: You can bind to an element using ElementName, so if you wanted to bind the selected tree item to the ItemsSource of a ListBox: ItemsSource="{Binding SelectedItem, ElementName=treeView1}" A: I'm pretty sure it is possible, since WPF is really flexible with data binding, but I haven't done that specific scenario yet. I've been following a WPF Databinding FAQ from the MSDN blogs as of late and it provides a lot of insights that might help.
Adding Items Using DataBinding from TreeView to ListBox WPF
I want to add the selected item from the TreeView to the ListBox control using DataBinding (If it can work with DataBinding). <TreeView HorizontalAlignment="Left" Margin="30,32,0,83" Name="treeView1" Width="133" > </TreeView> <ListBox VerticalAlignment="Top" Margin="208,36,93,0" Name="listBox1" Height="196" > </ListBox> TreeView is populated from the code behind page with some dummy data.
[ "You can bind to an element using ElementName, so if you wanted to bind the selected tree item to the ItemsSource of a ListBox:\nItemsSource=\"{Binding SelectedItem, ElementName=treeView1}\"\n\n", "I'm pretty sure it is possible, since WPF is really flexible with data binding, but I haven't done that specific scenario yet.\nI've been following a WPF Databinding FAQ from the MSDN blogs as of late and it provides a lot of insights that might help.\n" ]
[ 1, 0 ]
[]
[]
[ "data_binding", "listbox", "treeview", "wpf" ]
stackoverflow_0000054440_data_binding_listbox_treeview_wpf.txt
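The ElementName binding above only mirrors the current selection; to actually accumulate items in the ListBox (the "add the selected item" wording), a small C# code-behind sketch also works. The control names match the question's XAML; the collection name is illustrative.

using System.Collections.ObjectModel;
using System.Windows;

public partial class Window1 : Window
{
    private readonly ObservableCollection<object> picked =
        new ObservableCollection<object>();

    public Window1()
    {
        InitializeComponent();
        listBox1.ItemsSource = picked;   // ObservableCollection keeps the list live
        treeView1.SelectedItemChanged += (s, e) =>
        {
            if (e.NewValue != null && !picked.Contains(e.NewValue))
                picked.Add(e.NewValue);
        };
    }
}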
Q: Designing a new UI for a legacy WinForms MDI application I'm working on moving a client/server application created with C# and WinForms into the SOA/WPF/Silverlight world. One of the big hurdles is the design of the UI. My current UI is MDI driven and users rely heavily on child windows, having many open at the same time and toggling back and forth between them. What might be the best way to recreate the UI functionality in an MDI-less environment? (I've no desire to create MDI functionality on my own in WPF). Tabs? A list panel that toggles different controls? A: Look at 37signals and how nice their web UIs are (mostly HTML + AJAX). It's a good example of web applications that work. One of the things to remember are to make sure you don't break the web paradigm. If users want to see two things side by side, they should be able to duplicate the window and let the web browser do the windowing. For WPF, there are a lot of new visualization paradigms. You can find some examples on the sites for various control toolkit providers: Xceed, Telerik, Infragistics. They have demo programs for the different ways they help you organize screens in an application. When developing complex composite applications in WPF, you could also start at the Patterns and Practices Prism site. It's an InProgress set of practices for planning and developing complex composite (smart client style) applications in WPF. A: I think the answer is really going to be up to your users -- I'd set up some prototypes with multiple paradigms and let them provide some input. The last thing you want to do is introduce a new UI paradigm without having any end-user input. Tabs are really popular now, but don't allow side-by-side viewing, so if that is a requirement you may want to go with more of an outlook-style setup, with multiple panels that can be activated, hidden and resized. One thing that you might want to do is to code your app as a composite UI, where each view is built independently from its container (be it a child window, tab or accordion, etc.), and is just "dropped in" in the designer. That will protect you from when the users change their minds about the navigation paradigm in the future. A: multiple top level windows are easy to implement and have all the advantages of MDI - that's what MS selected for the newer versions of Office A: Did you try GOA WinForms?
Designing a new UI for a legacy WinForms MDI application
I'm working on moving a client/server application created with C# and WinForms into the SOA/WPF/Silverlight world. One of the big hurdles is the design of the UI. My current UI is MDI driven and users rely heavily on child windows, having many open at the same time and toggling back and forth between them. What might be the best way to recreate the UI functionality in an MDI-less environment? (I've no desire to create MDI functionality on my own in WPF). Tabs? A list panel that toggles different controls?
[ "Look at 37signals and how nice their web UIs are (mostly HTML + AJAX). It's a good example of web applications that work. One of the things to remember are to make sure you don't break the web paradigm. If users want to see two things side by side, they should be able to duplicate the window and let the web browser do the windowing.\nFor WPF, there are a lot of new visualization paradigms. You can find some examples on the sites for various control toolkit providers: Xceed, Telerik, Infragistics. They have demo programs for the different ways they help you organize screens in an application.\nWhen developing complex composite applications in WPF, you could also start at the Patterns and Practices Prism site. It's an InProgress set of practices for planning and developing complex composite (smart client style) applications in WPF.\n", "I think the answer is really going to be up to your users -- I'd set up some prototypes with multiple paradigms and let them provide some input. The last thing you want to do is introduce a new UI paradigm without having any end-user input.\nTabs are really popular now, but don't allow side-by-side viewing, so if that is a requirement you may want to go with more of an outlook-style setup, with multiple panels that can be activated, hidden and resized.\nOne thing that you might want to do is to code your app as a composite UI, where each view is built independently from its container (be it a child window, tab or accordion, etc.), and is just \"dropped in\" in the designer. That will protect you from when the users change their minds about the navigation paradigm in the future.\n", "multiple top level windows are easy to implement and have all the advantages of MDI - that's what MS selected for the newer versions of Office\n", "Did you try GOA WinForms?\n" ]
[ 4, 3, 1, 0 ]
[]
[]
[ "c#", "silverlight", "user_interface", "wpf" ]
stackoverflow_0000050518_c#_silverlight_user_interface_wpf.txt
Q: Tablet PC SDK (1.7) Merge Module + VS2008 + Windows Vista? I have a VS2005 deployment & setup project, that makes use of the Tablet PC SDK 1.7 Merge Module, so users of Windows XP can make use of the managed Microsoft.Ink.DLL library. Now that we've moved over to Vista/VS2008, do I still need to install the TPC SDK (to get the merge module) or can I make use of something that Vista has? Google seems plagued with vague references. If I add the merge module for SDK 1.7, how will this affect current Vista users (which will have the Tablet PC capabilities built-in)? A: As usual, one of the trickiest aspects of Tablet development is deployment: Tablet functionality isn't built into the Home Basic or Starter editions of Vista so if you want your program to work on those, you still need the MSM. You should be ok using merge modules on Tablet-enabled versions of Vista. I mean, it's equivalent to installing the MSM onto an existing XP Tablet that already had the components. It won't add it if it's already there. XP 2005 Tablet included TPC 1.7. These are also installed on Tablet-enabled versions of Vista too. If you stick with those core features, just installing the main 1.7 MSM everywhere's probably cool. However, Vista also added new ink analysis capabilities, some stylus input APIs, and a new InkCanvas control so if you need any of these, there are additional merge modules you need to install if you want everything to still work on XP 2005. So bottom line, if you care about XP and/or Home Basic Vista, you still need to deal with merge modules... stuff should still work on Vista. If you're just targeting premium versions of Vista, you don't need 'em anymore.
Tablet PC SDK (1.7) Merge Module + VS2008 + Windows Vista?
I have a VS2005 deployment & setup project, that makes use of the Tablet PC SDK 1.7 Merge Module, so users of Windows XP can make use of the managed Microsoft.Ink.DLL library. Now that we've moved over to Vista/VS2008, do I still need to install the TPC SDK (to get the merge module) or can I make use of something that Vista has? Google seems plagued with vague references. If I add the merge module for SDK 1.7, how will this affect current Vista users (which will have the Tablet PC capabilities built-in)?
[ "As usual, one of the trickiest aspects of Tablet development is deployment:\n\nTablet functionality isn't built into the Home Basic or Starter editions of Vista so if you want your program to work on those, you still need the MSM.\nYou should be ok using merge modules on Tablet-enabled versions of Vista. I mean, it's equivalent installing the MSM onto an existing XP Tablet that already had the components. It won't add it if it's already there.\nXP 2005 Tablet included TPC 1.7. These are also installed on Tablet-enabled versions of Vista too. If you stick with those core features, just installing the main 1.7 MSM everywhere's probably cool. However, Vista also added new ink analysis capabilities, some stylus input APIs, and a new InkCanvas control so if you need any of these, are there additional merge modules you need to install if you want everything to still work on XP 2005.\n\nSo bottom line, if you care about XP and/or Home Basic Vista, you still need to deal with merge modules... stuff should still work on Vista. If you're just targeting premium versions of Vista, you don't need 'em anymore.\n" ]
[ 2 ]
[]
[]
[ "sdk", "tablet_pc", "visual_studio", "windows_vista", "windows_xp" ]
stackoverflow_0000052319_sdk_tablet_pc_visual_studio_windows_vista_windows_xp.txt
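One runtime check that can complement the merge-module decision above: GetSystemMetrics with the SM_TABLETPC index reports whether the Tablet PC components are present on the current edition of Windows. A hedged C# sketch:

using System;
using System.Runtime.InteropServices;

class TabletCheck
{
    const int SM_TABLETPC = 86; // GetSystemMetrics index for Tablet PC editions

    [DllImport("user32.dll")]
    static extern int GetSystemMetrics(int nIndex);

    static void Main()
    {
        bool hasTabletSupport = GetSystemMetrics(SM_TABLETPC) != 0;
        Console.WriteLine(hasTabletSupport
            ? "Tablet PC components present (built-in ink support)."
            : "No built-in ink support - the 1.7 MSM is needed here.");
    }
}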
Q: What is the correct .NET exception to throw when trying to insert a duplicate object into a collection? I have an Asset object that has a property AssignedSoftware, which is a collection. I want to make sure that the same piece of Software is not assigned to an Asset more than once. In the Add method I check to see if the Software already exists, and if it does, I want to throw an exception. Is there a standard .NET exception that I should be throwing? Or do best practices dictate that I create my own custom exception? A: Why has InvalidOperationException been accepted as the answer?! It should be an ArgumentException?! InvalidOperationException should be used if the object having the method/property called against it is not able to cope with the request due to uninit'ed state etc. The problem here is not the object being Added to, but the object being passed to the object (it's a dupe). Think about it, if this Add call never took place, would the object still function as normal, YES! This should be an ArgumentException. A: .Net will throw a System.ArgumentException if you try to add an item to a hashtable twice with the same key value, so it doesn't look like there is anything more specific. You may want to write your own exception if you need something more specific. A: You should probably throw ArgumentException, as that is what the base library classes do. A: From the Class Library design guidelines for errors (http://msdn.microsoft.com/en-us/library/8ey5ey87(VS.71).aspx): In most cases, use the predefined exception types. Only define new exception types for programmatic scenarios, where you expect users of your class library to catch exceptions of this new type and perform a programmatic action based on the exception type itself. This is in lieu of parsing the exception string, which would negatively impact performance and maintenance. ... Throw an ArgumentException or create an exception derived from this class if invalid parameters are passed or detected. Throw the InvalidOperationException exception if a call to a property set accessor or method is not appropriate given the object's current state. This seems like an "Object state invalid" scenario to me, so I'd pick InvalidOperationException over ArgumentException: The parameters are valid, but not at this point in the object's life. A: Well, if you really want a collection with unique items, you might want to take a look at the HashSet object (available in C# 3.0). Otherwise, there are two approaches that you can take: Create a custom exception for your operation, just as you had stated Implement an Add() method that returns a boolean result: true if the item is added and false if the item already has a duplicate in the collection Either approach can be considered best practice, just as long as you are consistent in its use.
What is the correct .NET exception to throw when trying to insert a duplicate object into a collection?
I have an Asset object that has a property AssignedSoftware, which is a collection. I want to make sure that the same piece of Software is not assigned to an Asset more than once. In the Add method I check to see if the Software already exists, and if it does, I want to throw an exception. Is there a standard .NET exception that I should be throwing? Or do best practices dictate that I create my own custom exception?
[ "Why has InvalidOperationException been accepted as the answer?! It should be an ArgumentException?!\nInvalidOperationException should be used if the object having the method/property called against it is not able to cope with the request due to uninit'ed state etc. The problem here is not the object being Added to, but the object being passed to the object (it's a dupe). Think about it, if this Add call never took place, would the object still function as normal, YES!\nThis should be an ArgumentException.\n", ".Net will throw a System.ArgumentException if you try to add an item to a hashtable twice with the same key value, so it doesnt look like there is anything more specific. You may want to write your own exception if you need something more specific.\n", "You should probably throw ArgumentException, as that is what the base library classes do.\n", "From the Class Library design guidelines for errors (http://msdn.microsoft.com/en-us/library/8ey5ey87(VS.71).aspx):\n\nIn most cases, use the predefined exception types. Only define new exception types for programmatic scenarios, where you expect users of your class library to catch exceptions of this new type and perform a programmatic action based on the exception type itself. This is in lieu of parsing the exception string, which would negatively impact performance and maintenance.\n...\nThrow an ArgumentException or create an exception derived from this class if invalid parameters are passed or detected.\nThrow the InvalidOperationException exception if a call to a property set accessor or method is not appropriate given the object's current state.\n\nThis seems like an \"Object state invalid\" scenario to me, so I'd pick InvalidOperationException over ArgumentException: The parameters are valid, but not at this point in the objects life.\n", "Well, if you really want an collection with unique items, you might want to take a look at the HashSet object (available in C# 3.0).\nOtherwise, there are two approaches that you can take:\n\nCreate a custom exception for your operation, just as you had stated\nImplement an Add() method that returns a boolean result: true if the item is added and false if the item already has a duplicate in the collection\n\nEither approach can be considered best practice, just as long as you are consistent in its use.\n" ]
[ 15, 9, 5, 5, 1 ]
[]
[]
[ "c#", "collections", "exception" ]
stackoverflow_0000054789_c#_collections_exception.txt
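A minimal sketch of the Add method being discussed, following the ArgumentException guidance from the answers. Asset and Software are the asker's types; the backing list and the duplicate test are assumptions.

using System;
using System.Collections.Generic;

public class Software { /* stand-in for the asker's type */ }

public class Asset
{
    private readonly List<Software> assignedSoftware = new List<Software>();

    public void Add(Software software)
    {
        if (software == null)
            throw new ArgumentNullException("software");

        // The argument itself is the problem (it's a dupe), so ArgumentException fits.
        // Contains uses Equals; a real Software type would define identity appropriately.
        if (assignedSoftware.Contains(software))
            throw new ArgumentException(
                "This software is already assigned to the asset.", "software");

        assignedSoftware.Add(software);
    }
}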
Q: How do distributed transactions work (e.g. MSDTC)? I understand, in a fuzzy sort of way, how regular ACID transactions work. You perform some work on a database in such a way that the work is not confirmed until some kind of commit flag is set. The commit part is based on some underlying assumption (like a single disk block write is atomic). In the event of a catastrophic error, you can just clear out the uncommitted data in the recovery phase. How do distributed transactions work? In some of the MS documentation I have read that you can somehow perform a transaction across databases and filesystems (among other things). This technology could be (and probably is) used for installers, where you want the program to be fully installed or fully absent. You simply begin a transaction at the start of the installer. Next you could connect to the registry and filesystem, making the changes that define the installation. When the job is done, simply commit, or rollback if the installation fails for some reason. The registry and filesystem are automatically cleaned for you by this magical distributed transaction coordinator. How is it possible that two disparate systems can be transacted upon in this fashion? It seems to me that it is always possible to leave the system in an inconsistent state, where the filesystem has committed its changes and the registry has not. I think in MSDTC it is even possible to perform a transaction across the network. I have read http://blogs.msdn.com/florinlazar/archive/2004/03/04/84199.aspx, but it feels like only the beginning of the explanation, and that step 4 should be expanded considerably. Edit: From what I gather on http://en.wikipedia.org/wiki/Distributed_transaction, it can be accomplished by a two-phase commit (http://en.wikipedia.org/wiki/Two-phase_commit). After reading this, I'm still not understanding the method 100%, it seems like there is a lot of room for error between the steps. A: About "step 4": The transaction manager coordinates with the resource managers to ensure that all succeed to do the requested work or none of the work is done, thus maintaining the ACID properties. This of course requires all participants to provide the proper interfaces and (error-free) implementations. The interface looks vaguely like this: public interface ITransactionParticipant { bool WouldCommitWork(); void Commit(); void Rollback(); } The Transaction manager at commit-time queries all participants whether they are willing to commit the transaction. The participants may only assert this if they are able to commit this transaction under all allowable error conditions (validation, system errors, etc). After all participants have asserted the ability to commit the transaction, the manager sends the Commit() message to all participants. If any participant instead raises an error or times out, the whole transaction aborts and individual members are rolled back. This protocol requires participants to have recorded their whole transaction content before asserting their ability to commit. Of course this has to be in a special local transaction log structure to be able to recover from various kinds of failures.
How do distributed transactions work (e.g. MSDTC)?
I understand, in a fuzzy sort of way, how regular ACID transactions work. You perform some work on a database in such a way that the work is not confirmed until some kind of commit flag is set. The commit part is based on some underlying assumption (like a single disk block write is atomic). In the event of a catastrophic error, you can just clear out the uncommitted data in the recovery phase. How do distributed transactions work? In some of the MS documentation I have read that you can somehow perform a transaction across databases and filesystems (among other things). This technology could be (and probably is) used for installers, where you want the program to be fully installed or fully absent. You simply begin a transaction at the start of the installer. Next you could connect to the registry and filesystem, making the changes that define the installation. When the job is done, simply commit, or rollback if the installation fails for some reason. The registry and filesystem are automatically cleaned for you by this magical distributed transaction coordinator. How is it possible that two disparate systems can be transacted upon in this fashion? It seems to me that it is always possible to leave the system in an inconsistent state, where the filesystem has committed its changes and the registry has not. I think in MSDTC it is even possible to perform a transaction across the network. I have read http://blogs.msdn.com/florinlazar/archive/2004/03/04/84199.aspx, but it feels like only the beginning of the explanation, and that step 4 should be expanded considerably. Edit: From what I gather on http://en.wikipedia.org/wiki/Distributed_transaction, it can be accomplished by a two-phase commit (http://en.wikipedia.org/wiki/Two-phase_commit). After reading this, I'm still not understanding the method 100%, it seems like there is a lot of room for error between the steps.
[ "About \"step 4\":\n\nThe transaction manager coordinates\n with the resource managers to ensure\n that all succeed to do the requested\n work or none of the work if done, thus\n maintaining the ACID properties.\n\nThis of course requires all participants to provide the proper interfaces and (error-free) implementations. The interface looks like vaguely this:\npublic interface ITransactionParticipant {\n bool WouldCommitWork();\n void Commit();\n void Rollback();\n}\n\nThe Transaction manager at commit-time queries all participants whether they are willing to commit the transaction. The participants may only assert this if they are able to commit this transaction under all allowable error conditions (validation, system errors, etc). After all participants have asserted the ability to commit the transaction, the manager sends the Commit() message to all participants. If any participant instead raises an error or times out, the whole transaction aborts and individual members are rolled back.\nThis protocol requires participants to have recorded their whole transaction content before asserting their ability to commit. Of course this has to be in a special local transaction log structure to be able to recover from various kinds of failures.\n" ]
[ 4 ]
[]
[]
[ "acid", "mstdc", "transactions" ]
stackoverflow_0000055878_acid_mstdc_transactions.txt
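A toy coordinator for the two-phase commit protocol the answer describes, built on the ITransactionParticipant interface shown there. This is only a sketch of the control flow; a real coordinator such as MSDTC also writes each phase to a durable log so it can recover if it crashes mid-protocol.

using System.Collections.Generic;

public class TransactionCoordinator
{
    public bool Run(IList<ITransactionParticipant> participants)
    {
        // Phase 1 (prepare): every participant must vote yes.
        foreach (var p in participants)
        {
            if (!p.WouldCommitWork())
            {
                // Any "no" vote (or, in a real system, a timeout) aborts everyone.
                foreach (var q in participants) q.Rollback();
                return false;
            }
        }

        // Phase 2 (commit): all voted yes, so commit everywhere.
        foreach (var p in participants) p.Commit();
        return true;
    }
}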
Q: ASP.NET - Performance Implications of a SQL Server database in the App_Data folder The default ASP.NET membership provider uses a .mdf SQL Server database file in the App_Data folder. How scalable is this in terms of calling a file-based database instead of running it in a standard SQL environment? Is this recommended only for small/medium traffic sites? A: It's a reasonable trade-off for any site that can run on one server. It's fairly reasonable for small to medium traffic sites. When you grow to a point of a web farm, then you'll be better off with a separate server. Also, depending on how database dependent your application is, you may find better performance handing off SQL queries to a totally different server/processor to handle the database side. A: I wouldn't recommend this for anything but a "learning" project. For any real application, regardless of size, you don't know what type of "next feature" you will add. You want a real, independent database to which you can delegate functionality, in which you can set jobs to run independently, which can sit on a different HD, maybe even in a different VM. You can use SQL Express and still be "free", and it is better to do this separation before the site grows and the DB is harder to move.
ASP.NET - Performance Implications of a SQL Server database in the App_Data folder
The default ASP.NET membership provider uses a .mdf SQL Server database file in the App_Data folder. How scalable is this in terms of calling a file-based database instead of running it in a standard SQL environment? Is this recommended only for small/medium traffic sites?
[ "It's a reasonable trade off for any site that can run on one server. It's fairly reasonable for small to medium traffic sites.\nWhen you grow to a point of a web farm, then you'll be better off with a separate server. Also, depending on how database dependent your application is, you may find better performance handing off SQL queries to a totally different server/processor to handle the database side.\n", "I wouldn't recommend this for anything but a \"learning\" project.\nFor any real application, regardless of size, you don't know what type of \"next feature\" you will add. You want to have a real independent database in which you can delegate functionality to, in which you can set jobs to run independently, sit on a different HD, maybe splitting it into a different VM? \nYou can use SQL Express and still be \"free', and it is better to do this seperation before the site grows and the DB is harder to move.\n" ]
[ 3, 0 ]
[]
[]
[ "asp.net", "database", "sql_server" ]
stackoverflow_0000055755_asp.net_database_sql_server.txt
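For concreteness, the difference between the two setups discussed above is mostly a connection-string change (plus migrating the data); the server and database names below are invented.

using System.Data.SqlClient;

class ConnectionStrings
{
    // App_Data style: SQL Server Express attaches the .mdf on demand (user instance).
    public const string FileBased =
        @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\aspnetdb.mdf;Integrated Security=True;User Instance=True";

    // Dedicated-server style: the same database hosted on its own SQL Server instance.
    public const string ServerBased =
        "Data Source=dbserver01;Initial Catalog=aspnetdb;Integrated Security=True";

    public static SqlConnection Open()
    {
        var conn = new SqlConnection(ServerBased);
        conn.Open();
        return conn;
    }
}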
Q: How to implement password protection for individual files? I'm writing a little desktop app that should be able to encrypt a data file and protect it with a password (i.e. one must enter the correct password to decrypt). I want the encrypted data file to be self-contained and portable, so the authentication has to be embedded in the file (or so I assume). I have a strategy that appears workable and seems logical based on what I know (which is probably just enough to be dangerous), but I have no idea if it's actually a good design or not. So tell me: is this crazy? Is there a better/best way to do it? Step 1: User enters plain-text password, e.g. "MyDifficultPassword" Step 2: App hashes the user-password and uses that value as the symmetric key to encrypt/decrypt the data file. e.g. "MyDifficultPassword" --> "HashedUserPwdAndKey". Step 3: App hashes the hashed value from step 2 and saves the new value in the data file header (i.e. the unencrypted part of the data file) and uses that value to validate the user's password. e.g. "HashedUserPwdAndKey" --> "HashedValueForAuthentication" Basically I'm extrapolating from the common way to implement web-site passwords (when you're not using OpenID, that is), which is to store the (salted) hash of the user's password in your DB and never save the actual password. But since I use the hashed user password for the symmetric encryption key, I can't use the same value for authentication. So I hash it again, basically treating it just like another password, and save the doubly-hashed value in the data file. That way, I can take the file to another PC and decrypt it by simply entering my password. So is this design reasonably secure, or hopelessly naive, or somewhere in between? Thanks! EDIT: clarification and follow-up question re: Salt. I thought the salt had to be kept secret to be useful, but your answers and links imply this is not the case. For example, this spec linked by erickson (below) says: Thus, password-based key derivation as defined here is a function of a password, a salt, and an iteration count, where the latter two quantities need not be kept secret. Does this mean that I could store the salt value in the same place/file as the hashed key and still be more secure than if I used no salt at all when hashing? How does that work? A little more context: the encrypted file isn't meant to be shared with or decrypted by others, it's really single-user data. But I'd like to deploy it in a shared environment on computers I don't fully control (e.g. at work) and be able to migrate/move the data by simply copying the file (so I can use it at home, on different workstations, etc.). A: Key Generation I would recommend using a recognized algorithm such as PBKDF2 defined in PKCS #5 version 2.0 to generate a key from your password. It's similar to the algorithm you outline, but is capable of generating longer symmetric keys for use with AES. You should be able to find an open-source library that implements PBE key generators for different algorithms. File Format You might also consider using the Cryptographic Message Syntax as a format for your file. This will require some study on your part, but again there are existing libraries to use, and it opens up the possibility of inter-operating more smoothly with other software, like S/MIME-enabled mail clients.
Password Validation Regarding your desire to store a hash of the password, if you use PBKDF2 to generate the key, you could use a standard password hashing algorithm (big salt, a thousand rounds of hashing) for that, and get different values. Alternatively, you could compute a MAC on the content. A hash collision on a password is more likely to be useful to an attacker; a hash collision on the content is likely to be worthless. But it would serve to let a legitimate recipient know that the wrong password was used for decryption. Cryptographic Salt Salt helps to thwart pre-computed dictionary attacks. Suppose an attacker has a list of likely passwords. He can hash each and compare it to the hash of his victim's password, and see if it matches. If the list is large, this could take a long time. He doesn't want to spend that much time on his next target, so he records the result in a "dictionary" where a hash points to its corresponding input. If the list of passwords is very, very long, he can use techniques like a Rainbow Table to save some space. However, suppose his next target salted their password. Even if the attacker knows what the salt is, his precomputed table is worthless—the salt changes the hash resulting from each password. He has to re-hash all of the passwords in his list, affixing the target's salt to the input. Every different salt requires a different dictionary, and if enough salts are used, the attacker won't have room to store dictionaries for them all. Trading space to save time is no longer an option; the attacker must fall back to hashing each password in his list for each target he wants to attack. So, it's not necessary to keep the salt secret. Ensuring that the attacker doesn't have a pre-computed dictionary corresponding to that particular salt is sufficient. A: As Niyaz said, the approach sounds reasonable if you use a quality implementation of strong algorithms, like SHA-256 and AES for hashing and encryption. Additionally, I would recommend using a salt to reduce the possibility of creating a dictionary of all password hashes. Of course, reading Bruce Schneier's Applied Cryptography is never wrong either. A: If you are using a strong hash algorithm (SHA-2) and a strong Encryption algorithm (AES), you will do fine with this approach. A: Why not use a compression library that supports password-protected files? I've used a password-protected zip file containing XML content in the past :} A: Is there really a need to save the hashed password in the file? Can't you just use the password (or hashed password) with some salt and then encrypt the file with it? When decrypting, just try to decrypt the file with the password + salt. If the user gives the wrong password, the decrypted file isn't correct. The only drawback I can think of is that if the user accidentally enters the wrong password and the decryption is slow, he has to wait to try again. And of course, if the password is forgotten, there's no way to decrypt the file.
How to implement password protection for individual files?
I'm writing a little desktop app that should be able to encrypt a data file and protect it with a password (i.e. one must enter the correct password to decrypt). I want the encrypted data file to be self-contained and portable, so the authentication has to be embedded in the file (or so I assume). I have a strategy that appears workable and seems logical based on what I know (which is probably just enough to be dangerous), but I have no idea if it's actually a good design or not. So tell me: is this crazy? Is there a better/best way to do it? Step 1: User enters plain-text password, e.g. "MyDifficultPassword" Step 2: App hashes the user-password and uses that value as the symmetric key to encrypt/decrypt the data file. e.g. "MyDifficultPassword" --> "HashedUserPwdAndKey". Step 3: App hashes the hashed value from step 2 and saves the new value in the data file header (i.e. the unencrypted part of the data file) and uses that value to validate the user's password. e.g. "HashedUserPwdAndKey" --> "HashedValueForAuthentication" Basically I'm extrapolating from the common way to implement web-site passwords (when you're not using OpenID, that is), which is to store the (salted) hash of the user's password in your DB and never save the actual password. But since I use the hashed user password for the symmetric encryption key, I can't use the same value for authentication. So I hash it again, basically treating it just like another password, and save the doubly-hashed value in the data file. That way, I can take the file to another PC and decrypt it by simply entering my password. So is this design reasonably secure, or hopelessly naive, or somewhere in between? Thanks! EDIT: clarification and follow-up question re: Salt. I thought the salt had to be kept secret to be useful, but your answers and links imply this is not the case. For example, this spec linked by erickson (below) says: Thus, password-based key derivation as defined here is a function of a password, a salt, and an iteration count, where the latter two quantities need not be kept secret. Does this mean that I could store the salt value in the same place/file as the hashed key and still be more secure than if I used no salt at all when hashing? How does that work? A little more context: the encrypted file isn't meant to be shared with or decrypted by others, it's really single-user data. But I'd like to deploy it in a shared environment on computers I don't fully control (e.g. at work) and be able to migrate/move the data by simply copying the file (so I can use it at home, on different workstations, etc.).
[ "Key Generation\nI would recommend using a recognized algorithm such as PBKDF2 defined in PKCS #5 version 2.0 to generate a key from your password. It's similar to the algorithm you outline, but is capable of generating longer symmetric keys for use with AES. You should be able to find an open-source library that implements PBE key generators for different algorithms.\nFile Format\nYou might also consider using the Cryptographic Message Syntax as a format for your file. This will require some study on your part, but again there are existing libraries to use, and it opens up the possibility of inter-operating more smoothly with other software, like S/MIME-enabled mail clients.\nPassword Validation\nRegarding your desire to store a hash of the password, if you use PBKDF2 to generate the key, you could use a standard password hashing algorithm (big salt, a thousand rounds of hashing) for that, and get different values. \nAlternatively, you could compute a MAC on the content. A hash collision on a password is more likely to be useful to an attacker; a hash collision on the content is likely to be worthless. But it would serve to let a legitimate recipient know that the wrong password was used for decryption.\nCryptographic Salt\nSalt helps to thwart pre-computed dictionary attacks. \nSuppose an attacker has a list of likely passwords. He can hash each and compare it to the hash of his victim's password, and see if it matches. If the list is large, this could take a long time. He doesn't want spend that much time on his next target, so he records the result in a \"dictionary\" where a hash points to its corresponding input. If the list of passwords is very, very long, he can use techniques like a Rainbow Table to save some space.\nHowever, suppose his next target salted their password. Even if the attacker knows what the salt is, his precomputed table is worthless—the salt changes the hash resulting from each password. He has to re-hash all of the passwords in his list, affixing the target's salt to the input. Every different salt requires a different dictionary, and if enough salts are used, the attacker won't have room to store dictionaries for them all. Trading space to save time is no longer an option; the attacker must fall back to hashing each password in his list for each target he wants to attack.\nSo, it's not necessary to keep the salt secret. Ensuring that the attacker doesn't have a pre-computed dictionary corresponding to that particular salt is sufficient.\n", "As Niyaz said, the approach sounds reasonable if you use a quality implementation of strong algorithms, like SHA-265 and AES for hashing and encryption. Additionally I would recommend using a Salt to reduce the possibility to create a dictionary of all password hashes.\nOf course, reading Bruce Schneier's Applied Cryptography is never wrong either.\n", "If you are using a strong hash algorithm (SHA-2) and a strong Encryption algorithm (AES), you will do fine with this approach.\n", "Why not use a compression library that supports password-protected files? I've used a password-protected zip file containing XML content in the past :}\n", "Is there really need to save the hashed password into the file. Can't you just use the password (or hashed password) with some salt and then encrypt the file with it. When decrypting just try to decrypt the file with the password + salt. 
If the user gives the wrong password, the decrypted file isn't correct.\nThe only drawback I can think of is that if the user accidentally enters the wrong password and the decryption is slow, he has to wait to try again. And of course, if the password is forgotten, there's no way to decrypt the file.\n" ]
[ 24, 2, 1, 1, 0 ]
[]
[]
[ "cryptography", "encryption", "passwords" ]
stackoverflow_0000055862_cryptography_encryption_passwords.txt
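A sketch of the key-derivation scheme the accepted answer recommends, using .NET's built-in PBKDF2 implementation (Rfc2898DeriveBytes). The iteration count is illustrative, and the salt is stored unencrypted in the file header, exactly as the answer explains.

using System.Security.Cryptography;

public static class FileKeys
{
    public static byte[] NewSalt()
    {
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);
        return salt; // safe to store in the clear next to the ciphertext
    }

    public static byte[] DeriveKey(string password, byte[] salt)
    {
        // PBKDF2 with 1000 iterations; 32 bytes = a 256-bit key for AES.
        var kdf = new Rfc2898DeriveBytes(password, salt, 1000);
        return kdf.GetBytes(32);
    }
}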
Q: Eclipse alternative to VS .sln files I've recently had to switch from Visual Studio to Eclipse CDT. It would seem that Eclipse "workspaces" are not quite like VS solution files. Eclipse workspaces use the .metadata folder for managing multiple projects, but there doesn't seem to be a simple distinction between user settings or IDE preferences and project/solution settings. What I want is a way to group a collection of related (and dependent) projects together and have that data live in source control without all the other user specific stuff that developers don't need to share. You know, like a .sln file in Visual Studio. Does Eclipse just not work this way? (And if not, then why not?) A: Yes, you are right: Eclipse does not manage projects in the same way VS does with solution files. However, for putting a group of related projects into a VCS, Eclipse has the concept of a Team Project Set, available in File->Export; then under the Team folder there is Team Project Set. A: Like JProgrammer said, there is Team Project Set. You can send your colleagues a bunch of .psf files; it works similarly to VS.NET. I can only say we have good experience with this feature. A: I often find IDEs have a preferred way to work. Sure, you might be able to get the IDE to do it your way, but you'll probably end up fighting it all the way. Try to use your IDE like their makers intended you to. They have made presumptions on how you are supposed to do your work. They have optimized the user experience according to those presumptions. Go with the flow. Anything else will make you gnarly, bitter, wrinkly and give you ghastly breath! Corollary: If you can, choose the IDE that makes the same presumptions about workflow as you do!
Eclipse alternative to VS .sln files
I've recently had to switch from Visual Studio to Eclipse CDT. It would seem that Eclipse "workspaces" are not quite like VS solution files. Eclipse workspaces use the .metadata folder for managing multiple projects, but there doesn't seem to be a simple distinction between user settings or IDE preferences and project/solution settings. What I want is a way to group a collection of related (and dependent) projects together and have that data live in source control without all the other user specific stuff that developers don't need to share. You know, like a .sln file in Visual Studio. Does Eclipse just not work this way? (And if not, then why not?)
[ "Yes you are right eclipse does not manage projects in the same way VS does with solution files. However for putting a group of related projects into a VCS eclipse has the concept of a Team Project Set available in File->Export then under the Team folder there is Team Project Set.\n", "Like JProgrammer said there is Team Project Set. You can send your colleagues a bunch of .psf files, works similar to VS.NET. I can only say we have good expierience with this feature.\n", "I often find IDE's have a preferred way to work. Sure, you might be able to get the IDE to do it your way, but you'll probably end up fighting it all the way.\nTry to use your IDE like their makers intended you to. They have made presumptions on how you are supposed to do your work. They have optimized the user experience according to those presumptions.\nGo with the flow. Anything else will make you gnarly, bitter, wrinkly and give you gastly breath!\nCorollary: If you can, choose the IDE that makes the same presumptions about workflow as you do!\n" ]
[ 4, 1, 0 ]
[]
[]
[ "eclipse" ]
stackoverflow_0000055903_eclipse.txt
Q: PHP: How do I check if all public methods of two classes return the same values? In effect, if I have a class c and instances of $c1 and $c2 which might have different private variable amounts but all their public methods return the same values, I would like to be able to check that $c1 == $c2. Does anyone know an easy way to do this? A: It's difficult to follow exactly what you're after. Your question seems to imply that these public methods don't require arguments, or that if they did they would be the same arguments. You could probably get quite far using the inbuilt reflection classes. Pasted below is a quick test I knocked up to compare the returns of all the public methods of two classes and ensure they were the same. You could easily modify it to ignore non-matching public methods (i.e. only check for equality on public methods in class2 which exist in class1). Giving a set of arguments to pass in would be trickier - but could be done with an array of method names / arguments to call against each class. Anyway, this may have some bits in it which could be of use to you. $class1 = new Class1(); $class2 = new Class2(); $class3 = new Class3(); $class4 = new Class4(); $class5 = new Class5(); echo ClassChecker::samePublicMethods($class1,$class2); //should be true echo ClassChecker::samePublicMethods($class1,$class3); //should be false - different values echo ClassChecker::samePublicMethods($class1,$class4); //should be false -- class4 contains extra public methods echo ClassChecker::samePublicMethods($class1,$class5); //should be true -- class5 contains extra private methods class ClassChecker { public static function samePublicMethods($class1, $class2) { $class1methods = array(); $r = new ReflectionClass($class1); $methods = $r->getMethods(); foreach($methods as $m) { if ($m->isPublic()) { @$result = call_user_method($m->getName(), $class1); $class1methods[$m->getName()] = $result; } } $r = new ReflectionClass($class2); $methods = $r->getMethods(); foreach($methods as $m) { //only comparing public methods if ($m->isPublic()) { //public method doesn't match method in class1 so return false if(!isset($class1methods[$m->getName()])) { return false; } //public method of same name doesn't return same value so return false @$result = call_user_method($m->getName(), $class2); if ($class1methods[$m->getName()] !== $result) { return false; } } } return true; } } class Class1 { private $b = 'bbb'; public function one() { return 999; } public function two() { return "bendy"; } } class Class2 { private $a = 'aaa'; public function one() { return 999; } public function two() { return "bendy"; } } class Class3 { private $c = 'ccc'; public function one() { return 222; } public function two() { return "bendy"; } } class Class4 { public function one() { return 999; } public function two() { return "bendy"; } public function three() { return true; } } class Class5 { public function one() { return 999; } public function two() { return "bendy"; } private function three() { return true; } } A: You can also implement an equals($other) function like <?php class Foo { public function equals($o) { return ($o instanceof Foo) && $o->firstName() == $this->firstName(); } } or use foreach to iterate over the public properties (this behaviour might be overwritten) of one object and compare them to the other object's properties.
<?php function equalsInSomeWay($a, $b) { if ( !($b instanceof $a) ) { return false; } foreach($a as $name=>$value) { if ( !isset($b->$name) || $b->$name!=$value ) { return false; } } return true; } (untested) or (more or less) the same using the Reflection classes, see http://php.net/manual/en/language.oop5.reflection.php#language.oop5.reflection.reflectionobject With reflection you might also implement a more duck-typing kind of comparison, if you want to, like "I don't care if it's an instance of or the same class as long as it has the same public methods and they return the 'same' values" it really depends on how you define "equal". A: You can define PHP's __toString magic method inside your class. For example class cat { private $name; public function __construct($catname) { $this->name = $catname; } public function __toString() { return "My name is " . $this->name . "\n"; } } $max = new cat('max'); $toby = new cat('toby'); print $max; // echoes 'My name is max' print $toby; // echoes 'My name is toby' if($max == $toby) { echo "Woohoo!\n"; } else { echo "Doh!\n"; } Then you can use the equality operator to check if both instances are equal or not. HTH, Rushi A: George: You may have already seen this but it may help: http://usphp.com/manual/en/language.oop5.object-comparison.php When using the comparison operator (==), object variables are compared in a simple manner, namely: Two object instances are equal if they have the same attributes and values, and are instances of the same class. They don't get implicitly converted to strings. If you want to do comparison, you will end up modifying your classes. You can also write some method of your own to do comparison using getters & setters A: You can try writing a class of your own to plug in and write methods that do comparison based on what you define. For example: class Validate { public function validateName($c1, $c2) { if($c1->FirstName == "foo" && $c2->LastName == "foo") { return true; } else if (/* some other condition */) { return /* some value */; } else { return false; } } public function validatePhoneNumber($c1, $c2) { // some code } } This will probably be the only way where you won't have to modify the pre-existing class code
PHP: How do I check if all public methods of two classes return the same values?
In effect, if I have a class c and instances of $c1 and $c2 which might have different private variable amounts but all their public methods return the same values, I would like to be able to check that $c1 == $c2. Does anyone know an easy way to do this?
[ "It's difficult to follow exactly what you're after. Your question seems to imply that these public methods don't require arguments, or that if they did they would be the same arguments. \nYou could probably get quite far using the inbuilt reflection classes. \nPasted below is a quick test I knocked up to compare the returns of all the public methods of two classes and ensure they were they same. You could easily modify it to ignore non matching public methods (i.e. only check for equality on public methods in class2 which exist in class1). Giving a set of arguments to pass in would be trickier - but could be done with an array of methods names / arguments to call against each class. \nAnyway, this may have some bits in it which could be of use to you.\n$class1 = new Class1();\n$class2 = new Class2();\n$class3 = new Class3();\n$class4 = new Class4();\n$class5 = new Class5();\n\necho ClassChecker::samePublicMethods($class1,$class2); //should be true\necho ClassChecker::samePublicMethods($class1,$class3); //should be false - different values\necho ClassChecker::samePublicMethods($class1,$class4); //should be false -- class3 contains extra public methods\necho ClassChecker::samePublicMethods($class1,$class5); //should be true -- class5 contains extra private methods\n\nclass ClassChecker {\n\n public static function samePublicMethods($class1, $class2) {\n\n $class1methods = array();\n\n $r = new ReflectionClass($class1);\n $methods = $r->getMethods();\n\n foreach($methods as $m) {\n if ($m->isPublic()) {\n @$result = call_user_method($m->getName(), $class1);\n $class1methods[$m->getName()] = $result;\n }\n }\n\n $r = new ReflectionClass($class2);\n $methods = $r->getMethods();\n\n foreach($methods as $m) {\n\n //only comparing public methods\n if ($m->isPublic()) {\n\n //public method doesn't match method in class1 so return false\n if(!isset($class1methods[$m->getName()])) {\n return false;\n }\n\n //public method of same name doesn't return same value so return false\n @$result = call_user_method($m->getName(), $class2);\n if ($class1methods[$m->getName()] !== $result) {\n return false;\n }\n }\n }\n\n return true;\n }\n}\n\n\nclass Class1 {\n\n private $b = 'bbb';\n\n public function one() {\n return 999;\n }\n\n public function two() {\n return \"bendy\";\n }\n\n\n}\n\nclass Class2 {\n\n private $a = 'aaa';\n\n public function one() {\n return 999;\n }\n\n public function two() {\n return \"bendy\";\n }\n}\n\nclass Class3 {\n\n private $c = 'ccc';\n\n public function one() {\n return 222;\n }\n\n public function two() {\n return \"bendy\";\n }\n\n\n}\n\nclass Class4 {\n\n public function one() {\n return 999;\n }\n\n public function two() {\n return \"bendy\";\n }\n\n public function three() {\n return true;\n }\n\n}\n\nclass Class5 {\n\n public function one() {\n return 999;\n }\n\n public function two() {\n return \"bendy\";\n }\n\n private function three() {\n return true;\n }\n\n}\n\n", "You can also implement a equal($other) function like\n\n<?php\nclass Foo {\n public function equals($o) {\n return ($o instanceof 'Foo') && $o.firstName()==$this.firstName();\n }\n}\n\nor use foreach to iterate over the public properties (this behaviour might be overwritten) of one object and compare them to the other object's properties.\n\n<?php\nfunction equalsInSomeWay($a, $b) {\n if ( !($b instanceof $a) ) {\n return false;\n }\n\n foreach($a as $name=>$value) {\n if ( !isset($b->$name) || $b->$name!=$value ) {\n return false;\n }\n }\n return true;\n}\n\n(untested)\nor (more or less) the same 
using the Reflection classes, see http://php.net/manual/en/language.oop5.reflection.php#language.oop5.reflection.reflectionobject\nWith reflection you might also implement a more duck-typing kind of comparison, if you want to, like \"I don't care if it's an instance of or the same class as long as it has the same public methods and they return the 'same' values\"\nit really depends on how you define \"equal\".\n", "You can define PHP's __toString magic method inside your class. \nFor example\nclass cat {\n    private $name;\n    public function __construct($catname) {\n        $this->name = $catname;\n    }\n\n    public function __toString() {\n        return \"My name is \" . $this->name . \"\\n\";\n    }\n}\n\n$max = new cat('max');\n$toby = new cat('toby');\nprint $max; // echoes 'My name is max'\nprint $toby; // echoes 'My name is toby'\n\nif($max == $toby) {\n    echo \"Woohoo!\\n\";\n} else {\n    echo \"Doh!\\n\";\n}\n\nThen you can use the equality operator to check if both instances are equal or not. \nHTH,\nRushi\n", "George: You may have already seen this but it may help: http://usphp.com/manual/en/language.oop5.object-comparison.php\n\nWhen using the comparison operator (==), object variables are compared in a simple manner, namely: Two object instances are equal if they have the same attributes and values, and are instances of the same class. \n\nThey don't get implicitly converted to strings.\nIf you want to do comparison, you will end up modifying your classes. You can also write some method of your own to do comparison using getters & setters\n", "You can try writing a class of your own to plug in and write methods that do comparison based on what you define. For example:\nclass Validate {\n    public function validateName($c1, $c2) {\n        if($c1->FirstName == \"foo\" && $c2->LastName == \"foo\") {\n            return true;\n        } else if (/* some other condition */) {\n            return /* some value */;\n        } else {\n            return false;\n        }\n    }\n\n    public function validatePhoneNumber($c1, $c2) {\n        // some code\n    }\n}\n\nThis will probably be the only way where you won't have to modify the pre-existing class code\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "oop", "php" ]
stackoverflow_0000055713_oop_php.txt
Q: Prevent file casing problems in Subversion We encountered a problem with using Subversion on Windows. A developer committed a file foo.Data.sql and later another developer committed a file called foo.data.sql. Naturally, this caused a problem on Windows clients (all clients are Windows in our environments) because files are case sensitive in Subversion but not in Windows. We managed to resolve this problem by deleting one of the files directly in the repository, but I still have two questions: How is it possible for a developer to do this using a Windows client? Does he have an invalid working copy, or is there a bug in the client (TortoiseSVN)? How can we prevent changes like these from entering the repository (i.e. has anyone written a hook script that performs a sanity check for file casing issues)? A: There is definitely a hook script that checks case sensitivity - SourceForge have it as one of their options. A quick Google search turns up: http://www.subversionary.org/howto/using-check-case-insensitive-py-on-windows and http://svn.apache.org/repos/asf/subversion/trunk/contrib/hook-scripts/case-insensitive.py The issue will have arisen on a Windows platform if user 1 added foo.data.sql and user 2 added foo.Data.sql before getting an update from user 1. Hope that helps :) A: On Windows, files are case-insensitive, but case-preserving. You can rename a file, changing the case, and Windows will preserve the change. The problem occurs when Subversion tries to create the second file. Windows reports that the file already exists. If you wanted to merge the two files into a single copy, instead of deleting the file in the repository, you could rename the bad file in the repository (i.e. append a suffix like '.temp'), update the client, merge into the good file, and then delete the bad file. A: 1. It is possible because the two files came from two developers. One renamed or created the file with a different case and during commit did not realise that it would be an add, not a change to the existing file. 2. Check the TortoiseSVN FAQ
Prevent file casing problems in Subversion
We encountered a problem with using Subversion on Windows. A developer committed a file foo.Data.sql and later another developer committed a file called foo.data.sql. Naturally, this caused a problem on Windows clients (all clients are Windows in our environments) because files are case sensitive in Subversion but not in Windows. We managed to resolve this problem by deleting one of the files directly in the repository, but I still have two questions: How is it possible for a developer to do this using a Windows client? Does he have an invalid working copy, or is there a bug in the client (TortoiseSVN)? How can we prevent changes like these from entering the repository (i.e. has anyone written a hook script that performs a sanity check for file casing issues)?
[ "There is definitely a hook script that checks case sensitivity - Sourceforge have it as one of their options. A quick google turns up: http://www.subversionary.org/howto/using-check-case-insensitive-py-on-windows and http://svn.apache.org/repos/asf/subversion/trunk/contrib/hook-scripts/case-insensitive.py\nThe issue will have arisen on a windows platform if user 1 added foo.data.sql and user 2 added foo.Data.sql before getting an update from user 1. \nHope that helps :)\n", "On Windows, files are case-insensitive, but case-preserving. You can rename a file, changing the case and Windows will preserve the change. The problem occurs when Subversion tries to create the second file. Windows reports that the file already exists.\nIf you wanted to merge the two files into a single copy, instead of deleting the file in the repository, you could rename the bad file in the repository (i.e. append a suffix like '.temp'), update the client, merge into the good file, and then delete the bad file.\n", "1; It is possible, because the two files came from two developers. One is renaming or creating the file with different cases and during commit does not realise that it will be an add not a commit changes.\n2; Check TortoiseSVN FAQ\n" ]
[ 4, 2, 0 ]
[]
[]
[ "svn", "tortoisesvn" ]
stackoverflow_0000056022_svn_tortoisesvn.txt
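Pre-commit hooks can be any executable, so the check the linked Python scripts perform can be sketched in C# as well. This version only catches case conflicts within a single commit (the real scripts also compare against paths already in the repository), and the svnlook output parsing is simplified.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class PreCommitCaseCheck
{
    static int Main(string[] args)
    {
        string repos = args[0], txn = args[1]; // Subversion passes these to pre-commit hooks

        var psi = new ProcessStartInfo("svnlook", "changed -t " + txn + " " + repos);
        psi.RedirectStandardOutput = true;
        psi.UseShellExecute = false;

        var seen = new Dictionary<string, string>();
        using (Process p = Process.Start(psi))
        {
            string line;
            while ((line = p.StandardOutput.ReadLine()) != null)
            {
                string path = line.Substring(4);          // skip the status columns (approximate)
                string folded = path.ToLowerInvariant();  // Windows-style comparison
                if (seen.ContainsKey(folded) && seen[folded] != path)
                {
                    Console.Error.WriteLine(
                        "Case conflict: '" + path + "' vs '" + seen[folded] + "'");
                    return 1; // a non-zero exit code makes Subversion reject the commit
                }
                seen[folded] = path;
            }
            p.WaitForExit();
        }
        return 0;
    }
}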
Q: Stop system entering 'standby' How can I stop the host machine entering standby mode while my application is running? Is there any Win32 API call to do this? A: There are two APIs, depending on what version of Windows. XP, 2000, 2003: http://msdn.microsoft.com/en-us/library/aa373247(VS.85).aspx Respond to PBT_APMQUERYSUSPEND. Vista, 2008: http://msdn.microsoft.com/en-us/library/aa373208(VS.85).aspx There could be many valid reasons to prevent the computer from going to sleep. For example, watching a video, playing music, compiling a long-running build, downloading large files, etc. A: This article http://www.codeguru.com/cpp/w-p/system/messagehandling/article.php/c6907 provides a demo of how to do this from C++ (though the article is framed as if you want to do it from Java, and provides a Java wrapper). The actual code is in a zip file at http://www.codeguru.com/dbfiles/get_file/standbydetectdemo_src.zip?id=6907&lbl=STANDBYDETECTDEMO_SRC_ZIP&ds=20040406 and the C++ part of it is under com/ha/common/windows/standbydetector. Hopefully it will give you enough of a direction to get started.
Stop system entering 'standby'
How can I stop the host machine entering standby mode while my application is running? Is there any Win32 API call to do this?
[ "There are two APIs, depending on what version of Windows.\nXP,2000, 2003:\nhttp://msdn.microsoft.com/en-us/library/aa373247(VS.85).aspx\nRespond to PBT_APMQUERYSUSPEND.\nVista, 2008:\nhttp://msdn.microsoft.com/en-us/library/aa373208(VS.85).aspx\nThere could be many valid reasons to prevent the computer from going to sleep. For example, watching a video, playing music, compiling a long running build, downloading large files, etc. \n", "This article http://www.codeguru.com/cpp/w-p/system/messagehandling/article.php/c6907 provides a demo of how to do this from C++ (thought he article is framed as if you want to do it from Java, and provides a Java wrapper).\nThe actual code in in a zip file at http://www.codeguru.com/dbfiles/get_file/standbydetectdemo_src.zip?id=6907&lbl=STANDBYDETECTDEMO_SRC_ZIP&ds=20040406 and the C++ part of it is under com/ha/common/windows/standbydetector.\nHopefully it will give you enough of a direction to get started.\n" ]
[ 6, 3 ]
[]
[]
[ "c++", "standards", "standby", "winapi" ]
stackoverflow_0000056046_c++_standards_standby_winapi.txt
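Besides responding to the power-broadcast messages in the links above, there is also the SetThreadExecutionState API, which lets an application tell Windows the system is in use so it won't enter standby while the request is in effect. A hedged C# sketch:

using System;
using System.Runtime.InteropServices;

class StandbyBlocker
{
    [Flags]
    enum ExecutionState : uint
    {
        ES_SYSTEM_REQUIRED = 0x00000001,
        ES_DISPLAY_REQUIRED = 0x00000002,
        ES_CONTINUOUS = 0x80000000
    }

    [DllImport("kernel32.dll")]
    static extern ExecutionState SetThreadExecutionState(ExecutionState esFlags);

    // Call while long-running work is in progress; ES_CONTINUOUS keeps the
    // request in effect until it is explicitly cleared.
    public static void PreventStandby()
    {
        SetThreadExecutionState(ExecutionState.ES_CONTINUOUS | ExecutionState.ES_SYSTEM_REQUIRED);
    }

    public static void AllowStandby()
    {
        SetThreadExecutionState(ExecutionState.ES_CONTINUOUS); // clear the request
    }
}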
Q: What, if anything, is typically done in a repository's structure to reflect deployed units? This is a follow-up to the question: Should the folders in a solution match the namespace? The consensus on that question was a qualified "yes": that is, folders == namespaces, generally, but not slavishly (the way Java requires). Indeed, that's how I set up projects. But setting up source control has made me hesitate about my current folder structure. As with the .NET Framework, the namespaces in my project do not always match the deployed units one-to-one. Say you have lib -> lib.dll lib.data -> lib.dll lib.ecom -> lib.ecom.dll lib.ecom.paypal -> lib.ecom.paypal.dll In other words, child namespaces may or may not ship with the parent. So are the namespaces that deploy together grouped in any way? By the way, I don't use VS or NAnt — just good old-fashioned build batches. A: I usually don't really think about this and just do "what feels right", but usually I end up using names that fit the following strategy fairly well. I'll use the highest common namespace in the tree for the .dll name, just like you seem to be doing; with lib and lib.data this is lib, so the dll is called lib. With lib.ecom and lib.ecom.paypal this is lib.ecom, so the dll is called lib.ecom. In some cases you need to think about things a bit more. For example, we have the following namespaces (warning, simplistic example coming up) and we want to group them into two dlls myapp.view myapp.presentation myapp.model myapp.dataaccess we can't use myapp because then we would have two myapp assemblies. In this case I use the name of the namespace that is most appropriate. The first might be called myapp.presentation and the second myapp.model if those namespaces are the most important.
What, if anything, is typically done in a repository's structure to reflect deployed units?
This is a follow-up to the question: Should the folders in a solution match the namespace? The consensus on that question was a qualified "yes": that is, folders == namespaces, generally, but not slavishly (the way Java requires). Indeed, that's how I set up projects. But setting up source control has made me hesitate about my current folder structure. As with the .NET Framework, the namespaces in my project do not always match the deployed units one-to-one. Say you have lib -> lib.dll lib.data -> lib.dll lib.ecom -> lib.ecom.dll lib.ecom.paypal -> lib.ecom.paypal.dll In other words, child namespaces may or may not ship with the parent. So are the namespaces that deploy together grouped in any way? By the way, I don't use VS or NAnt — just good old-fashioned build batches.
[ "I usually don't really think about this and just do \"what feels right\" but usually I end up using names that fit the following strategy fairly well.\nI'll use the highest common namespace in the tree for the .dll name just like you seem to be doing;\nwith lib and lib.data this is lib so the dll is called lib. With lib.ecom and lib.ecom.paypal this is lib.ecom so the dll is called ecom.\nIn some cases you need to think about things a bit more for example we have the following namespaces (warning, simplistic example coming up) and we want to group them in two dll's\nmyapp.view\nmyapp.presentation\n\nmyapp.model\nmyapp.dataaccess\n\nwe can't use myapp because then we would have two myapp assemblies. In this case I use the name of the namespace that is most appropriate. The first might be called myapp.presentation and the second myapp.model if those namespaces are the most important. \n" ]
[ 1 ]
[]
[]
[ ".net", "directory", "dll", "version_control" ]
stackoverflow_0000055823_.net_directory_dll_version_control.txt
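Since the asker builds with plain batch files, the grouping described in the answer comes down to which source folders feed each csc call. A hypothetical sketch, assuming sibling folders named after the namespaces:

rem lib.dll carries both the lib and lib.data namespaces
csc /target:library /out:lib.dll /recurse:lib\*.cs /recurse:lib.data\*.cs

rem lib.ecom.dll ships on its own and references lib.dll
csc /target:library /out:lib.ecom.dll /reference:lib.dll /recurse:lib.ecom\*.cs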
Q: Best practices for signing .NET assemblies? I have a solution consisting of five projects, each of which compiles to a separate assembly. Right now I'm code-signing them, but I'm pretty sure I'm doing it wrong. What's the best practice here? Sign each with a different key; make sure the passwords are different Sign each with a different key; use the same password if you want Sign each with the same key Something else entirely Basically I'm not quite sure what "signing" does to them, or what the best practices are here, so a more general discussion would be good. All I really know is that FxCop yelled at me, and it was easy to fix by clicking the "Sign this assembly" checkbox and generating a .pfx file using Visual Studio (2008). A: If your only objective is to stop FxCop from yelling at you, then you have found the best practice. The best practice for signing your assemblies is something that is completely dependent on your objectives and needs. We would need more information like your intended deployment: For personal use For use on corporate network PCs as a client application Running on a web server Running in SQL Server Downloaded over the internet Sold on a CD in shrink wrap Uploaded straight into a cybernetic brain Etc. Generally you use code signing to verify that the Assemblies came from a specific trusted source and have not been modified. So each with the same key is fine. Now how that trust and identity is determined is another story. UPDATE: How this benefits your end users when you are deploying over the web is if you have obtained a software signing certificate from a certificate authority. Then when they download your assemblies they can verify they came from Domenic's Software Emporium, and they haven't been modified or corrupted along the way. You will also want to sign the installer when it is downloaded. This prevents the warning that some browsers display that it has been obtained from an unknown source. Note, you will pay for a software signing certificate. What you get is the certificate authority becoming the trusted 3rd party who verifies you are who you say you are. This works because of a web of trust that traces its way back to a root certificate that is installed in their operating system. There are a few certificate authorities to choose from, but you will want to make sure they are supported by the root certificates on the target operating system. A: The most obvious difference between signed and unsigned assemblies is in a ClickOnce application. If you don't sign it, then users will get a scary "Unknown Publisher" warning dialog the first time they run your application. If you've signed it with a certificate from a trusted authority, then they see a dialog that's less scary. As far as I know, signing with a certificate you generate yourself doesn't affect the "Unknown Publisher" warning. Instant SSL from Comodo has examples of the dialogs. There are some subtler differences. You have to sign an assembly before it can be installed in the global assembly cache (GAC) where it can be shared by multiple applications. Signing is integral to code access security (CAS), but I haven't found anyone who could get CAS working. I'm pretty sure that both the GAC and CAS work fine with certificates you generate yourself. A: It helps because the executable is expecting a strongly named assembly. It stops anyone maliciously substituting in another assembly for one of yours. Also the user might grant an assembly CAS permissions based on the strong name.
I don't think you should be distributing the .pfx file, you keep that safe for resigning the assembly. A: It is important to keep your PFX file secret as it contains the private key. If that key is made available to others then anyone can sign assemblies or programs that masquerade as you. To associate your name with your assemblies (in the eyes of Windows) you'll need to get a digital certificate (the portion of the PFX file containing your name) signed by a trusted authority. Actually you'll get a new certificate, but with the same information. You'll have to pay for this certificate (probably annually), but the certificate authority will effectively vouch for your existence (after you've faxed them copies of your passport or driver's permit and a domestic bill). A: Signing is used to uniquely identify an assembly. More details are in How to: Sign an Assembly (Visual Studio). In terms of best practice, it's fine to use the same key as long as the assemblies have different names.
Best practices for signing .NET assemblies?
I have a solution consisting of five projects, each of which compiles to separate assemblies. Right now I'm code-signing them, but I'm pretty sure I'm doing it wrong. What's the best practice here? Sign each with a different key; make sure the passwords are different Sign each with a different key; use the same password if you want Sign each with the same key Something else entirely Basically I'm not quite sure what "signing" does to them, or what the best practices are here, so a more general discussion would be good. All I really know is that FxCop yelled at me, and it was easy to fix by clicking the "Sign this assembly" checkbox and generating a .pfx file using Visual Studio (2008).
[ "If your only objective is to stop FxCop from yelling at you, then you have found the best practice.\nThe best practice for signing your assemblies is something that is completely dependent on your objectives and needs. We would need more information like your intended deployment:\n\nFor personal use\nFor use on corporate network PC's as a client application\nRunning on a web server\nRunning in SQL Server\nDownloaded over the internet\nSold on a CD in shrink wrap\nUploaded straight into a cybernetic brain\nEtc.\n\nGenerally you use code signing to verify that the Assemblies came from a specific trusted source and have not been modified. So each with the same key is fine. Now how that trust and identity is determined is another story.\nUPDATE: How this benefits your end users when you are deploying over the web is if you have obtained a software signing certificate from a certificate authority. Then when they download your assemblies they can verify they came from Domenic's Software Emporium, and they haven't been modified or corrupted along the way. You will also want to sign the installer when it is downloaded. This prevents the warning that some browsers display that it has been obtained from an unknown source.\nNote, you will pay for a software signing certificate. What you get is the certificate authority become the trusted 3rd party who verifies you are who you say you are. This works because of a web of trust that traces its way back to a root certificate that is installed in their operating system. There are a few certificate authorities to choose from, but you will want to make sure they are supported by the root certificates on the target operating system.\n", "The most obvious difference between signed and unsigned assemblies is in a ClickOnce application. If you don't sign it, then users will get a scary \"Unknown Publisher\" warning dialog the first time they run your application. If you've signed it with a certificate from a trusted authority, then they see a dialog that's less scary. As far as I know, signing with a certificate you generate yourself doesn't affect the \"Unknown Publisher\" warning. Instant SSL from Comodo has examples of the dialogs.\nThere are some subtler differences. You have to sign an assembly before it can be installed in the global assembly cache (GAC) where it can be shared by multiple applications. Signing is integral to code access security (CAS), but I haven't found anyone who could get CAS working. I'm pretty sure that both the GAC and CAS work fine with certificates you generate yourself.\n", "It helps because the executable is expecting a strongly named assembly. It stops anyone maliciously substituting in another assembly for one of yours. Also the user might grant an assembly CAS permissions based on the strong name. 
\nI don't think you should be distributing the .pfx file, you keep that safe for resigning the assembly.\n", "It is important to keep your PFX file secret as it contains the private key.\nIf that key is made available to others then anyone can sign assemblies or programs that masquerade as you.\nTo associate your name with your assemblies (in the eyes of Windows) you'll need to get a digital certificate (the portion of the PFX file containing your name) signed by a trusted authority.\nActually you'll get a new certificate, but with the same information.\nYou'll have to pay for this certificate (probably annually), but the certificate authority will effectively vouch for your existence (after you've faxed them copies of your passport or driver's permit and a domestic bill).\n", "Signing is used to uniquely identify an assembly. More details are in How to: Sign an Assembly (Visual Studio).\nIn terms of best practice, it's fine to use the same key as long as the assemblies have different names.\n" ]
[ 44, 8, 3, 3, 2 ]
[]
[]
[ ".net", "assemblies", "signing" ]
stackoverflow_0000035373_.net_assemblies_signing.txt
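A minimal strong-naming sketch to complement the signing thread above. The sn.exe tool ships with the .NET SDK; the key file and assembly names below are placeholders, not anything taken from the thread:

    REM generate one key pair and share it across all five projects
    sn -k CompanyKey.snk
    REM after building, verify that an assembly was strong-named correctly
    sn -v bin\Release\ProjectA.dll

In Visual Studio 2005/2008 the first step is equivalent to pointing every project's "Sign the assembly" setting at the same .snk (or password-protected .pfx) file, which matches the "same key for all assemblies" advice given in the answers.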
Q: Why does windows XP minimize my swing full screen window on my second screen? In the application I'm developing (in Java/swing), I have to show a full screen window on the second screen of the user. I did this using a code similar to the one you'll find below... But, as soon as I click in a window opened by windows explorer, or as soon as I open windows explorer (I'm using windows XP), the full screen window is minimized... Do you know any way or workaround to fix this problem, or is there something important I did not understand with full screen windows? Thanks for the help, import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JWindow; import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import java.awt.Window; import javax.swing.JButton; import javax.swing.JToggleButton; import java.awt.Rectangle; import java.awt.GridBagLayout; import javax.swing.JLabel; public class FullScreenTest { private JFrame jFrame = null; // @jve:decl-index=0:visual-constraint="94,35" private JPanel jContentPane = null; private JToggleButton jToggleButton = null; private JPanel jFSPanel = null; // @jve:decl-index=0:visual-constraint="392,37" private JLabel jLabel = null; private Window window; /** * This method initializes jFrame * * @return javax.swing.JFrame */ private JFrame getJFrame() { if (jFrame == null) { jFrame = new JFrame(); jFrame.setSize(new Dimension(474, 105)); jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); jFrame.setContentPane(getJContentPane()); } return jFrame; } /** * This method initializes jContentPane * * @return javax.swing.JPanel */ private JPanel getJContentPane() { if (jContentPane == null) { jContentPane = new JPanel(); jContentPane.setLayout(null); jContentPane.add(getJToggleButton(), null); } return jContentPane; } /** * This method initializes jToggleButton * * @return javax.swing.JToggleButton */ private JToggleButton getJToggleButton() { if (jToggleButton == null) { jToggleButton = new JToggleButton(); jToggleButton.setBounds(new Rectangle(50, 23, 360, 28)); jToggleButton.setText("Show Full Screen Window on 2nd screen"); jToggleButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent e) { showFullScreenWindow(jToggleButton.isSelected()); } }); } return jToggleButton; } protected void showFullScreenWindow(boolean b) { if(window==null){ window = initFullScreenWindow(); } window.setVisible(b); } private Window initFullScreenWindow() { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice[] gds = ge.getScreenDevices(); GraphicsDevice gd = gds[1]; JWindow window = new JWindow(gd.getDefaultConfiguration()); window.setContentPane(getJFSPanel()); gd.setFullScreenWindow(window); return window; } /** * This method initializes jFSPanel * * @return javax.swing.JPanel */ private JPanel getJFSPanel() { if (jFSPanel == null) { jLabel = new JLabel(); jLabel.setBounds(new Rectangle(18, 19, 500, 66)); jLabel.setText("Hello ! Now, just open windows explorer and see what happens..."); jFSPanel = new JPanel(); jFSPanel.setLayout(null); jFSPanel.setSize(new Dimension(500, 107)); jFSPanel.add(jLabel, null); } return jFSPanel; } /** * @param args */ public static void main(String[] args) { FullScreenTest me = new FullScreenTest(); me.getJFrame().setVisible(true); } } A: Usually when an application is in "full screen" mode it will take over the entire desktop. For a user to get to another window they would have to alt-tab to it. At that point windows would minimize the full screen app so that the other application could come to the front. This sounds like it may be a bug (undocumented feature...) in windows. It should probably not be doing this for a dual screen setup. One option to fix this is rather than setting it to be "full screen" just make the window the same size as the screen with location (0,0). You can get screen information from the GraphicsConfigurations on the GraphicsDevice. A: The following code works (thank you John). With no full screen and a large "always on top" window. But I still don't know why windows caused this strange behavior... private Window initFullScreenWindow() { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice[] gds = ge.getScreenDevices(); GraphicsDevice gd = gds[1]; JWindow window = new JWindow(gd.getDefaultConfiguration()); window.setContentPane(getJFSPanel()); window.setLocation(1280, 0); window.setSize(gd.getDisplayMode().getWidth(), gd.getDisplayMode().getHeight()); window.setAlwaysOnTop(true); //gd.setFullScreenWindow(window); return window; }
Why does windows XP minimize my swing full screen window on my second screen?
In the application I'm developing (in Java/swing), I have to show a full screen window on the second screen of the user. I did this using a code similar to the one you'll find below... But, as soon as I click in a window opened by windows explorer, or as soon as I open windows explorer (I'm using windows XP), the full screen window is minimized... Do you know any way or workaround to fix this problem, or is there something important I did not understand with full screen windows? Thanks for the help, import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JWindow; import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import java.awt.Window; import javax.swing.JButton; import javax.swing.JToggleButton; import java.awt.Rectangle; import java.awt.GridBagLayout; import javax.swing.JLabel; public class FullScreenTest { private JFrame jFrame = null; // @jve:decl-index=0:visual-constraint="94,35" private JPanel jContentPane = null; private JToggleButton jToggleButton = null; private JPanel jFSPanel = null; // @jve:decl-index=0:visual-constraint="392,37" private JLabel jLabel = null; private Window window; /** * This method initializes jFrame * * @return javax.swing.JFrame */ private JFrame getJFrame() { if (jFrame == null) { jFrame = new JFrame(); jFrame.setSize(new Dimension(474, 105)); jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); jFrame.setContentPane(getJContentPane()); } return jFrame; } /** * This method initializes jContentPane * * @return javax.swing.JPanel */ private JPanel getJContentPane() { if (jContentPane == null) { jContentPane = new JPanel(); jContentPane.setLayout(null); jContentPane.add(getJToggleButton(), null); } return jContentPane; } /** * This method initializes jToggleButton * * @return javax.swing.JToggleButton */ private JToggleButton getJToggleButton() { if (jToggleButton == null) { jToggleButton = new JToggleButton(); jToggleButton.setBounds(new Rectangle(50, 23, 360, 28)); jToggleButton.setText("Show Full Screen Window on 2nd screen"); jToggleButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent e) { showFullScreenWindow(jToggleButton.isSelected()); } }); } return jToggleButton; } protected void showFullScreenWindow(boolean b) { if(window==null){ window = initFullScreenWindow(); } window.setVisible(b); } private Window initFullScreenWindow() { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice[] gds = ge.getScreenDevices(); GraphicsDevice gd = gds[1]; JWindow window = new JWindow(gd.getDefaultConfiguration()); window.setContentPane(getJFSPanel()); gd.setFullScreenWindow(window); return window; } /** * This method initializes jFSPanel * * @return javax.swing.JPanel */ private JPanel getJFSPanel() { if (jFSPanel == null) { jLabel = new JLabel(); jLabel.setBounds(new Rectangle(18, 19, 500, 66)); jLabel.setText("Hello ! Now, just open windows explorer and see what happens..."); jFSPanel = new JPanel(); jFSPanel.setLayout(null); jFSPanel.setSize(new Dimension(500, 107)); jFSPanel.add(jLabel, null); } return jFSPanel; } /** * @param args */ public static void main(String[] args) { FullScreenTest me = new FullScreenTest(); me.getJFrame().setVisible(true); } }
[ "Usually when an application is in \"full screen\" mode it will take over the entire desktop. For a user to get to another window they would have to alt-tab to it. At that point windows would minimize the full screen app so that the other application could come to the front. \nThis sounds like it may be a bug (undocumented feature...) in windows. It should probably not be doing this for a dual screen setup. \nOne option to fix this is rather than setting it to be \"full screen\" just make the window the same size as the screen with location (0,0). You can get screen information from the GraphicsConfigurations on the GraphicsDevice. \n", "The following code works (thank you John). With no full screen and a large \"always on top\" window.\nBut I still don't know why windows caused this stranged behavior...\nprivate Window initFullScreenWindow() {\n GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();\n GraphicsDevice[] gds = ge.getScreenDevices();\n GraphicsDevice gd = gds[1];\n JWindow window = new JWindow(gd.getDefaultConfiguration());\n window.setContentPane(getJFSPanel());\n window.setLocation(1280, 0);\n window.setSize(gd.getDisplayMode().getWidth(), gd.getDisplayMode().getHeight());\n window.setAlwaysOnTop(true);\n //gd.setFullScreenWindow(window);\n return window;\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "java", "swing", "windows" ]
stackoverflow_0000053820_java_swing_windows.txt
Q: TextBox.TextChanged & ICommandSource I am following the M-V-VM pattern for my WPF UI. I would like to hook up a command to the TextChanged event of a TextBox to a command that is in my ViewModel class. The only way I can conceive of completing this task is to inherit from the TextBox control, and implement ICommandSource. I can then instruct the command to be fired from the TextChanged event. This seems to be too much work for something which appears to be so simple. Is there an easier way (than subclassing the TextBox and implementing ICommandSource) to hook up the TextChanged event to my ViewModel class? A: First off, you've surely considered two-way data binding to your viewmodel, with an UpdateSourceTrigger of PropertyChanged? That way the property setter of the property you bind to will be called every time the text is changed? If that's not enough, then I would tackle this problem using Attached Behaviours. On Julian Dominguez’s Blog you'll find an article about how to do something very similar in Silverlight, which should be easily adaptable to WPF. Basically, in a static class (called, say TextBoxBehaviours) you define an Attached Property called (perhaps) TextChangedCommand of type ICommand. Hook up an OnPropertyChanged handler for that property, and within the handler, check that the property is being set on a TextBox; if it is, add a handler to the TextChanged event on the textbox that will call the command specified in the property. Then, assuming your viewmodel has been assigned to the DataContext of your View, you would use it like: <TextBox x:Name="MyTextBox" TextBoxBehaviours.TextChangedCommand="{Binding ViewModelTextChangedCommand}" /> A: Using the event binding and command method might not be the right thing to use. What exactly will this command do? You might want to consider using a Databinding to a string field in your VM. This way you can make a call to a command or function from there rather than having the UI care at all. <TextBox Text="{Binding WorldName}"/> .... public string WorldName { get { return WorldData.Name; } set { WorldData.Name = value; OnPropertyChanged("WorldName"); // CallYourCustomFunctionHere(); } } A: Can you not just handle the TextChanged event and execute the command from there? private void _textBox_TextChanged(object sender, EventArgs e) { MyCommand.Execute(null); } The alternative, as you say, is to create a TextBox that acts as a command source, but that does seem like overkill unless it's something you're planning on sharing and leveraging in many places.
TextBox.TextChanged & ICommandSource
I am following the M-V-VM pattern for my WPF UI. I would like to hook up a command to the TextChanged event of a TextBox to a command that is in my ViewModel class. The only way I can conceive of completing this task is to inherit from the TextBox control, and implement ICommandSource. I can then instruct the command to be fired from the TextChanged event. This seems to be too much work for something which appears to be so simple. Is there an easier way (than subclassing the TextBox and implementing ICommandSource) to hook up the TextChanged event to my ViewModel class?
[ "First off, you've surely considered two-way data binding to your viewmodel, with an UpdateSourceTrigger of PropertyChanged? That way the property setter of the property you bind to will be called every time the text is changed?\nIf that's not enough, then I would tackle this problem using Attached Behaviours. On Julian Dominguez’s Blog you'll find an article about how to do something very similar in Silverlight, which should be easily adaptable to WPF.\nBasically, in a static class (called, say TextBoxBehaviours) you define an Attached Property called (perhaps) TextChangedCommand of type ICommand. Hook up an OnPropertyChanged handler for that property, and within the handler, check that the property is being set on a TextBox; if it is, add a handler to the TextChanged event on the textbox that will call the command specified in the property.\nThen, assuming your viewmodel has been assigned to the DataContext of your View, you would use it like:\n<TextBox\n x:Name=\"MyTextBox\"\n TextBoxBehaviours.TextChangedCommand=\"{Binding ViewModelTextChangedCommand}\" />\n\n", "Using the event binding and command method might not be the right thing to use.\nWhat exactly will this command do?\nYou might want to consider using a Databinding to a string field in your VM. This way you can make a call to a command or function from there rather than having the UI care at all.\n<TextBox Text=\"{Binding WorldName}\"/>\n....\npublic string WorldName\n{\n get\n {\n return WorldData.Name;\n }\n set\n {\n WorldData.Name = value;\n OnPropertyChanged(\"WorldName\");\n // CallYourCustomFunctionHere();\n }\n}\n\n", "Can you not just handle the TextChanged event and execute the command from there?\nprivate void _textBox_TextChanged(object sender, EventArgs e)\n{\n MyCommand.Execute(null);\n}\n\nThe alternative, as you say, is to create a TextBox that acts as a command source, but that does seem like overkill unless it's something you're planning on sharing and leveraging in many places.\n" ]
[ 18, 8, 2 ]
[]
[]
[ "command", "event_binding", "mvvm", "wpf" ]
stackoverflow_0000055855_command_event_binding_mvvm_wpf.txt
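A concrete sketch of the attached-behaviour class the first answer in the thread above outlines, since only its XAML usage is shown there. The class and property names follow the answer's example; the wiring details (and passing the current text as the command parameter) are one possible implementation, not the answer author's actual code:

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;

    public static class TextBoxBehaviours
    {
        public static readonly DependencyProperty TextChangedCommandProperty =
            DependencyProperty.RegisterAttached(
                "TextChangedCommand", typeof(ICommand), typeof(TextBoxBehaviours),
                new PropertyMetadata(null, OnTextChangedCommandChanged));

        public static ICommand GetTextChangedCommand(DependencyObject obj)
        {
            return (ICommand)obj.GetValue(TextChangedCommandProperty);
        }

        public static void SetTextChangedCommand(DependencyObject obj, ICommand value)
        {
            obj.SetValue(TextChangedCommandProperty, value);
        }

        private static void OnTextChangedCommandChanged(
            DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            TextBox textBox = d as TextBox;
            if (textBox == null)
                return; // this behaviour only applies to TextBox

            textBox.TextChanged += delegate
            {
                ICommand command = GetTextChangedCommand(textBox);
                if (command != null && command.CanExecute(textBox.Text))
                    command.Execute(textBox.Text); // the current text becomes the parameter
            };
        }
    }

Note that the XAML in the answer would also need an xmlns mapping for the class (e.g. local:TextBoxBehaviours.TextChangedCommand), and that for simplicity this sketch never unhooks the TextChanged handler if the command is later cleared.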
Q: Is there a MySQLAdmin or SQL Server Management Studio equivalent for SQLite databases on Windows? I need some software to explore and modify some SQLite databases. Does anything similar to SQL Server Management Studio or MySQLAdmin exist for it? A: As a Firefox plugin (aimed mainly at gears, but should work) As a (sucky) web based app And a big list of management tools A: I also discovered some SQLite software for Visual Studio at http://sqlite.phxsoftware.com/ which allows you to use the Visual Studio Server Explorer to create connections to SQLite databases. A: SQLiteManager (the Firefox Plugin mentioned by Vinko) also works well as a standalone app, with XULRunner.
Is there a MySQLAdmin or SQL Server Management Studio equivalent for SQLite databases on Windows?
I need some software to explore and modify some SQLite databases. Does anything similar to SQL Server Management Studio or MySQLAdmin exist for it?
[ "As a Firefox plugin (aimed mainly at gears, but should work)\nAs a (sucky) web based app\nAnd a big list of management tools\n", "I also discovered some SQLite software for Visual Studio at http://sqlite.phxsoftware.com/ which allows you to use the Visual Studio Server Explorer to create connections to SQLite databases.\n", "SQLiteManager (the Firefox Plugin mentioned by Vinko) also works well as a standalone app, with XULRunner.\n" ]
[ 10, 0, 0 ]
[]
[]
[ "sql", "sqlite", "windows" ]
stackoverflow_0000055968_sql_sqlite_windows.txt
Q: Test Cases AND assertion statements The code in this question made me think assert(value>0); //Precondition if (value>0) { //Doit } I never write the if-statement. Asserting is enough/all you can do. "Crash early, crash often" CodeComplete states: The assert-statement makes the application Correct The if-test makes the application Robust I don't think you've made an application more robust by correcting invalid input values, or skipping code: assert(value >= 0 ); //Precondition assert(value <= 90); //Precondition if(value < 0) //Just in case value = 0; if (value > 90) //Just in case value = 90; //Doit These corrections are based on assumptions you made about the outside world. Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function. To paraphrase CodeComplete: "Real-world programs become too messy when we don't rely solely on assertions." Question: Am I wrong, stubborn, stupid, too non-defensive... A: The problem with trusting just Asserts, is that they may be turned off in a production environment. To quote the wikipedia article: Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are often enabled during development and disabled during final testing and on release to the customer. Not checking assertions avoiding the cost of evaluating the assertions while, assuming the assertions are free of side effects, still producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can mean that a program that would have aborted will continue to run. This is sometimes preferable. Wikipedia So if the correctness of your code relies on the Asserts to be there you may run into serious problems. Sure, if the code worked during testing it should work during production... Now enter the second guy that works on the code and is just going to fix a small problem... A: Use assertions for validating input you control: private methods and such. Use if statements for validating input you don't control: public interfaces designed for consumption by the user, user input testing etc. Test your application with assertions built in. Then deploy without the assertions. A: In some cases, asserts are disabled when building for release. You may not have control over this (otherwise, you could build with asserts on), so it might be a good idea to do it like this. The problem with "correcting" the input values is that the caller will not get what they expect, and this can lead to problems or even crashes in wholly different parts of the program, making debugging a nightmare. I usually throw an exception in the if-statement to take over the role of the assert in case they are disabled assert(value>0); if(value<=0) throw new ArgumentOutOfRangeException("value"); //do stuff A: I would disagree with this statement: Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function. Caller might think that he knows that input value is correct. Only the method author knows how it is supposed to work. Programmer's best goal is to make the client fall into the "pit of success". You should decide what behavior is more appropriate in a given case. In some cases incorrect input values can be forgivable, in others you should throw an exception\return an error. As for Asserts, I'd repeat other commenters, assert is a debug time check for code author, not code clients. A: If I remember correctly from CS-class: Preconditions define on what conditions the output of your function is defined. If you make your function handle error conditions your function is defined for those conditions and you don't need the assert statement. So I agree. Usually you don't need both. As Rik commented this can cause problems if you remove asserts in released code. Usually I don't do that except in performance-critical places. A: Don't forget that most languages allow you to turn off assertions... Personally, if I was prepared to write if tests to protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place. If, on the other hand you don't write logic to handle all cases (possibly because it's not sensible to try and continue with invalid input) then I would be using the assertion statement and going for the "fail early" approach. A: For internal functions, ones that only you will use, use asserts only. The asserts will help catch bugs during your testing, but won't hamper performance in production. Check inputs that originate externally with if-conditions. By externally, that's anywhere outside the code that you/your team control and test. Optionally, you can have both. This would be for external facing functions where integration testing is going to be done before production. A: I should have stated I was aware of the fact that asserts (here) disappear in production code. If the if-statement actually corrects invalid input data in production code, this means the assert never went off during testing on debug code, this means you wrote code that you never executed. For me it's an OR situation: (quote Andrew) "protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place." -> write an if-test. (quote aku) "incorrect input values can be forgivable" -> write an assert. I can't stand both...
Test Cases AND assertion statements
The code in this question made me think assert(value>0); //Precondition if (value>0) { //Doit } I never write the if-statement. Asserting is enough/all you can do. "Crash early, crash often" CodeComplete states: The assert-statement makes the application Correct The if-test makes the application Robust I don't think you've made an application more robust by correcting invalid input values, or skipping code: assert(value >= 0 ); //Precondition assert(value <= 90); //Precondition if(value < 0) //Just in case value = 0; if (value > 90) //Just in case value = 90; //Doit These corrections are based on assumptions you made about the outside world. Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function. To paraphrase CodeComplete: "Real-world programs become too messy when we don't rely solely on assertions." Question: Am I wrong, stubborn, stupid, too non-defensive...
[ "The problem with trusting just Asserts, is that they may be turned off in a production environment. To quote the wikipedia article:\n\nMost languages allow assertions to be\n enabled or disabled globally, and\n sometimes independently. Assertions\n are often enabled during development\n and disabled during final testing and\n on release to the customer. Not\n checking assertions avoiding the cost\n of evaluating the assertions while,\n assuming the assertions are free of\n side effects, still producing the same\n result under normal conditions. Under\n abnormal conditions, disabling\n assertion checking can mean that a\n program that would have aborted will\n continue to run. This is sometimes\n preferable.\n Wikipedia\n\nSo if the correctness of your code relies on the Asserts to be there you may run into serious problems. Sure, if the code worked during testing it should work during production... Now enter the second guy that works on the code and is just going to fix a small problem...\n", "Use assertions for validating input you control: private methods and such.\nUse if statements for validating input you don't control: public interfaces designed for consumption by the user, user input testing etc.\nTest you application with assertions built in. Then deploy without the assertions.\n", "I some cases, asserts are disabled when building for release. You may not have control over this (otherwise, you could build with asserts on), so it might be a good idea to do it like this.\nThe problem with \"correcting\" the input values is that the caller will not get what they expect, and this can lead to problems or even crashes in wholly different parts of the program, making debugging a nightmare.\nI usually throw an exception in the if-statement to take over the role of the assert in case they are disabled \nassert(value>0);\nif(value<=0) throw new ArgumentOutOfRangeException(\"value\");\n//do stuff\n\n", "I would disagree with this statement:\n\nOnly the caller knows what \"a valid\n input value\" is for your function, and\n he must check its validity before he\n calls your function.\n\nCaller might think that he know that input value is correct. Only method author knows how it suppose to work. Programmer's best goal is to make client to fall into \"pit of success\". You should decide what behavior is more appropriate in given case. In some cases incorrect input values can be forgivable, in other you should throw exception\\return error.\nAs for Asserts, I'd repeat other commenters, assert is a debug time check for code author, not code clients.\n", "If I remember correctly from CS-class\nPreconditions define on what conditions the output of your function is defined. If you make your function handle errorconditions your function is defined for those condition and you don't need the assert statement.\nSo I agree. Usually you don't need both.\nAs Rik commented this can cause problems if you remove asserts in released code. Usually I don't do that except in performance-critical places.\n", "Don't forget that most languages allow you to turn off assertions... 
Personally, if I was prepared to write if tests to protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place.\nIf, on the other hand you don't write logic to handle all cases (possibly because it's not sensible to try and continue with invalid input) then I would be using the assertion statement and going for the \"fail early\" approach.\n", "For internal functions, ones that only you will use, use asserts only. The asserts will help catch bugs during your testing, but won't hamper performance in production.\nCheck inputs that originate externally with if-conditions. By externally, that's anywhere outside the code that you/your team control and test.\nOptionally, you can have both. This would be for external facing functions where integration testing is going to be done before production.\n", "I should have stated I was aware of the fact that asserts (here) dissappear in production code.\nIf the if-statement actually corrects invalid input data in production code, this means the assert never went off during testing on debug code, this means you wrote code that you never executed.\nFor me it's an OR situation:\n(quote Andrew) \"protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place.\" -> write an if-test.\n(quote aku) \"incorrect input values can be forgivable\" -> write an assert.\nI can't stand both...\n" ]
[ 10, 4, 3, 2, 1, 1, 0, 0 ]
[ "A problem with assertions is that they can (and usually will) be compiled out of the code, so you need to add both walls in case one gets thrown away by the compiler.\n" ]
[ -1 ]
[ "defensive_programming" ]
stackoverflow_0000056168_defensive_programming.txt
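To make the thread's Debug.Assert-plus-guard pattern concrete, here is a minimal C# sketch; the class name and the 0 to 90 range are hypothetical, chosen to mirror the question's example:

    using System;
    using System.Diagnostics;

    public static class AngleMath
    {
        public static double Doit(double value)
        {
            // Development-time check: Debug.Assert calls are compiled away
            // in release builds (they are only emitted when DEBUG is defined).
            Debug.Assert(value >= 0 && value <= 90, "value out of range");

            // Production check: survives the release build, so callers get a
            // clear failure instead of silently "corrected" input.
            if (value < 0 || value > 90)
                throw new ArgumentOutOfRangeException("value");

            return Math.Tan(value * Math.PI / 180.0);
        }
    }

Because Debug.Assert is marked [Conditional("DEBUG")], the development-time check costs nothing in release builds, while the explicit throw keeps the contract enforced in production.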
Q: Bespoke SQL Server 'encoding' sproc - is there a neater way of doing this? I'm just wondering if there's a better way of doing this in SQL Server 2005. Effectively, I'm taking an originator_id (a number between 0 and 99) and a 'next_element' (it's really just a sequential counter between 1 and 999,999). We are trying to create a 6-character 'code' from them. The originator_id is multiplied up by a million, and then the counter added in, giving us a number between 0 and 99,999,999. Then we convert this into a 'base 32' string - a fake base 32, where we're really just using 0-9 and A-Z but with a few of the more confusing alphanums removed for clarity (I, O, S, Z). To do this, we just divide the number up by powers of 32, at each stage using the result we get for each power as an index for a character from our array of selected characters. Thus, an originator ID of 61 and NextCodeElement of 9 gives a code of '1T5JA9' (61 * 1,000,000) + 9 = 61,000,009 61,000,009 div (32^5 = 33,554,432) = 1 = '1' 27,445,577 div (32^4 = 1,048,576) = 26 = 'T' 182,601 div (32^3 = 32,768) = 5 = '5' 18,761 div (32^2 = 1,024) = 18 = 'J' 329 div (32^1 = 32) = 10 = 'A' 9 div (32^0 = 1) = 9 = '9' so my code is 1T5JA9 Previously I've had this algorithm working (in Delphi) but now I really need to be able to recreate it in SQL Server 2005. Obviously I don't quite have the same functions to hand that I have in Delphi, but this is my take on the routine. It works, and I can generate codes (or reconstruct codes back into their components) just fine. But it looks a bit long-winded, and I'm not sure that the trick of selecting the result of a division into an int (ie casting it, really) is necessarily 'right' - is there a better SQLS approach to this kind of thing? CREATE procedure dummy_RP_CREATE_CODE @NextCodeElement int, @OriginatorID int, @code varchar(6) output as begin declare @raw_num int; declare @bcelems char(32); declare @chr int; select @bcelems='0123456789ABCDEFGHJKLMNPQRTUVWXY'; select @code=''; -- add in the originator_id, scaled into place select @raw_num = (@OriginatorID * 1000000) + @NextCodeElement; -- now to convert this to a 6-char code -- 32^5 select @chr = @raw_num / 33554432; select @raw_num = @raw_num - (@chr * 33554432); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^4 select @chr = @raw_num / 1048576; select @raw_num = @raw_num - (@chr * 1048576); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^3 select @chr = @raw_num / 32768; select @raw_num = @raw_num - (@chr * 32768); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^2 select @chr = @raw_num / 1024; select @raw_num = @raw_num - (@chr * 1024); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^1 select @chr = @raw_num / 32; select @raw_num = @raw_num - (@chr * 32); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^0 select @code = @code + SUBSTRING(@bcelems, 1 + @raw_num, 1); -- that's it! end; I'm not desperately concerned about optimisation unless this method is actually an order of magnitude (or so) worse than any alternative. The code only ever gets executed in small bursts, maybe generating 20 or 30 codes every 10 minutes or so. Because of the large range of numbers I don't want to pre-calculate the codes into a huge table (only small pockets of the overall code range will be used in any given installation). However, I'm sure there's probably a tidier way of achieving the same result - particularly those divisions and subtractions. Any constructive criticism, observations or suggestions would be most welcome. A: Seeing as it's SQL Server 2005, any reason not to use a CLR stored procedure? You could use your CLR language of choice then and it'd probably be a relatively direct port of your existing Delphi code. A: Although it can clearly be done in SQL Server 2005, I think that it's of sufficient 'non-database-ness' that some kind of precompiled, high-level language routine makes sense. I've written DLLs for Interbase/Firebird, and T-SQL sprocs for SQL Server, but never a CLR routine. It will be an interesting exercise!
Bespoke SQL Server 'encoding' sproc - is there a neater way of doing this?
I'm just wondering if there's a better way of doing this in SQL Server 2005. Effectively, I'm taking an originator_id (a number between 0 and 99) and a 'next_element' (it's really just a sequential counter between 1 and 999,999). We are trying to create a 6-character 'code' from them. The originator_id is multiplied up by a million, and then the counter added in, giving us a number between 0 and 99,999,999. Then we convert this into a 'base 32' string - a fake base 32, where we're really just using 0-9 and A-Z but with a few of the more confusing alphanums removed for clarity (I, O, S, Z). To do this, we just divide the number up by powers of 32, at each stage using the result we get for each power as an index for a character from our array of selected characters. Thus, an originator ID of 61 and NextCodeElement of 9 gives a code of '1T5JA9' (61 * 1,000,000) + 9 = 61,000,009 61,000,009 div (32^5 = 33,554,432) = 1 = '1' 27,445,577 div (32^4 = 1,048,576) = 26 = 'T' 182,601 div (32^3 = 32,768) = 5 = '5' 18,761 div (32^2 = 1,024) = 18 = 'J' 329 div (32^1 = 32) = 10 = 'A' 9 div (32^0 = 1) = 9 = '9' so my code is 1T5JA9 Previously I've had this algorithm working (in Delphi) but now I really need to be able to recreate it in SQL Server 2005. Obviously I don't quite have the same functions to hand that I have in Delphi, but this is my take on the routine. It works, and I can generate codes (or reconstruct codes back into their components) just fine. But it looks a bit long-winded, and I'm not sure that the trick of selecting the result of a division into an int (ie casting it, really) is necessarily 'right' - is there a better SQLS approach to this kind of thing? CREATE procedure dummy_RP_CREATE_CODE @NextCodeElement int, @OriginatorID int, @code varchar(6) output as begin declare @raw_num int; declare @bcelems char(32); declare @chr int; select @bcelems='0123456789ABCDEFGHJKLMNPQRTUVWXY'; select @code=''; -- add in the originator_id, scaled into place select @raw_num = (@OriginatorID * 1000000) + @NextCodeElement; -- now to convert this to a 6-char code -- 32^5 select @chr = @raw_num / 33554432; select @raw_num = @raw_num - (@chr * 33554432); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^4 select @chr = @raw_num / 1048576; select @raw_num = @raw_num - (@chr * 1048576); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^3 select @chr = @raw_num / 32768; select @raw_num = @raw_num - (@chr * 32768); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^2 select @chr = @raw_num / 1024; select @raw_num = @raw_num - (@chr * 1024); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^1 select @chr = @raw_num / 32; select @raw_num = @raw_num - (@chr * 32); select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1); -- 32^0 select @code = @code + SUBSTRING(@bcelems, 1 + @raw_num, 1); -- that's it! end; I'm not desperately concerned about optimisation unless this method is actually an order of magnitude (or so) worse than any alternative. The code only ever gets executed in small bursts, maybe generating 20 or 30 codes every 10 minutes or so. Because of the large range of numbers I don't want to pre-calculate the codes into a huge table (only small pockets of the overall code range will be used in any given installation). However, I'm sure there's probably a tidier way of achieving the same result - particularly those divisions and subtractions. Any constructive criticism, observations or suggestions would be most welcome.
[ "Seeing as it's SQL Server 2005, any reason not to use a CLR stored procedure? You could use your CLR language of choice then and it'd probably be a relatively direct port of your existing Delphi code.\n", "Although it can clearly be done in SQL Server 2005, I think that it's of sufficient 'non-database-ness' that some kind of precompiled, high-level language routine makese sense.\nI've written DLLs for Interbase/Firebird, and T-SQL sprocs for SQL Server, but never a CLR routine. It will be an interesting exercise!\n" ]
[ 3, 0 ]
[]
[]
[ "sql_server", "tsql" ]
stackoverflow_0000055340_sql_server_tsql.txt
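Since the accepted answer above suggests a CLR routine without showing one, here is a minimal C# sketch of the encoder as a SQL CLR scalar function. The class and function names are placeholders and the deployment steps (CREATE ASSEMBLY / CREATE FUNCTION) are omitted; the alphabet string is taken verbatim from the question:

    using Microsoft.SqlServer.Server;

    public class CodeFunctions
    {
        private const string Alphabet = "0123456789ABCDEFGHJKLMNPQRTUVWXY";

        [SqlFunction]
        public static string CreateCode(int originatorId, int nextCodeElement)
        {
            int raw = originatorId * 1000000 + nextCodeElement;
            char[] code = new char[6];
            for (int i = 5; i >= 0; i--)   // fill from the least significant digit
            {
                code[i] = Alphabet[raw % 32];
                raw /= 32;
            }
            return new string(code);
        }
    }

The modulo loop also removes the six repeated divide-and-subtract blocks, so the same restructuring would tidy the routine even if it stays in T-SQL; for the question's example, CreateCode(61, 9) yields '1T5JA9'.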
Q: Can you view an aggregate changeset in git? If so, how? In Subversion you can specify a range of versions to get an aggregate view of a series of commits. Is this possible in git? If so, how? A: You can pass ranges of revisions using the .. (and ...) operator. git diff 6d0918...HEAD instead of giving (abbreviated) SHA hashes, you can also denote revisions relative to branches and tags: git diff HEAD~4 # shows the diff from the last four commits to the current one Have a look at the chapter "Specifying Revisions" in the git-rev-parse manual page
Can you view an aggregate changeset in git? If so, how?
In Subversion you can specify a range of versions to get an aggregate view of a series of commits. Is this possible in git? If so, how?
[ "You can pass ranges of revisions using the .. (and ...) operator.\ngit diff 6d0918...HEAD\n\ninstead of giving (abbreviated) SHA hashes, you can also denote revisions relative to branches and tags:\ngit diff HEAD~4 # shows the diff from the last four commits to the current one\n\nHave a look at the chapter \"Specifying Revisions\" in the git-rev-parse manual page\n" ]
[ 9 ]
[]
[]
[ "diff", "git" ]
stackoverflow_0000056296_diff_git.txt
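For completeness alongside the diff examples in the git thread above: git log accepts the same range syntax and shows the series commit by commit, which is arguably closer to Subversion's aggregate view (the tag names below are placeholders):

    git log --stat v1.0..v1.2    # one entry per commit in the range, with per-file change summaries
    git log -p v1.0..v1.2        # the same series shown as patches
    git diff v1.0 v1.2           # a single combined diff between the two endpoints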
Q: Merging two Collections I got a Function that returns a Collection<string>, and that calls itself recursively to eventually return one big Collection<string>. Now, I just wonder what the best approach is to merge the lists? Collection.CopyTo() only copies to string[], and using a foreach() loop feels inefficient. However, since I also want to filter out duplicates, I feel like I'll end up with a foreach that calls Contains() on the Collection. I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a Collection, it can be pretty much any suitable data type. Only exclusion, I'm bound to Visual Studio 2005 and .net 3.0, so no LINQ. Edit: To clarify: The Function takes a user out of Active Directory, looks at the Direct Reports of the user, and then recursively looks at the direct reports of every user. So the end result is a List of all users that are in the "command chain" of a given user. Since this is executed quite often and at the moment takes 20 seconds for some users, I'm looking for ways to improve it. Caching the result for 24 hours is also on my list btw., but I want to see how to improve it before applying caching. A: If you're using List<> you can use .AddRange to add one list to the other list. Or you can use yield return to combine lists on the fly like this: public IEnumerable<string> Combine(IEnumerable<string> col1, IEnumerable<string> col2) { foreach(string item in col1) yield return item; foreach(string item in col2) yield return item; } A: I think HashSet<T> is a great help. The HashSet<T> class provides high performance set operations. A set is a collection that contains no duplicate elements, and whose elements are in no particular order. Just add items to it and then use CopyTo. Update: HashSet<T> is in .Net 3.5 Maybe you can use Dictionary<TKey, TValue>. Setting a duplicate key to a dictionary will not raise an exception. A: You might want to take a look at Iesi.Collections and Extended Generic Iesi.Collections (because the first edition was made in 1.1 when there were no generics yet). Extended Iesi has an ISet class which acts exactly as a HashSet: it enforces unique members and does not allow duplicates. The nifty thing about Iesi is that it has set operators instead of methods for merging collections, so you have the choice between a union (|), intersection (&), XOR (^) and so forth. A: Can you pass the Collection into your method by reference so that you can just add items to it, that way you don't have to return anything. This is what it might look like if you did it in c#. class Program { static void Main(string[] args) { Collection<string> myitems = new Collection<string>(); myMthod(ref myitems); Console.WriteLine(myitems.Count.ToString()); Console.ReadLine(); } static void myMthod(ref Collection<string> myitems) { myitems.Add("string"); if(myitems.Count <5) myMthod(ref myitems); } } As stated by @Zooba, passing by ref is not necessary here; if you pass by value it will also work. A: As far as merging goes: I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a Collection, it can be pretty much any suitable data type. Your function assembles a return value, right? You're splitting the supplied list in half, invoking self again (twice) and then merging those results. During the merge step, why not just check before you add each string to the result? If it's already there, skip it. Assuming you're working with sorted lists of course.
Merging two Collections
I got a Function that returns a Collection<string>, and that calls itself recursively to eventually return one big Collection<string>. Now, I just wonder what the best approach is to merge the lists? Collection.CopyTo() only copies to string[], and using a foreach() loop feels inefficient. However, since I also want to filter out duplicates, I feel like I'll end up with a foreach that calls Contains() on the Collection. I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a Collection, it can be pretty much any suitable data type. Only exclusion, I'm bound to Visual Studio 2005 and .net 3.0, so no LINQ. Edit: To clarify: The Function takes a user out of Active Directory, looks at the Direct Reports of the user, and then recursively looks at the direct reports of every user. So the end result is a List of all users that are in the "command chain" of a given user. Since this is executed quite often and at the moment takes 20 seconds for some users, I'm looking for ways to improve it. Caching the result for 24 hours is also on my list btw., but I want to see how to improve it before applying caching.
[ "If you're using List<> you can use .AddRange to add one list to the other list.\nOr you can use yield return to combine lists on the fly like this:\npublic IEnumerable<string> Combine(IEnumerable<string> col1, IEnumerable<string> col2)\n{\n foreach(string item in col1)\n yield return item;\n\n foreach(string item in col2)\n yield return item;\n}\n\n", "I think HashSet<T> is a great help.\n\nThe HashSet<T> class provides\n high performance set operations. A set\n is a collection that contains no\n duplicate elements, and whose elements\n are in no particular order.\n\nJust add items to it and then use CopyTo.\n\nUpdate: HashSet<T> is in .Net 3.5\nMaybe you can use Dictionary<TKey, TValue>. Setting a duplicate key to a dictionary will not raise an exception.\n", "You might want to take a look at Iesi.Collections and Extended Generic Iesi.Collections (because the first edition was made in 1.1 when there were no generics yet).\nExtended Iesi has an ISet class which acts exactly as a HashSet: it enforces unique members and does not allow duplicates.\nThe nifty thing about Iesi is that it has set operators instead of methods for merging collections, so you have the choice between a union (|), intersection (&), XOR (^) and so forth.\n", "Can you pass the Collection into you method by refernce so that you can just add items to it, that way you dont have to return anything. This is what it might look like if you did it in c#.\nclass Program\n{\n static void Main(string[] args)\n {\n Collection<string> myitems = new Collection<string>();\n myMthod(ref myitems);\n Console.WriteLine(myitems.Count.ToString());\n Console.ReadLine();\n }\n\n static void myMthod(ref Collection<string> myitems)\n {\n myitems.Add(\"string\");\n if(myitems.Count <5)\n myMthod(ref myitems);\n }\n}\n\nAs Stated by @Zooba Passing by ref is not necessary here, if you passing by value it will also work.\n", "As far as merging goes:\n\nI wonder, is there a more efficient\n way to have a recursive function that\n returns a list of strings without\n duplicates? I don't have to use a\n Collection, it can be pretty much any\n suitable data type.\n\nYour function assembles a return value, right? You're splitting the supplied list in half, invoking self again (twice) and then merging those results. \nDuring the merge step, why not just check before you add each string to the result? If it's already there, skip it.\nAssuming you're working with sorted lists of course.\n" ]
[ 18, 1, 1, 1, 0 ]
[]
[]
[ "c#", "collections" ]
stackoverflow_0000056078_c#_collections.txt
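To complement the collection-merging thread above: the asker is on .NET 3.0, so HashSet<T> (3.5) and LINQ are out, but Dictionary keys work as a poor man's set. The sketch below shows the visited-set idea applied to the Active Directory walk; GetCommandChain and GetDirectReports are hypothetical names and the actual DirectorySearcher query is stubbed out:

    using System.Collections.Generic;

    public class ChainWalker
    {
        // Dictionary keys double as a set on .NET 3.0, where HashSet<T> is unavailable.
        public ICollection<string> GetCommandChain(string userDn)
        {
            Dictionary<string, bool> seen = new Dictionary<string, bool>();
            Collect(userDn, seen);
            return seen.Keys;
        }

        private void Collect(string userDn, Dictionary<string, bool> seen)
        {
            foreach (string report in GetDirectReports(userDn)) // hypothetical AD lookup
            {
                if (seen.ContainsKey(report))
                    continue;          // already visited; also guards against cycles
                seen[report] = true;
                Collect(report, seen);
            }
        }

        private IEnumerable<string> GetDirectReports(string userDn)
        {
            // Placeholder for the DirectorySearcher query against the
            // "directReports" attribute; not shown here.
            yield break;
        }
    }

Tracking visited entries in one shared set removes both the per-merge Contains() scans over a Collection and any risk of infinite recursion if the reporting chain contains a cycle.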
Q: How do I perform a simple one-statement SQL search across tables? Suppose that two tables exist: users and groups. How does one provide "simple search" in which a user enters text and results contain both users and groups whose names contain the text? The result of the search must distinguish between the two types. A: The trick is to combine a UNION with a literal string to determine the type of 'object' returned. In most (?) cases, UNION ALL will be more efficient, and should be used unless duplicates are required in the sub-queries. The following pattern should suffice: SELECT "group" type, name FROM groups WHERE name LIKE "%$text%" UNION ALL SELECT "user" type, name FROM users WHERE name LIKE "%$text%" NOTE: I've added the answer myself, because I came across this problem yesterday, couldn't find a good solution, and used this method. If someone has a better approach, please feel free to add it. A: If you use "UNION ALL" then the db doesn't try to remove duplicates - you won't have duplicates between the two queries anyway (since the first column is different), so UNION ALL will be faster. (I assume that you don't have duplicates inside each query that you want to remove) A: Using LIKE will cause a number of problems as it will require a table scan every single time when the LIKE comparator starts with a %. This forces SQL to check every single row and work its way, byte by byte, through the string you are using for comparison. While this may be fine when you start, it quickly causes scaling issues. A better way to handle this is using Full Text Search. While this would be a more complex option, it will provide you with better results for very large databases. Then you can use a functioning version of the example Bobby Jack gave you to UNION ALL your two result sets together and display the results. A: I would suggest another addition SELECT "group" type, name FROM groups WHERE UPPER(name) LIKE UPPER("%$text%") UNION ALL SELECT "user" type, name FROM users WHERE UPPER(name) LIKE UPPER("%$text%") You could convert $text to upper case first or just do it in the query. This way you get a case insensitive search.
How do I perform a simple one-statement SQL search across tables?
Suppose that two tables exist: users and groups. How does one provide "simple search" in which a user enters text and results contain both users and groups whose names contain the text? The result of the search must distinguish between the two types.
[ "The trick is to combine a UNION with a literal string to determine the type of 'object' returned. In most (?) cases, UNION ALL will be more efficient, and should be used unless duplicates are required in the sub-queries. The following pattern should suffice:\n SELECT \"group\" type, name\n FROM groups\n WHERE name LIKE \"%$text%\"\nUNION ALL\n SELECT \"user\" type, name\n FROM users\n WHERE name LIKE \"%$text%\"\n\nNOTE: I've added the answer myself, because I came across this problem yesterday, couldn't find a good solution, and used this method. If someone has a better approach, please feel free to add it.\n", "If you use \"UNION ALL\" then the db doesn't try to remove duplicates - you won't have duplicates between the two queries anyway (since the first column is different), so UNION ALL will be faster.\n(I assume that you don't have duplicates inside each query that you want to remove)\n", "Using LIKE will cause a number of problems as it will require a table scan every single time when the LIKE comparator starts with a %. This forces SQL to check every single row and work it's way, byte by byte, through the string you are using for comparison. While this may be fine when you start, it quickly causes scaling issues.\nA better way to handle this is using Full Text Search. While this would be a more complex option, it will provide you with better results for very large databases. Then you can use a functioning version of the example Bobby Jack gave you to UNION ALL your two result sets together and display the results.\n", "I would suggest another addition\n SELECT \"group\" type, name\n FROM groups\n WHERE UPPER(name) LIKE UPPER(\"%$text%\")\nUNION ALL\n SELECT \"user\" type, name\n FROM users\n WHERE UPPER(name) LIKE UPPER(\"%$text%\")\n\nYou could convert $text to upper case first or do just do it in the query. This way you get a case insensitive search. \n" ]
[ 3, 1, 1, 1 ]
[]
[]
[ "search", "sql" ]
stackoverflow_0000056334_search_sql.txt
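A hedged sketch of the Full Text Search route the third answer above recommends, assuming full-text indexes have already been created on users.name and groups.name (catalog setup not shown). Note that @text must be a valid CONTAINS search condition, e.g. '"smith*"' for prefix matching:

    SELECT 'group' AS type, name
      FROM groups
     WHERE CONTAINS(name, @text)
    UNION ALL
    SELECT 'user' AS type, name
      FROM users
     WHERE CONTAINS(name, @text)

Unlike LIKE '%...%', this uses the full-text index rather than scanning every row, which is what makes it scale on large tables.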
Q: DELETE Statement hangs on SQL Server for no apparent reason Edit: Solved, there was a trigger with a loop on the table (read my own answer further below). We have a simple delete statement that looks like this: DELETE FROM tablename WHERE pk = 12345 This just hangs, no timeout, no nothing. We've looked at the execution plan, and it consists of many lookups on related tables to ensure no foreign keys would trip up the delete, but we've verified that none of those other tables have any rows referring to that particular row. There is no other user connected to the database at this time. We've run DBCC CHECKDB against it, and it reports 0 errors. Looking at the results of sp_who and sp_lock while the query is hanging, I notice that my spid has plenty of PAG and KEY locks, as well as the occasional TAB lock. The table has 1.777.621 rows, and yes, pk is the primary key, so it's a single row delete based on index. There is no table scan in the execution plan, though I notice that it contains something that says Table Spool (Eager Spool), but says Estimated number of rows 1. Can this actually be a table-scan in disguise? It only says it looks at the primary key column. Tried DBCC DBREINDEX and UPDATE STATISTICS on the table. Both completed within reasonable time. There is unfortunately a high number of indexes on this particular table. It is the core table in our system, with plenty of columns, and references, both outgoing and incoming. The exact number is 48 indexes + the primary key clustered index. What else should we look at? Note also that this table did not have this problem before, this problem occurred suddenly today. We also have many databases with the same table setup (copies of customer databases), and they behave as expected, it's just this one that is problematic. A: One piece of information missing is the number of indices on the table you are deleting the data from. As SQL Server uses the Primary Key as a pointer in every index, any change to the primary index requires updating every index. Though, unless we are talking a high number, this shouldn't be an issue. I am guessing, from your description, that this is a primary table in the database, referenced by many other tables in FK relationships. This would account for the large number of locks as it checks the rest of the tables for references. And, if you have cascading deletes turned on, this could lead to a delete in table a requiring checks several tables deep. A: Try recreating the index on that table, and try regenerating the statistics. DBCC DBREINDEX UPDATE STATISTICS A: Ok, this is embarrassing. A colleague had added a trigger to that table a while ago, and the trigger had a bug. Although he had fixed the bug, the trigger had never been recreated for that table. So the server was actually doing nothing, it just did it a huge number of times. Oh well... Thanks for the eyeballs to everyone who read this and pondered the problem. I'm going to accept Josef's answer, as his was the closest, and indirectly touched upon the issue with the cascading deletes.
DELETE Statement hangs on SQL Server for no apparent reason
Edit: Solved, there was a trigger with a loop on the table (read my own answer further below). We have a simple delete statement that looks like this: DELETE FROM tablename WHERE pk = 12345 This just hangs, no timeout, no nothing. We've looked at the execution plan, and it consists of many lookups on related tables to ensure no foreign keys would trip up the delete, but we've verified that none of those other tables have any rows referring to that particular row. There is no other user connected to the database at this time. We've run DBCC CHECKDB against it, and it reports 0 errors. Looking at the results of sp_who and sp_lock while the query is hanging, I notice that my spid has plenty of PAG and KEY locks, as well as the occasional TAB lock. The table has 1.777.621 rows, and yes, pk is the primary key, so it's a single row delete based on index. There is no table scan in the execution plan, though I notice that it contains something that says Table Spool (Eager Spool), but says Estimated number of rows 1. Can this actually be a table-scan in disguise? It only says it looks at the primary key column. Tried DBCC DBREINDEX and UPDATE STATISTICS on the table. Both completed within reasonable time. There is unfortunately a high number of indexes on this particular table. It is the core table in our system, with plenty of columns, and references, both outgoing and incoming. The exact number is 48 indexes + the primary key clustered index. What else should we look at? Note also that this table did not have this problem before; this problem occurred suddenly today. We also have many databases with the same table setup (copies of customer databases), and they behave as expected, it's just this one that is problematic.
[ "One piece of information missing is the number of indices on the table you are deleting the data from. As SQL Server uses the Primary Key as a pointer in every index, any change to the primary index requires updating every index. Though, unless we are talking a high number, this shouldn't be an issue.\nI am guessing, from your description, that this is a primary table in the database, referenced by many other tables in FK relationships. This would account for the large number of locks as it checks the rest of the tables for references. And, if you have cascading deletes turned on, this could lead to a delete in table a requiring checks several tables deep.\n", "Try recreating the index on that table, and try regenerating the statistics.\nDBCC REINDEX\nUPDATE STATISTICS\n", "Ok, this is embarrasing.\nA collegue had added a trigger to that table a while ago, and the trigger had a bug. Although he had fixed the bug, the trigger had never been recreated for that table.\nSo the server was actually doing nothing, it just did it a huge number of times.\nOh well...\nThanks for the eyeballs to everyone who read this and pondered the problem.\nI'm going to accept Josef's answer, as his was the closest, and indirectly thouched upon the issue with the cascading deletes.\n" ]
[ 5, 3, 1 ]
[]
[]
[ "sql", "sql_server", "sql_server_2005" ]
stackoverflow_0000056070_sql_sql_server_sql_server_2005.txt
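Since the culprit here turned out to be a forgotten trigger, a handy diagnostic is simply listing the triggers attached to the table. A hedged C# sketch (assumes SQL Server 2005 or later, where sys.triggers exists; connectionString is a placeholder):

```csharp
using System;
using System.Data.SqlClient;

class TriggerCheck
{
    // Lists every trigger attached to the given table, with its enabled state.
    static void ListTriggers(string connectionString, string tableName)
    {
        const string sql =
            "SELECT name, is_disabled FROM sys.triggers " +
            "WHERE parent_id = OBJECT_ID(@table)";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@table", tableName);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} (disabled: {1})",
                        reader.GetString(0), reader.GetBoolean(1));
            }
        }
    }
}
```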
Q: Are there any resources for becoming a Cygwin "power user"? I've got it configured, but I want more from it...maybe Cygwin isn't the right tool, but I like how it provides a *nix-like environment within Windows. A: If you've already read the Cygwin User Guide, take a look at Ten Steps To Higher Cygwin Productivity. Also, if you're using a shell such as bash in Cygwin, and you're familiar with Emacs, consider using Eshell (the Emacs shell) instead. A: I've found Cygwin to be very useful in the past. FWIW, lately however I've shied away from it in favor of the following: XAMPP Unixutils I like these tools even better. A: I'm quite interested in this question myself. I've used the Cygwin Setup guide to get set up, but it doesn't get you all the way. One thing that I learned from it, though, is that it recommends leaving the setup.exe in the directory with Cygwin so that you can quickly add packages, since apt-get apparently doesn't work that well in Cygwin. The article also talks about cyg-get as an alternative.
Are there any resources for becoming a Cygwin "power user"?
I've got it configured, but I want more from it...maybe Cygwin isn't the right tool, but I like how it provides a *nix-like environment within Windows.
[ "If you've already read the Cygwin User Guide, take a look at Ten Steps To Higher Cygwin Productivity.\nAlso, if you're using a shell such as bash in Cygwin, and you're familiar with Emacs, consider using Eshell (the Emacs shell) instead.\n", "I've found Cygwin to be very useful in the past. FWIW, lately however I've shied away from it in favor of the following:\n\nXAMPP\nUnixutils\n\nI like these tools even better.\n", "I'm quite interested in this question myself. I've used the Cygwin Setup guide to get set up, but it doesn't get you all the way. One thing that I learned from it, though, is that it recommends leaving the setup.exe in the directory with Cygwin so that you can quickly add packages, since apt-get apparently doesn't work that well in Cygwin. The article also talks about cyg-get as an alternative.\n" ]
[ 12, 5, 2 ]
[]
[]
[ "cygwin", "unix", "windows" ]
stackoverflow_0000056427_cygwin_unix_windows.txt
Q: Remotely starting and stopping a service on a W2008 server I'm having an amazing amount of trouble starting and stopping a service on my remote server from my msbuild script. SC.EXE and the ServiceController MSBuild task don't provide switches to allow a username/password so they won't authenticate, so I'm using RemoteService.exe from www.intelliadmin.com -Authenticating with \xx.xx.xx.xxx -Authentication complete -Stopping service -Error: Access Denied The user account details I'm specifying are for a local admin on the server, so what's up?! I'm tearing my hair out! Update: OK here's a bit more background. I have an XP machine in the office running the CI server. The build script connects a VPN to the datacentre, where I have a Server 2008 machine. Neither of them are on a domain. A: Often, you can connect to the IPC$ "pseudo-share" on the machine to help establish the credentials before running commands like SC.EXE. Use a command like: C:\> net use \\xx.xx.xx.xx\ipc$ * /user:username The * tells it to prompt you for the password. A: I've disabled UAC and now it seems to work. A: If I understand your scenario correctly, it could help running the script with a domain account which is administrator on your remote machine (or better: has the right to start and stop the service). A: Quick followup question - can you use the "runas" command from an MSBuild script? If so, wouldn't you be able to simply impersonate another user with runas /user:dsfsdf /password:dfdf sc.exe ... (or similar - I haven't researched the command-line options)?
Remotely starting and stopping a service on a W2008 server
I'm having an amazing amount of trouble starting and stopping a service on my remote server from my msbuild script. SC.EXE and the ServiceController MSBuild task don't provide switches to allow a username/password so they won't authenticate, so I'm using RemoteService.exe from www.intelliadmin.com -Authenticating with \xx.xx.xx.xxx -Authentication complete -Stopping service -Error: Access Denied The user account details I'm specifying are for a local admin on the server, so what's up?! I'm tearing my hair out! Update: OK here's a bit more background. I have an XP machine in the office running the CI server. The build script connects a VPN to the datacentre, where I have a Server 2008 machine. Neither of them are on a domain.
[ "Often, you can connect to the IPC$ \"pseudo-share\" on the machine to help establish the credentials before running commands like SC.EXE. Use a command like:\nC:\\> net use \\\\xx.xx.xx.xx\\ipc$ * /user:username\n\nThe * tells it to prompt you for the password.\n", "I've disabled UAC and now it seems to work.\n", "If I understand your scenario correctly, it could help running the script with a domain account which is administrator on your remote machine (or better: has the right to start and stop the service).\n", "Quick followup question - can you use the \"runas\" command from an MSBuild script? If so, wouldn't you be able to simply impersonate another user with runas /user:dsfsdf /password:dfdf sc.exe ... (or similiar - I haven't researched the command-line options)?\n" ]
[ 6, 1, 0, 0 ]
[]
[]
[ "service", "windows_server_2008" ]
stackoverflow_0000025269_service_windows_server_2008.txt
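For completeness, since SC.EXE and the ServiceController task cannot take credentials: WMI can stop a remote service with an explicit username and password. A rough C# sketch follows; the server, account, and service names are placeholders, and on Server 2008 UAC can still deny a non-domain admin, as the accepted answer found.

```csharp
using System;
using System.Management;

class RemoteServiceControl
{
    // Stops a service on a remote machine using explicit credentials over WMI.
    static void StopRemoteService(string server, string user, string password, string serviceName)
    {
        ConnectionOptions options = new ConnectionOptions();
        options.Username = user;     // e.g. "remotemachine\\administrator" for a local account
        options.Password = password;

        ManagementScope scope = new ManagementScope(@"\\" + server + @"\root\cimv2", options);
        scope.Connect();

        using (ManagementObject service = new ManagementObject(
            scope, new ManagementPath("Win32_Service.Name='" + serviceName + "'"), null))
        {
            // StopService returns 0 on success; nonzero codes are WMI error values.
            // Win32_Service also exposes StartService for the reverse operation.
            uint result = Convert.ToUInt32(service.InvokeMethod("StopService", null));
            Console.WriteLine("StopService returned {0}", result);
        }
    }
}
```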
Q: Best Practice: Collaborative Environment, Bin Directory, SVN What are the best practices for checking in BIN directories in a collaborative development environment using SVN? Should project level references be excluded from checkin? Is it easier to just add all bin directories? I develop a lot of DotNetNuke sites and it seems that in a multi-developer environment, it's always a huge task to get the environment setup correctly. The ultimate goal (of course) is to have a new developer checkout the trunk from SVN, restore the DNN database and have it all just 'work'... A: Any assemblies that are expected to be in the GAC should stay in the GAC. This includes System.web.dll or any other 3rd party dll that you'll deploy to the GAC in production. This means a new developer would have to install these assemblies. All other 3rd party assemblies should be referenced through a relative path. My typical structure is: -Project --Project.sln --References ---StructureMap.dll ---NUnit.dll ---System.Web.Mvc.dll --Project.Web ---Project.Web.Proj ---Project.Web.Proj files --Project ---Project.Proj ---Project.Proj files Project.Web and Project reference the assemblies in the root/References folder relatively. These .dlls are checked into subversion. Aside from that, */bin */bin/* obj should be in your global ignore path. With this setup, all references to assemblies are either through the GAC (so should work across all computers), or relative to each project within your solution. A: Is this a .Net specific question? Generally the best practice is to not check in anything which is built automatically from files that are already in SCM. All of that is ideally created as part of your automatic build process. If the bin directory you're referring to contains third-party binaries, rather than a build of your project, ignore (downvote?) this advice. A: Tree Surgeon is a great tool which creates an empty .NET development tree. It has been tweaked over years of use and implements lots of best practices. A: Maven helps quite a lot with this problem when I'm coding java. We commit the pom.xml to the SCM and the maven repository contains all our dependencies. For me that seems like a nice way to do it. A: We follow the practice of using a vendor directory which contains all vendor specific headers and binaries. The goal is that anybody should be able to build the product just by checking it out and running some top level build script.
Best Practice: Collaborative Environment, Bin Directory, SVN
What are the best practices for checking in BIN directories in a collaborative development environment using SVN? Should project level references be excluded from checkin? Is it easier to just add all bin directories? I develop a lot of DotNetNuke sites and it seems that in a multi-developer environment, it's always a huge task to get the environment setup correctly. The ultimate goal (of course) is to have a new developer checkout the trunk from SVN, restore the DNN database and have it all just 'work'...
[ "Any assemblies that are expected to be in the GAC should stay in the GAC. This includes System.web.dll or any other 3rd party dll that you'll deploy to the GAC in production. This means a new developer would have to install these assemblies.\nAll other 3rd party assemblies should be references through a relative path. My typical structure is:\n-Project\n--Project.sln\n--References\n---StructureMap.dll\n---NUnit.dll\n---System.Web.Mvc.dll\n--Project.Web\n---Project.Web.Proj\n---Project.Web.Proj files\n--Project\n---Project.Proj\n---Project.Proj files\n\nProject.Web and Project reference the assemblies in the root/References folder relatively. These .dlls are checked into subversion.\nAside from that, */bin */bin/* obj should be in your global ignore path.\nWith this setup, all references to assemblies are either through the GAC (so should work across all computers), or relative to each project within your solution.\n", "Is this a .Net specific question?\nGenerally the best practice is to not check in anything which is built automatically from files that are already in SCM. All of that is ideally created as part of your automatic build process.\nIf the bin directory you're referring to contains third-party binaries, rather than a build of your project, ignore (downvote?) this advice.\n", "Tree Surgeon is a great tool which creates an empty .NET development tree. It has been tweaked over years of use and implements lots of best practices.\n", "Maven helps quite a lot with this problem when I'm coding java. We commit the pom.xml to the scs and the maven repository contains all our dependencies.\nFor me that seems like a nice way to do it.\n", "We follow the practice of using a vendor directory which contains all vendor specific headers and binaries. The goal is that anybody should be able to build the product just by checking it out and running some top level build script. \n" ]
[ 19, 4, 4, 2, 1 ]
[]
[]
[ "collaboration", "svn" ]
stackoverflow_0000000265_collaboration_svn.txt
Q: Why won't my 2008 Team Build trigger on developer check-ins despite CI being enabled I have a Team Foundation Server 2008 installation and a separate machine with the Team Build service. I can create team builds and trigger them manually in Visual Studio or via the command line (where they complete successfully). However, check-ins to the source tree do not cause a build to trigger despite the option to build every check-in being ticked on the build definition. Update: To be clear, I had a fully working build definition with the CI option enabled. The source tree is configured in a pretty straightforward manner with code either under a Main folder or under a Branch\branchName folder. Each branch of code (including main) has a standard Team Build definition relating to the solution file contained within. The only thing that is slightly changed from default settings is the build server working folder; i.e. for main this is Server:"$\main" Local:"c:\build\main" due to path length. The only thing I've been able to guess at (possible red herring) is that there might be some oddity with the developer workspaces. Currently each developer maps Server:"$\" to local:"c:\tfs\" so that there is only one workspace for all branches. This is mainly to avoid re-mapping problems that some of the developers had previously gotten themselves into. But I can't see how this would affect CI. UPDATE: I found the answer indirectly; please read below A: Ok I have found the answer myself after several dead ends. In the end I fixed this unintentionally while fixing another issue. Basically we had just turned on the automatic execution of unit tests for our builds. The test would run successfully but then immediately the build would bomb out with a message saying it was unable to report to the build drop folder. What was happening was that while the Build service runs under one account and has a set of rights; some of the functionality is actually driven through the TFSService account. After wading through a heap of permissions I had my tests being reported. Then I noticed that builds had started to trigger on check-ins; I can't tell you exactly which permission fixed this but hopefully this answer will at least set people down the right path. One other note: a few of the builds started failing due to conflicting workspace mappings - this was a separate issue that I resolved by deleting some obsolete workspaces using the Attrice Sidekicks for Team Foundation tool. Hope this helps somebody else. A: Select your team project from team explorer, then right click on the Builds folder. Select a new build definition and then select the trigger tab. Move the radio button to "Build each check-in (more builds)" More info can be found here MSDN How to: Create a Build Definition A: Are there any errors in the log on the TFS application server? Anything that indicates that it tried to fire but failed?
Why won't my 2008 Team Build trigger on developer check-ins despite CI being enabled
I have a Team Foundation Server 2008 installation and a separate machine with the Team Build service. I can create team builds and trigger them manually in Visual Studio or via the command line (where they complete successfully). However, check-ins to the source tree do not cause a build to trigger despite the option to build every check-in being ticked on the build definition. Update: To be clear, I had a fully working build definition with the CI option enabled. The source tree is configured in a pretty straightforward manner with code either under a Main folder or under a Branch\branchName folder. Each branch of code (including main) has a standard Team Build definition relating to the solution file contained within. The only thing that is slightly changed from default settings is the build server working folder; i.e. for main this is Server:"$\main" Local:"c:\build\main" due to path length. The only thing I've been able to guess at (possible red herring) is that there might be some oddity with the developer workspaces. Currently each developer maps Server:"$\" to local:"c:\tfs\" so that there is only one workspace for all branches. This is mainly to avoid re-mapping problems that some of the developers had previously gotten themselves into. But I can't see how this would affect CI. UPDATE: I found the answer indirectly; please read below
[ "Ok I have found the answer myself after several dead ends. In the end I fixed this unintentionally while fixing another issue. Basically we had just turned on the automatic execution of unit tests for our builds. The test would run sucessfully but then immediately the build would bomb out with a message saying it was unable to report to the build drop folder. \nWhat was happening was that while the Build service runs under one account and has a set of rights; some of the functionality is actually driven through the TFSService account. fter wading a heap of permissions I had my tests being reported. Then I noticed that builds had started to trigger on check-ins; I can't tell you exactly which permission fixed this but hopefully this answer will at least set people down the right path.\nOne other note a few of the builds started failing due to conflicting workspace mappings - this was a separate issue that I resolved by deleting some obsolete workspaces using the Attrice Sidekicks for Team Foundation tool.\nHope this helps somebody else.\n", "Select your team project from team explorer, then right click on the Builds folder. Select a new build definition and then select the trigger tab. Move the radio button to \"Build each check-in (more builds)\"\nMore info can be found here\nMSDN How to: Create a Build Definition\n", "Are there any errors in the log on the TFS application server? Anything that indicates that it tried to fire but failed?\n" ]
[ 4, 0, 0 ]
[]
[]
[ "build_automation", "msbuild", "tfs" ]
stackoverflow_0000043815_build_automation_msbuild_tfs.txt
Q: how to represent an empty field in yaml I am working with fixtures on rails and I want one of the fixture fields to be blank. Example: two: name: test path: - I want this blank but not to act as a group heading. test: 4 But, I do not know how to leave path: blank without it acting as a group title. Does anybody know how to do that? A: YAML files are based on indentation. Once you actually have correct indentation it will read everything at the same level as siblings. two: name: test path: test: 4 A: Google says the following should work: path: ""
how to represent an empty field in yaml
I am working with fixtures on rails and I want one of the fixture fields to be blank. Example: two: name: test path: - I want this blank but not to act as a group heading. test: 4 But, I do not know how to leave path: blank without it acting as a group title. Does anybody know how to do that?
[ "YAML files are based on indentation. Once you actually have correct indentation it will read everything at the same level as siblings.\n\ntwo:\n name: test\n path: \n test: 4\n\n", "Google says the following should work:\npath: \\\"\\\"\n\n" ]
[ 7, 1 ]
[]
[]
[ "ruby_on_rails", "yaml" ]
stackoverflow_0000056037_ruby_on_rails_yaml.txt
Q: CakePHP ACL Database Setup: ARO / ACO structure? I'm struggling to implement ACL in CakePHP. After reading the documentation in the cake manual as well as several other tutorials, blog posts etc, I found Aran Johnson's excellent tutorial which has helped fill in many of the gaps. His examples seem to conflict with others I've seen though in a few places - specifically in the ARO tree structure he uses. In his examples his user groups are set up as a cascading tree, with the most general user type being at the top of the tree, and its children branching off for each more restricted access type. Elsewhere I've usually seen each user type as a child of the same generic user type. How do you set up your AROs and ACOs in CakePHP? Any and all tips appreciated! A: CakePHP's built-in ACL system is really powerful, but poorly documented in terms of actual implementation details. A system that we've used with some success in a number of CakePHP-based projects is as follows. It's a modification of some group-level access systems that have been documented elsewhere. Our system's aims are to have a simple system where users are authorised on a group-level, but they can have specific additional rights on items that were created by them, or on a per-user basis. We wanted to avoid having to create a specific entry for each user (or, more specifically for each ARO) in the aros_acos table. We have a Users table, and a Roles table. Users user_id, user_name, role_id Roles id, role_name Create the ARO tree for each role (we usually have 4 roles - Unauthorised Guest (id 1), Authorised User (id 2), Site Moderator (id 3) and Administrator (id 4)) : cake acl create aro / Role.1 cake acl create aro 1 Role.2 ... etc ... After this, you have to use SQL or phpMyAdmin or similar to add aliases for all of these, as the cake command line tool doesn't do it. We use 'Role-{id}' and 'User-{id}' for all of ours. We then create a ROOT ACO - cake acl create aco / 'ROOT' and then create ACOs for all the controllers under this ROOT one: cake acl create aco 'ROOT' 'MyController' ... etc ... So far so normal. We add an additional field in the aros_acos table called _editown which we can use as an additional action in the ACL component's actionMap. CREATE TABLE IF NOT EXISTS `aros_acos` ( `id` int(11) NOT NULL auto_increment, `aro_id` int(11) default NULL, `aco_id` int(11) default NULL, `_create` int(11) NOT NULL default '0', `_read` int(11) NOT NULL default '0', `_update` int(11) NOT NULL default '0', `_delete` int(11) NOT NULL default '0', `_editown` int(11) NOT NULL default '0', PRIMARY KEY (`id`), KEY `acl` (`aro_id`,`aco_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; We can then setup the Auth component to use the 'crud' method, which validates the requested controller/action against an AclComponent::check(). In the app_controller we have something along the lines of: private function setupAuth() { if(isset($this->Auth)) { .... $this->Auth->authorize = 'crud'; $this->Auth->actionMap = array( 'index' => 'read', 'add' => 'create', 'edit' => 'update', 'editMine' => 'editown', 'view' => 'read' ... etc ... ); ... etc ... } } Again, this is fairly standard CakePHP stuff. We then have a checkAccess method in the AppController that adds in the group-level stuff to check whether to check a group ARO or a user ARO for access: private function checkAccess() { if(!$user = $this->Auth->user()) { $role_alias = 'Role-1'; $user_alias = null; } else { $role_alias = 'Role-' . $user['User']['role_id']; $user_alias = 'User-' . $user['User']['id']; } // do we have an aro for this user? if($user_alias && ($user_aro = $this->User->Aro->findByAlias($user_alias))) { $aro_alias = $user_alias; } else { $aro_alias = $role_alias; } if ('editown' == $this->Auth->actionMap[$this->action]) { if($this->Acl->check($aro_alias, $this->name, 'editown') and $this->isMine()) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } else { // check this user-level aro for access if($this->Acl->check($aro_alias, $this->name, $this->Auth->actionMap[$this->action])) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } } The setupAuth() and checkAccess() methods are called in the AppController's beforeFilter() callback. There's an isMine method in the AppController too (see below) that just checks that the user_id of the requested item is the same as the currently authenticated user. I've left this out for clarity. That's really all there is to it. You can then allow / deny particular groups access to specific acos - cake acl grant 'Role-2' 'MyController' 'read' cake acl grant 'Role-2' 'MyController' 'editown' cake acl deny 'Role-2' 'MyController' 'update' cake acl deny 'Role-2' 'MyController' 'delete' I'm sure you get the picture. Anyway, this answer's way longer than I intended it to be, and it probably makes next to no sense, but I hope it's some help to you ... -- edit -- As requested, here's an edited (purely for clarity - there's a lot of stuff in our boilerplate code that's meaningless here) isMine() method that we have in our AppController. I've removed a lot of error checking stuff too, but this is the essence of it: function isMine($model=null, $id=null, $usermodel='User', $foreignkey='user_id') { if(empty($model)) { // default model is first item in $this->uses array $model = $this->uses[0]; } if(empty($id)) { if(!empty($this->passedArgs['id'])) { $id = $this->passedArgs['id']; } elseif(!empty($this->passedArgs[0])) { $id = $this->passedArgs[0]; } } if(is_array($id)) { foreach($id as $i) { if(!$this->_isMine($model, $i, $usermodel, $foreignkey)) { return false; } } return true; } return $this->_isMine($model, $id, $usermodel, $foreignkey); } function _isMine($model, $id, $usermodel='User', $foreignkey='user_id') { $user = Configure::read('curr.loggedinuser'); // this is set in the UsersController on successful login if(isset($this->$model)) { $model = $this->$model; } else { $model = ClassRegistry::init($model); } //read model if(!($record = $model->read(null, $id))) { return false; } //get foreign key if($usermodel == $model->alias) { if($record[$model->alias][$model->primaryKey] == $user['User']['id']) { return true; } } elseif($record[$model->alias][$foreignkey] == $user['User']['id']) { return true; } return false; }
CakePHP ACL Database Setup: ARO / ACO structure?
I'm struggling to implement ACL in CakePHP. After reading the documentation in the cake manual as well as several other tutorials, blog posts etc, I found Aran Johnson's excellent tutorial which has helped fill in many of the gaps. His examples seem to conflict with others I've seen though in a few places - specifically in the ARO tree structure he uses. In his examples his user groups are set up as a cascading tree, with the most general user type being at the top of the tree, and its children branching off for each more restricted access type. Elsewhere I've usually seen each user type as a child of the same generic user type. How do you set up your AROs and ACOs in CakePHP? Any and all tips appreciated!
[ "CakePHP's built-in ACL system is really powerful, but poorly documented in terms of actual implementation details. A system that we've used with some success in a number of CakePHP-based projects is as follows.\nIt's a modification of some group-level access systems that have been documented elsewhere. Our system's aims are to have a simple system where users are authorised on a group-level, but they can have specific additional rights on items that were created by them, or on a per-user basis. We wanted to avoid having to create a specific entry for each user (or, more specifically for each ARO) in the aros_acos table.\nWe have a Users table, and a Roles table.\nUsers\nuser_id, user_name, role_id\nRoles\nid, role_name\nCreate the ARO tree for each role (we usually have 4 roles - Unauthorised Guest (id 1), Authorised User (id 2), Site Moderator (id 3) and Administrator (id 4)) :\ncake acl create aro / Role.1\ncake acl create aro 1 Role.2 ... etc ...\nAfter this, you have to use SQL or phpMyAdmin or similar to add aliases for all of these, as the cake command line tool doesn't do it. We use 'Role-{id}' and 'User-{id}' for all of ours.\nWe then create a ROOT ACO - \ncake acl create aco / 'ROOT'\nand then create ACOs for all the controllers under this ROOT one:\ncake acl create aco 'ROOT' 'MyController' ... etc ...\nSo far so normal. We add an additional field in the aros_acos table called _editown which we can use as an additional action in the ACL component's actionMap.\nCREATE TABLE IF NOT EXISTS `aros_acos` (\n`id` int(11) NOT NULL auto_increment,\n`aro_id` int(11) default NULL,\n`aco_id` int(11) default NULL,\n`_create` int(11) NOT NULL default '0',\n`_read` int(11) NOT NULL default '0',\n`_update` int(11) NOT NULL default '0',\n`_delete` int(11) NOT NULL default '0',\n`_editown` int(11) NOT NULL default '0',\nPRIMARY KEY (`id`),\nKEY `acl` (`aro_id`,`aco_id`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n\nWe can then setup the Auth component to use the 'crud' method, which validates the requested controller/action against an AclComponent::check(). In the app_controller we have something along the lines of:\nprivate function setupAuth() {\n if(isset($this->Auth)) {\n ....\n $this->Auth->authorize = 'crud';\n $this->Auth->actionMap = array( 'index' => 'read',\n 'add' => 'create',\n 'edit' => 'update'\n 'editMine' => 'editown',\n 'view' => 'read'\n ... etc ...\n );\n ... etc ...\n }\n}\n\nAgain, this is fairly standard CakePHP stuff. We then have a checkAccess method in the AppController that adds in the group-level stuff to check whether to check a group ARO or a user ARO for access:\nprivate function checkAccess() {\n if(!$user = $this->Auth->user()) {\n $role_alias = 'Role-1';\n $user_alias = null;\n } else {\n $role_alias = 'Role-' . $user['User']['role_id'];\n $user_alias = 'User-' . 
$user['User']['id'];\n }\n\n // do we have an aro for this user?\n if($user_alias && ($user_aro = $this->User->Aro->findByAlias($user_alias))) {\n $aro_alias = $user_alias;\n } else {\n $aro_alias = $role_alias;\n }\n\n if ('editown' == $this->Auth->actionMap[$this->action]) {\n if($this->Acl->check($aro_alias, $this->name, 'editown') and $this->isMine()) {\n $this->Auth->allow();\n } else {\n $this->Auth->authorize = 'controller';\n $this->Auth->deny('*');\n }\n } else {\n // check this user-level aro for access\n if($this->Acl->check($aro_alias, $this->name, $this->Auth->actionMap[$this->action])) {\n $this->Auth->allow();\n } else {\n $this->Auth->authorize = 'controller';\n $this->Auth->deny('*');\n }\n }\n}\n\nThe setupAuth() and checkAccess() methods are called in the AppController's beforeFilter() callback. There's an isMine method in the AppControler too (see below) that just checks that the user_id of the requested item is the same as the currently authenticated user. I've left this out for clarity.\nThat's really all there is to it. You can then allow / deny particular groups access to specific acos - \ncake acl grant 'Role-2' 'MyController' 'read'\ncake acl grant 'Role-2' 'MyController' 'editown'\ncake acl deny 'Role-2' 'MyController' 'update'\ncake acl deny 'Role-2' 'MyController' 'delete'\nI'm sure you get the picture.\nAnyway, this answer's way longer than I intended it to be, and it probably makes next to no sense, but I hope it's some help to you ...\n-- edit --\nAs requested, here's an edited (purely for clarity - there's a lot of stuff in our boilerplate code that's meaningless here) isMine() method that we have in our AppController. I've removed a lot of error checking stuff too, but this is the essence of it:\nfunction isMine($model=null, $id=null, $usermodel='User', $foreignkey='user_id') {\n if(empty($model)) {\n // default model is first item in $this->uses array\n $model = $this->uses[0];\n }\n\n if(empty($id)) {\n if(!empty($this->passedArgs['id'])) {\n $id = $this->passedArgs['id'];\n } elseif(!empty($this->passedArgs[0])) {\n $id = $this->passedArgs[0];\n }\n }\n\n if(is_array($id)) {\n foreach($id as $i) {\n if(!$this->_isMine($model, $i, $usermodel, $foreignkey)) {\n return false;\n }\n }\n\n return true;\n }\n\n return $this->_isMine($model, $id, $usermodel, $foreignkey);\n}\n\n\nfunction _isMine($model, $id, $usermodel='User', $foreignkey='user_id') {\n $user = Configure::read('curr.loggedinuser'); // this is set in the UsersController on successful login\n\n if(isset($this->$model)) {\n $model = $this->$model;\n } else {\n $model = ClassRegistry::init($model);\n }\n\n //read model\n if(!($record = $model->read(null, $id))) {\n return false;\n }\n\n //get foreign key\n if($usermodel == $model->alias) {\n if($record[$model->alias][$model->primaryKey] == $user['User']['id']) {\n return true;\n }\n } elseif($record[$model->alias][$foreignkey] == $user['User']['id']) {\n return true;\n }\n\n return false;\n}\n\n" ]
[ 51 ]
[]
[]
[ "acl", "cakephp", "permissions", "php", "security_roles" ]
stackoverflow_0000054230_acl_cakephp_permissions_php_security_roles.txt
Q: Checking Inheritance with templates in C++ I have a class which is a wrapper class (serves as a common interface) around another class implementing the functionality required. So my code looks like this. template<typename ImplementationClass> class WrapperClass { // the code goes here }; Now, how do I make sure that ImplementationClass can be derived from a set of classes only, similar to java's generics <? extends BaseClass> syntax? A: It's verbose, but you can do it like this: #include <boost/utility/enable_if.hpp> #include <boost/type_traits/is_base_of.hpp> struct base {}; template <typename ImplementationClass, class Enable = void> class WrapperClass; template <typename ImplementationClass> class WrapperClass<ImplementationClass, typename boost::enable_if< boost::is_base_of<base,ImplementationClass> >::type> {}; struct derived : base {}; struct not_derived {}; int main() { WrapperClass<derived> x; // Compile error here: WrapperClass<not_derived> y; } This requires a compiler with good support for the standard (most recent compilers should be fine but old versions of Visual C++ won't be). For more information, see the Boost.Enable_If documentation. As Ferruccio said, a simpler but less powerful implementation: #include <boost/static_assert.hpp> #include <boost/type_traits/is_base_of.hpp> struct base {}; template <typename ImplementationClass> class WrapperClass { BOOST_STATIC_ASSERT(( boost::is_base_of<base, ImplementationClass>::value)); }; A: In the current state of things, there is no good way other than by comments or a third-party solution. Boost provides a concept check library for this, and I think gcc also has an implementation. Concepts are on the list of C++0x improvements, but I'm not sure if you can specify subtypes - they are more for "must support these operations" which is (roughly) equivalent. Edit: Wikipedia has this section about concepts in C++0x, which is significantly easier to read than draft proposals. A: See Stroustrup's own words on the subject. Basically a small class, that you instantiate somewhere, e.g. the templated class's constructor. template<class T, class B> struct Derived_from { static void constraints(T* p) { B* pb = p; } Derived_from() { void(*p)(T*) = constraints; } };
Checking Inheritance with templates in C++
I have a class which is a wrapper class (serves as a common interface) around another class implementing the functionality required. So my code looks like this. template<typename ImplementationClass> class WrapperClass { // the code goes here }; Now, how do I make sure that ImplementationClass can be derived from a set of classes only, similar to java's generics <? extends BaseClass> syntax?
[ "It's verbose, but you can do it like this:\n#include <boost/utility/enable_if.hpp>\n#include <boost/type_traits/is_base_of.hpp>\n\nstruct base {};\n\ntemplate <typename ImplementationClass, class Enable = void>\nclass WrapperClass;\n\ntemplate <typename ImplementationClass>\nclass WrapperClass<ImplementationClass,\n typename boost::enable_if<\n boost::is_base_of<base,ImplementationClass> >::type>\n{};\n\nstruct derived : base {};\nstruct not_derived {};\n\nint main() {\n WrapperClass<derived> x;\n\n // Compile error here:\n WrapperClass<not_derived> y;\n}\n\nThis requires a compiler with good support for the standard (most recent compilers should be fine but old versions of Visual C++ won't be). For more information, see the Boost.Enable_If documentation.\nAs Ferruccio said, a simpler but less powerful implementation:\n#include <boost/static_assert.hpp>\n#include <boost/type_traits/is_base_of.hpp>\n\nstruct base {};\n\ntemplate <typename ImplementationClass>\nclass WrapperClass\n{\n BOOST_STATIC_ASSERT((\n boost::is_base_of<base, ImplementationClass>::value));\n};\n\n", "In the current state of things, there is no good way other than by comments or a third-party solution. Boost provides a concept check library for this, and I think gcc also has an implementation. Concepts are on the list of C++0x improvements, but I'm not sure if you can specify subtypes - they are more for \"must support these operations\" which is (roughly) equivalent.\nEdit: Wikipedia has this section about concepts in C++0x, which is significantly easier to read than draft proposals.\n", "See Stoustrup's own words on the subject.\nBasically a small class, that you instantiate somewhere, e.g. the templated classes constructor.\ntemplate<class T, class B> struct Derived_from {\n static void constraints(T* p) { B* pb = p; }\n Derived_from() { void(*p)(T*) = constraints; }\n};\n\n" ]
[ 7, 2, 0 ]
[]
[]
[ "c++", "java", "templates" ]
stackoverflow_0000055440_c++_java_templates.txt
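As a cross-language aside (hedged, since the question itself is C++): the Java <? extends BaseClass> idiom the asker mentions maps directly onto a built-in C# generic constraint, which is essentially what the Boost machinery above emulates at compile time.

```csharp
class BaseClass { }
class Derived : BaseClass { }
class NotDerived { }

// The where-clause restricts the type argument at compile time.
class WrapperClass<TImplementation> where TImplementation : BaseClass
{
    // wrapper code goes here
}

class Demo
{
    static void Main()
    {
        WrapperClass<Derived> ok = new WrapperClass<Derived>();          // compiles
        // WrapperClass<NotDerived> bad = new WrapperClass<NotDerived>(); // compile error
    }
}
```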
Q: Create drop down list options from enum in a DataGridView I currently have a class and I'm trying to create an easy GUI to create a collection of this class. Most of the attributes of this class are strings. However, one of the attributes I want the user to be able to set is an Enum. Therefore, I would like the user interface to have a dropdownlist for this enum, to restrict the user from entering a value that is not valid. Currently, I am taking the initial list of objects, adding them to a DataTable and setting the DataSource of my DataGridView to the table. Works nicely, even creates a checkbox column for the one Boolean property. But, I don't know how to make the column for the enum into a dropdownlist. I am using C# and .NET 2.0. Also, I have tried assigning the DataSource of the DataGridView to the list of my objects, but when I do this, it doesn't help with the enum and I'm unable to create new rows in the DataGridView, but I am definitely not bound to using a DataTable as my DataSource, it was simply the option I have semi-working. A: I do not know if that would work with a DataGridView column but it works with ComboBoxes: comboBox1.DataSource = Enum.GetValues(typeof(MyEnum)); and: MyEnum value = (MyEnum)comboBox1.SelectedValue; UPDATE: It works with DataGridView columns too, just remember to set the value type. DataGridViewComboBoxColumn col = new DataGridViewComboBoxColumn(); col.Name = "My Enum Column"; col.DataSource = Enum.GetValues(typeof(MyEnum)); col.ValueType = typeof(MyEnum); dataGridView1.Columns.Add(col); A: Or, if you need to do some filtering of the enumerator values, you can loop through Enum.GetValues(typeof(EnumeratorName)) and add the ones you want using: dataGridViewComboBoxColumn.Items.Add(EnumeratorValue) As an aside, rather than using a DataTable, you can set the DataSource of the DataGridView to a BindingSource object, with the DataSource of the BindingSource object set to a BindingList<Your Class>, which you populate by passing an IList into the constructor. Actually, I'd be interested to know from anyone if this is preferable to using a DataTable in situations where you don't already have one (i.e. it is returned from a database call).
Create drop down list options from enum in a DataGridView
I currently have a class and I'm trying to create an easy GUI to create a collection of this class. Most of the attributes of this class are strings. However, one of the attributes I want the user to be able to set is an Enum. Therefore, I would like the user interface to have a dropdownlist for this enum, to restrict the user from entering a value that is not valid. Currently, I am taking the initial list of objects, adding them to a DataTable and setting the DataSource of my DataGridView to the table. Works nicely, even creates a checkbox column for the one Boolean property. But, I don't know how to make the column for the enum into a dropdownlist. I am using C# and .NET 2.0. Also, I have tried assigning the DataSource of the DataGridView to the list of my objects, but when I do this, it doesn't help with the enum and I'm unable to create new rows in the DataGridView, but I am definitely not bound to using a DataTable as my DataSource, it was simply the option I have semi-working.
[ "I do not know if that would work with a DataGridView column but it works with ComboBoxes:\ncomboBox1.DataSource = Enum.GetValues(typeof(MyEnum));\n\nand:\nMyEnum value = (MyEnum)comboBox1.SelectedValue;\n\nUPDATE: It works with DataGridView columns too, just remember to set the value type.\nDataGridViewComboBoxColumn col = new DataGridViewComboBoxColumn();\ncol.Name = \"My Enum Column\";\ncol.DataSource = Enum.GetValues(typeof(MyEnum));\ncol.ValueType = typeof(MyEnum);\ndataGridView1.Columns.Add(col);\n\n", "Or, if you need to do some filtering of the enumerator values, you can loop through Enum.GetValues(typeof(EnumeratorName)) and add the ones you want using:\ndataGridViewComboBoxColumn.Items.Add(EnumeratorValue)\n\nAs an aside, rather than using a DataTable, you can set the DataSource of the DataGridView to a BindingSource object, with the DataSource of the BindingSource object set to a BindingList<Your Class>, which you populate by passing an IList into the constructor.\nActually, I'd be interested to know from anyone if this is preferable to using a DataTable in situations where you don't already have one (i.e. it is returned from a database call).\n" ]
[ 43, 3 ]
[]
[]
[ ".net", ".net_2.0", "c#", "user_interface", "winforms" ]
stackoverflow_0000056443_.net_.net_2.0_c#_user_interface_winforms.txt
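To flesh out the BindingList<T> aside in the second answer, here is a rough C# sketch in .NET 2.0 style; the Item class and Status enum are invented for illustration. Because BindingList<T> supports AddNew, the grid's new-row behavior keeps working, and the auto-generated text column for the enum is swapped for a drop-down column.

```csharp
using System;
using System.ComponentModel;
using System.Windows.Forms;

public enum Status { Active, Inactive, Pending }

public class Item
{
    private string name;
    private Status status;
    public string Name { get { return name; } set { name = value; } }
    public Status Status { get { return status; } set { status = value; } }
}

public class GridSetup
{
    // Binds the grid to a BindingList and replaces the auto-generated
    // enum column with a combo box column.
    public static void Bind(DataGridView grid)
    {
        BindingList<Item> items = new BindingList<Item>();
        BindingSource source = new BindingSource();
        source.DataSource = items;

        grid.AutoGenerateColumns = true;
        grid.DataSource = source; // columns are generated here

        DataGridViewComboBoxColumn col = new DataGridViewComboBoxColumn();
        col.DataPropertyName = "Status";
        col.ValueType = typeof(Status);
        col.DataSource = Enum.GetValues(typeof(Status));

        int index = grid.Columns["Status"].Index;
        grid.Columns.Remove("Status");
        grid.Columns.Insert(index, col);
    }
}
```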
Q: Apache XML-RPC Exception Handling What is the easiest way to extract the original exception from an exception returned via Apache's implementation of XML-RPC? A: It turns out that getting the cause exception from the Apache exception is the right approach. } catch (XmlRpcException rpce) { Throwable cause = rpce.getCause(); if(cause != null) { if(cause instanceof ExceptionYouCanHandleException) { handler(cause); } else { throw(cause); } } else { throw(rpce); } } A: According to the XML-RPC Spec it returns the "fault" in the xml. Is this the "Exception" you are referring to or are you referring to a Java Exception generated while making the XML-RPC call? Fault example HTTP/1.1 200 OK Connection: close Content-Length: 426 Content-Type: text/xml Date: Fri, 17 Jul 1998 19:55:02 GMT Server: UserLand Frontier/5.1.2-WinNT <?xml version="1.0"?> <methodResponse> <fault> <value> <struct> <member> <name>faultCode</name> <value><int>4</int></value> </member> <member> <name>faultString</name> <value> <string>Too many parameters.</string> </value> </member> </struct> </value> </fault> </methodResponse>
Apache XML-RPC Exception Handling
What is the easiest way to extract the original exception from an exception returned via Apache's implementation of XML-RPC?
[ "It turns out that getting the cause exception from the Apache exception is the right one. \n} catch (XmlRpcException rpce) {\n Throwable cause = rpce.getCause();\n if(cause != null) {\n if(cause instanceof ExceptionYouCanHandleException) {\n handler(cause);\n }\n else { throw(cause); }\n }\n else { throw(rpce); }\n}\n\n", "According to the XML-RPC Spec it returns the \"fault\" in the xml.\nIs this the \"Exception\" you are referring to or are you refering to a Java Exception generated while making the XML-RPC call?\nFault example\nHTTP/1.1 200 OK\nConnection: close\nContent-Length: 426\nContent-Type: text/xml\nDate: Fri, 17 Jul 1998 19:55:02 GMT\nServer: UserLand Frontier/5.1.2-WinNT\n\n<?xml version=\"1.0\"?>\n<methodResponse>\n <fault>\n <value>\n <struct>\n <member>\n <name>faultCode</name>\n <value><int>4</int></value>\n </member>\n <member>\n <name>faultString</name>\n <value>\n <string>Too many parameters.</string>\n </value>\n </member>\n </struct>\n </value>\n </fault>\n</methodResponse> \n\n" ]
[ 4, 1 ]
[]
[]
[ "exception", "xml_rpc" ]
stackoverflow_0000051593_exception_xml_rpc.txt
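If you only need the fault details out of a response shaped like the XML above, and you are on .NET rather than Java, a small hedged sketch using XmlDocument (no XML-RPC library involved) looks like this:

```csharp
using System;
using System.Xml;

class FaultReader
{
    // Pulls faultCode/faultString out of an XML-RPC fault response.
    static void ReadFault(string responseXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(responseXml);

        XmlNode fault = doc.SelectSingleNode("/methodResponse/fault");
        if (fault == null)
        {
            Console.WriteLine("Not a fault response.");
            return;
        }

        foreach (XmlNode member in fault.SelectNodes(".//member"))
        {
            string name = member.SelectSingleNode("name").InnerText;
            string value = member.SelectSingleNode("value").InnerText;
            Console.WriteLine("{0} = {1}", name, value);
        }
    }
}
```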
Q: How to escape XML content with XSL to safely output it as JSON? How to escape XML content with XSL to safely output it as JSON? A: Sorry, I have found the answer myself on Google (literally): http://code.google.com/p/xml2json-xslt/
How to escape XML content with XSL to safely output it as JSON?
How to escape XML content with XSL to safely output it as JSON?
[ "Sorry, I have myself found the answer on Google (literaly):\nhttp://code.google.com/p/xml2json-xslt/\n" ]
[ 4 ]
[]
[]
[ "escaping", "json", "xml", "xslt" ]
stackoverflow_0000056611_escaping_json_xml_xslt.txt
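If the escaping ends up in application code rather than in the stylesheet, the core of it is small. A hedged C# sketch of a minimal JSON string escaper (deliberately not a full JSON serializer):

```csharp
using System.Text;

static class JsonText
{
    // Escapes a string for safe inclusion inside a JSON string literal.
    public static string Escape(string s)
    {
        StringBuilder sb = new StringBuilder(s.Length + 8);
        foreach (char c in s)
        {
            switch (c)
            {
                case '"':  sb.Append("\\\""); break;
                case '\\': sb.Append("\\\\"); break;
                case '\b': sb.Append("\\b");  break;
                case '\f': sb.Append("\\f");  break;
                case '\n': sb.Append("\\n");  break;
                case '\r': sb.Append("\\r");  break;
                case '\t': sb.Append("\\t");  break;
                default:
                    if (c < ' ')
                        sb.AppendFormat("\\u{0:x4}", (int)c); // other control chars
                    else
                        sb.Append(c);
                    break;
            }
        }
        return sb.ToString();
    }
}
```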
Q: Best way to query disk space on remote server I am trying to nail down free space on a remote server by querying all the drives and then looping until I find the drive I am seeking. Is there a better way to do this? Dim oConn As New ConnectionOptions Dim sNameSpace As String = "\\mnb-content2\root\cimv2" Dim oMS As New ManagementScope(sNameSpace, oConn) Dim oQuery As System.Management.ObjectQuery = New System.Management.ObjectQuery("select FreeSpace,Size,Name from Win32_LogicalDisk where DriveType=3") Dim oSearcher As ManagementObjectSearcher = New ManagementObjectSearcher(oMS, oQuery) Dim oReturnCollection As ManagementObjectCollection = oSearcher.Get() Dim oReturn As ManagementObject For Each oReturn In oReturnCollection 'Disk name Console.WriteLine("Name : " + oReturn("Name").ToString()) 'Free Space in bytes Dim sFreespace As String = oReturn("FreeSpace").ToString() If Left(oReturn("Name").ToString(), 1) = "Y" Then Console.WriteLine(sFreespace) End If Next A: Why not just make your WMI query only pull back where Name='Y:'? So: Dim oQuery As System.Management.ObjectQuery = New System.Management.ObjectQuery("select FreeSpace,Size,Name from Win32_LogicalDisk where DriveType=3 AND Name='Y:'")
Best way to query disk space on remote server
I am trying to nail down free space on a remote server by querying all the drives and then looping until I find the drive I am seeking. Is there a better way to do this? Dim oConn As New ConnectionOptions Dim sNameSpace As String = "\\mnb-content2\root\cimv2" Dim oMS As New ManagementScope(sNameSpace, oConn) Dim oQuery As System.Management.ObjectQuery = New System.Management.ObjectQuery("select FreeSpace,Size,Name from Win32_LogicalDisk where DriveType=3") Dim oSearcher As ManagementObjectSearcher = New ManagementObjectSearcher(oMS, oQuery) Dim oReturnCollection As ManagementObjectCollection = oSearcher.Get() Dim oReturn As ManagementObject For Each oReturn In oReturnCollection 'Disk name Console.WriteLine("Name : " + oReturn("Name").ToString()) 'Free Space in bytes Dim sFreespace As String = oReturn("FreeSpace").ToString() If Left(oReturn("Name").ToString(), 1) = "Y" Then Console.WriteLine(sFreespace) End If Next
[ "Why not just make your WMI query only pull back where name='Y'?\nSo:\nDim oQuery As System.Management.ObjectQuery = New System.Management.ObjectQuery(\"select FreeSpace,Size,Name from Win32_LogicalDisk where DriveType=3 AND name='Y'\")\n\n" ]
[ 9 ]
[]
[]
[ "vb.net", "wmi" ]
stackoverflow_0000056715_vb.net_wmi.txt
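The same filtered query in C#, for reference; the server name is a placeholder, and note again that Win32_LogicalDisk names carry the colon, hence 'Y:':

```csharp
using System;
using System.Management;

class DiskSpace
{
    // Prints free space for one remote drive, filtered in the WQL itself.
    static void PrintFreeSpace(string server, string drive) // drive e.g. "Y:"
    {
        ManagementScope scope = new ManagementScope(@"\\" + server + @"\root\cimv2");
        ObjectQuery query = new ObjectQuery(
            "SELECT FreeSpace, Size, Name FROM Win32_LogicalDisk " +
            "WHERE DriveType = 3 AND Name = '" + drive + "'");

        using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject disk in searcher.Get())
            {
                ulong free = (ulong)disk["FreeSpace"];
                ulong size = (ulong)disk["Size"];
                Console.WriteLine("{0}: {1} of {2} bytes free", disk["Name"], free, size);
            }
        }
    }
}
```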
Q: How to convert from PRTime to .NET datetime I want to convert a number that is in PRTime format (a 64-bit integer representing the number of microseconds since midnight (00:00:00) 1 January 1970 Coordinated Universal Time (UTC)) to a DateTime. Note that this is slightly different than the usual "number of milliseconds since 1/1/1970". A: Dim prTimeInMillis As UInt64 prTimeInMillis = prTime \ 1000 Dim prDateTime As New DateTime(1970, 1, 1) prDateTime = prDateTime.AddMilliseconds(prTimeInMillis) A: DateTime has a constructor that takes Ticks (which are 100 nanoseconds). So take your prTime, multiply it by 10 and add it to the number of ticks representing the Epoch time and you have your conversion. private static DateTime epoch = new DateTime(1970, 1, 1); private static DateTime ConvertPrTime(long time) { return new DateTime(epoch.Ticks + (time*10), DateTimeKind.Utc); }
How to convert from PRTime to .NET datetime
I want to convert a number that is in PRTime format (a 64-bit integer representing the number of microseconds since midnight (00:00:00) 1 January 1970 Coordinated Universal Time (UTC)) to a DateTime. Note that this is slightly different than the usual "number of milliseconds since 1/1/1970".
[ "Dim prTimeInMillis As UInt64\nprTimeInMillis = prTime/1000\n\nDim prDateTime As New DateTime(1970, 1, 1)\nprDateTime = prDateTime.AddMilliseconds(prTimeInMillis)\n\n", "DateTime has a constructor that takes Ticks (which are 100 nanoseconds).\nSo take your prTime, multiply it by 10 and add it to the number of ticks representing the Epoch time and you have your conversion.\nprivate static DateTime epoch = new DateTime(1970, 1, 1);\n\nprivate static DateTime ConvertPrTime(long time)\n{\n return new DateTime(epoch.Ticks + (time*10), DateTimeKind.Utc);\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ ".net", "datetime", "firefox" ]
stackoverflow_0000056638_.net_datetime_firefox.txt
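Both answers boil down to the same arithmetic: PRTime counts microseconds since the 1970 epoch, and a .NET tick is 100 nanoseconds, so multiplying microseconds by 10 gives ticks. A small round-trip sketch in C#:

```csharp
using System;

static class PrTimeConverter
{
    static readonly DateTime Epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    // PRTime (microseconds since the epoch) to a UTC DateTime: 1 us = 10 ticks.
    public static DateTime ToDateTime(long prTime)
    {
        return Epoch.AddTicks(prTime * 10);
    }

    // And back again, truncating to whole microseconds.
    public static long FromDateTime(DateTime utc)
    {
        return (utc.Ticks - Epoch.Ticks) / 10;
    }
}
```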
Q: Differences Between DataSet Merges Is there a difference (performance, overhead) between these two ways of merging data sets? MyTypedDataSet aDataSet = new MyTypedDataSet(); aDataSet.Merge(anotherDataSet); aDataSet.Merge(yetAnotherDataSet); and MyTypedDataSet aDataSet = anotherDataSet; aDataSet.Merge(yetAnotherDataSet); Which do you recommend? A: Those two lines do different things. The first one creates a new set, and then merges a second set into it. The second one sets the ds reference to point to the second set, so: MyTypedDataSet ds1 = new MyTypedDataSet(); ds1.Merge(anotherDataSet); //ds1 is a copy of anotherDataSet ds1.Tables.Add("test") //anotherDataSet does not contain the new table MyTypedDataSet ds2 = anotherDataSet; //ds2 actually points to anotherDataSet ds2.Tables.Add("test"); //anotherDataSet now contains the new table Ok, let's assume that what you meant was: MyClass o1 = new MyClass(); o1.LoadFrom( /* some data */ ); //vs MyClass o2 = new MyClass( /* some data */ ); Then the latter is better, as the former creates an empty object before populating it. However unless initialising an empty class has a high cost or is repeated a large number of times the difference is not that important. A: Your second example does not create a new dataset. It's just a second reference to an existing dataset. A: While Keith is right, I suppose the example was simply badly chosen. Generally, it is better to initialize to the "right" object from the beginning and not construct an intermediate, empty object as in your case. Two reasons: Performance. This should be obvious: Object creation costs time so creating fewer objects is better. Much more important however, it better states your intent. You generally do not intend to create stateless/empty objects. Rather, you intend to create objects with some state or content. Do it. No need to create a useless (because empty) temporary.
Differences Between DataSet Merges
Is there a difference (performance, overhead) between these two ways of merging data sets? MyTypedDataSet aDataSet = new MyTypedDataSet(); aDataSet.Merge(anotherDataSet); aDataSet.Merge(yetAnotherDataSet); and MyTypedDataSet aDataSet = anotherDataSet; aDataSet.Merge(yetAnotherDataSet); Which do you recommend?
[ "Those two lines do different things.\nThe first one creates a new set, and then merges a second set into it.\nThe second one sets the ds reference to point to the second set, so:\nMyTypedDataSet ds1 = new MyTypedDataSet();\nds1.Merge(anotherDataSet);\n//ds1 is a copy of anotherDataSet\nds1.Tables.Add(\"test\")\n\n//anotherDataSet does not contain the new table\n\nMyTypedDataSet ds2 = anotherDataSet;\n//ds12 actually points to anotherDataSet\nds2.Tables.Add(\"test\");\n\n//anotherDataSet now contains the new table\n\n\nOk, let's assume that what you meant was:\nMyClass o1 = new MyClass();\no1.LoadFrom( /* some data */ );\n\n//vs\n\nMyClass o2 = new MyClass( /* some data */ );\n\nThen the latter is better, as the former creates an empty object before populating it.\nHowever unless initialising an empty class has a high cost or is repeated a large number of times the difference is not that important.\n", "Your second example does not create a new dataset. It's just a second reference to an existing dataset.\n", "While Keith is right, I suppose the example was simply badly chosen. Generally, it is better to initialize to the “right” object from the beginning and not construct an intermediate, empty object as in your case. Two reasons:\n\nPerformance. This should be obvious: Object creation costs time so creating less objects is better.\nMuch more important however, it better states your intent. You do generally not intend to create stateless/empty objects. Rather, you intend to create objects with some state or content. Do it. No need to create a useless (because empty) temporary.\n\n" ]
[ 5, 4, 1 ]
[]
[]
[ "c#", "dataset", "initialization", "object" ]
stackoverflow_0000056767_c#_dataset_initialization_object.txt
Q: Starting a new job focused on brownfield application refactoring & Agile I am starting a new job on Monday. The company has a home grown enterprise case management application written in ASP.NET/VB.NET. They are attempting to implement an Agile development process. They have gone so far as to get two people Scrum Master certified and hire an Agile coach. They are currently focused on 6-9 months of refactoring. My question is: what are some good approaches/tooling given this environment for becoming familiar with the code base and being productive as soon as I hit the ground? Any suggestions? A: Great question! I would say the first thing to do is get the daily scrums going. Your part in the scrum will be learning the code. It will provide you a way to ask questions and get a feel for who can help you learn the code. Once you have that guy (or guys) picked out start pair programming with them. Let them drive but ask questions. You will be surprised how much you can pick up that way. Given their bent on Agile, that should be an easy sell. :) Once you have that established, be sure to swap partners every so often so you get a feel for the entire code base. Just sticking with one guy who is doing one part won't give you a big picture but jumping between people will get you a better big picture view of the code. Just my 2 cents. :) Good luck and have fun!! A: Congratulations on the new job! Relax and keep your cool. Read something on here. I guess, the process itself will make sure you are productive as long as you apply common sense :)
Starting a new job focused on brownfield application refactoring & Agile
I am starting a new job on Monday. The company has a home grown enterprise case management application written in ASP.NET/VB.NET. They are attempting to implement an Agile development process. They have gone so far as to get two people Scrum Master certified and hire an Agile coach. They are currently focused on 6-9 months of refactoring. My question is: what are some good approaches/tooling given this environment for becoming familiar with the code base and being productive as soon as I hit the ground? Any suggestions?
[ "Great question!\nI would say the first thing to do is get the daily scrums going. Your part in the scrum will be learning the code. It will provide you a way to ask questions and get a feel for who can help you learn the code.\nOnce you have that guy (or guys) picked out start pair programming with them. Let them drive but ask questions. You will be surprised how much you can pick up that way. Given their bend on Agile, that should be an easy sell. :)\nOnce you have that established, be sure to swap partners every so often so you get a feel for the enitre code base. Just sticking woth one guy who is doing one part won't give you a big picture but jumping between people will get you a better big picture view of the code.\nJust my 2 cents. :) Good luck and have fun!!\n", "Congratulations on the new job!\nRelax and keep your cool. Read something on here.\nI guess, the process itself will make sure you are productive as long as you apply common sense :)\n" ]
[ 4, 2 ]
[]
[]
[ ".net", "brownfield", "refactoring", "vb.net" ]
stackoverflow_0000056764_.net_brownfield_refactoring_vb.net.txt
Q: Custom Counter Creation Through Web Application I have a .NET 3.5 web application for which I have implemented a class called CalculationsCounterManager (code below). This class has some shared/static members that manage the creation and incrementing of two custom performance counters that monitor data calls to a SQL Server database. Execution of these data calls drives the creation of the counters if they don't exist. Of course, everything works fine through the unit tests that are executed through the nUnit GUI for this class. The counters are created and incremented fine. However, when the same code executes through the ASPNET worker process, the following error message occurs: "Requested registry access is not allowed.". This error happens on line 44 in the CalculationsCounterManager class when a read is done to check if the counter category already exists. Does anyone know a way to provide enough privileges to the worker process account in order to allow it to create the counters in a production environment without opening the server up to security problems? Namespace eA.Analytics.DataLayer.PerformanceMetrics ''' <summary> ''' Manages performance counters for the calculations data layer assembly ''' </summary> ''' <remarks>GAJ 09/10/08 - Initial coding and testing</remarks> Public Class CalculationCounterManager Private Shared _AvgRetrieval As PerformanceCounter Private Shared _TotalRequests As PerformanceCounter Private Shared _ManagerInitialized As Boolean Private Shared _SW As Stopwatch ''' <summary> ''' Creates/recreates the perf. counters if they don't exist ''' </summary> ''' <param name="recreate"></param> ''' <remarks></remarks> Public Shared Sub SetupCalculationsCounters(ByVal recreate As Boolean) If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = False Or recreate = True Then Dim AvgCalcsProductRetrieval As New CounterCreationData(CounterSettings.AvgProductRetrievalTimeCounterName, _ CounterSettings.AvgProductRetrievalTimeCounterHelp, _ CounterSettings.AvgProductRetrievalTimeCounterType) Dim TotalCalcsProductRetrievalRequests As New CounterCreationData(CounterSettings.TotalRequestsCounterName, _ CounterSettings.AvgProductRetrievalTimeCounterHelp, _ CounterSettings.TotalRequestsCounterType) Dim CounterData As New CounterCreationDataCollection() ' Add counters to the collection.
CounterData.Add(AvgCalcsProductRetrieval) CounterData.Add(TotalCalcsProductRetrievalRequests) If recreate = True Then If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = True Then PerformanceCounterCategory.Delete(CollectionSettings.CalculationMetricsCollectionName) End If End If If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = False Then PerformanceCounterCategory.Create(CollectionSettings.CalculationMetricsCollectionName, CollectionSettings.CalculationMetricsDescription, _ PerformanceCounterCategoryType.SingleInstance, CounterData) End If End If _AvgRetrieval = New PerformanceCounter(CollectionSettings.CalculationMetricsCollectionName, CounterSettings.AvgProductRetrievalTimeCounterName, False) _TotalRequests = New PerformanceCounter(CollectionSettings.CalculationMetricsCollectionName, CounterSettings.TotalRequestsCounterName, False) _ManagerInitialized = True End Sub Public Shared ReadOnly Property CategoryName() As String Get Return CollectionSettings.CalculationMetricsCollectionName End Get End Property ''' <summary> ''' Determines if the performance counters have been initialized ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Shared ReadOnly Property ManagerInitialized() As Boolean Get Return _ManagerInitialized End Get End Property Public Shared ReadOnly Property AvgRetrieval() As PerformanceCounter Get Return _AvgRetrieval End Get End Property Public Shared ReadOnly Property TotalRequests() As PerformanceCounter Get Return _TotalRequests End Get End Property ''' <summary> ''' Initializes the Average Retrieval Time counter by starting a stopwatch ''' </summary> ''' <remarks></remarks> Public Shared Sub BeginIncrementAvgRetrieval() If _SW Is Nothing Then _SW = New Stopwatch End If _SW.Start() End Sub ''' <summary> ''' Increments the Average Retrieval Time counter by stopping the stopwatch and changing the ''' raw value of the perf counter. ''' </summary> ''' <remarks></remarks> Public Shared Sub EndIncrementAvgRetrieval(ByVal resetStopwatch As Boolean, ByVal outputToTrace As Boolean) _SW.Stop() _AvgRetrieval.RawValue = CLng(_SW.ElapsedMilliseconds) If outputToTrace = True Then Trace.WriteLine(_AvgRetrieval.NextValue.ToString) End If If resetStopwatch = True Then _SW.Reset() End If End Sub ''' <summary> ''' Increments the total requests counter ''' </summary> ''' <remarks></remarks> Public Shared Sub IncrementTotalRequests() _TotalRequests.IncrementBy(1) End Sub Public Shared Sub DeleteAll() If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = True Then PerformanceCounterCategory.Delete(CollectionSettings.CalculationMetricsCollectionName) End If End Sub End Class End Namespace A: No, it’s not possible. You can’t add privileges to the worker process without opening the server up to potential security / DOS problems in a production environment. An installer (like an MSI) usually runs with elevated permissions, and installs / uninstalls the performance counter categories and counters as well as other locked down objects. For example, Windows Installer XML (WiX) has support for Performance Counters...
Custom Counter Creation Through Web Application
I have a .NET 3.5 web application for which I have implemented a class called CalculationsCounterManager (code below). This class has some shared/static members that manage the creation and incrementing of two custom performance counters that monitor data calls to a SQL Server database. Execution of these data calls drives the creation of the counters if they don't exist. Of course, everything works fine through the unit tests that are executed through the nUnit GUI for this class. The counters are created and incremented fine. However, when the same code executes through the ASPNET worker process, the following error message occurs: "Requested registry access is not allowed.". This error happens on line 44 in the CalculationsCounterManager class when a read is done to check if the counter category already exists. Does anyone know a way to provide enough privileges to the worker process account in order to allow it to create the counters in a production environment without opening the server up to security problems? Namespace eA.Analytics.DataLayer.PerformanceMetrics ''' <summary> ''' Manages performance counters for the calculations data layer assembly ''' </summary> ''' <remarks>GAJ 09/10/08 - Initial coding and testing</remarks> Public Class CalculationCounterManager Private Shared _AvgRetrieval As PerformanceCounter Private Shared _TotalRequests As PerformanceCounter Private Shared _ManagerInitialized As Boolean Private Shared _SW As Stopwatch ''' <summary> ''' Creates/recreates the perf. counters if they don't exist ''' </summary> ''' <param name="recreate"></param> ''' <remarks></remarks> Public Shared Sub SetupCalculationsCounters(ByVal recreate As Boolean) If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = False Or recreate = True Then Dim AvgCalcsProductRetrieval As New CounterCreationData(CounterSettings.AvgProductRetrievalTimeCounterName, _ CounterSettings.AvgProductRetrievalTimeCounterHelp, _ CounterSettings.AvgProductRetrievalTimeCounterType) Dim TotalCalcsProductRetrievalRequests As New CounterCreationData(CounterSettings.TotalRequestsCounterName, _ CounterSettings.AvgProductRetrievalTimeCounterHelp, _ CounterSettings.TotalRequestsCounterType) Dim CounterData As New CounterCreationDataCollection() ' Add counters to the collection.
CounterData.Add(AvgCalcsProductRetrieval) CounterData.Add(TotalCalcsProductRetrievalRequests) If recreate = True Then If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = True Then PerformanceCounterCategory.Delete(CollectionSettings.CalculationMetricsCollectionName) End If End If If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = False Then PerformanceCounterCategory.Create(CollectionSettings.CalculationMetricsCollectionName, CollectionSettings.CalculationMetricsDescription, _ PerformanceCounterCategoryType.SingleInstance, CounterData) End If End If _AvgRetrieval = New PerformanceCounter(CollectionSettings.CalculationMetricsCollectionName, CounterSettings.AvgProductRetrievalTimeCounterName, False) _TotalRequests = New PerformanceCounter(CollectionSettings.CalculationMetricsCollectionName, CounterSettings.TotalRequestsCounterName, False) _ManagerInitialized = True End Sub Public Shared ReadOnly Property CategoryName() As String Get Return CollectionSettings.CalculationMetricsCollectionName End Get End Property ''' <summary> ''' Determines if the performance counters have been initialized ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Shared ReadOnly Property ManagerInitialized() As Boolean Get Return _ManagerInitialized End Get End Property Public Shared ReadOnly Property AvgRetrieval() As PerformanceCounter Get Return _AvgRetrieval End Get End Property Public Shared ReadOnly Property TotalRequests() As PerformanceCounter Get Return _TotalRequests End Get End Property ''' <summary> ''' Initializes the Average Retrieval Time counter by starting a stopwatch ''' </summary> ''' <remarks></remarks> Public Shared Sub BeginIncrementAvgRetrieval() If _SW Is Nothing Then _SW = New Stopwatch End If _SW.Start() End Sub ''' <summary> ''' Increments the Average Retrieval Time counter by stopping the stopwatch and changing the ''' raw value of the perf counter. ''' </summary> ''' <remarks></remarks> Public Shared Sub EndIncrementAvgRetrieval(ByVal resetStopwatch As Boolean, ByVal outputToTrace As Boolean) _SW.Stop() _AvgRetrieval.RawValue = CLng(_SW.ElapsedMilliseconds) If outputToTrace = True Then Trace.WriteLine(_AvgRetrieval.NextValue.ToString) End If If resetStopwatch = True Then _SW.Reset() End If End Sub ''' <summary> ''' Increments the total requests counter ''' </summary> ''' <remarks></remarks> Public Shared Sub IncrementTotalRequests() _TotalRequests.IncrementBy(1) End Sub Public Shared Sub DeleteAll() If PerformanceCounterCategory.Exists(CollectionSettings.CalculationMetricsCollectionName) = True Then PerformanceCounterCategory.Delete(CollectionSettings.CalculationMetricsCollectionName) End If End Sub End Class End Namespace
[ "Yes, it’s not possible. You can’t add privileges to the worker process without opening the server up to potential security / DOS problems in a production environment. An installer (like a MSI) usually runs with elevated permissions, and installs / uninstalls the performance counter categories and counters as well as other locked down objects.\nFor example, Windows Installer XML (WiX) has support for Performance Counters...\n" ]
[ 3 ]
[]
[]
[ ".net" ]
stackoverflow_0000056659_.net.txt
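A minimal C# sketch of the split the answer recommends: category creation runs once under an elevated installer account, and the low-privilege ASP.NET worker process only opens counters that already exist. The category and counter names here are placeholders, not the asker's real settings:

    using System.Diagnostics;

    class CounterInstaller
    {
        // Run once, elevated (e.g. from an MSI custom action or an admin console app).
        public static void Install()
        {
            const string category = "Calculation Metrics";         // placeholder name
            if (!PerformanceCounterCategory.Exists(category))
            {
                CounterCreationDataCollection counters = new CounterCreationDataCollection();
                counters.Add(new CounterCreationData(
                    "Total Requests",                               // placeholder counter
                    "Total calculation requests",
                    PerformanceCounterType.NumberOfItems64));
                PerformanceCounterCategory.Create(
                    category, "Calculations data layer counters",
                    PerformanceCounterCategoryType.SingleInstance, counters);
            }
        }
    }

    // At run time the worker process only opens the existing counter:
    // PerformanceCounter total = new PerformanceCounter("Calculation Metrics", "Total Requests", false);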
Q: GSM Modems, PCs, SMS and Telephone Calls What all would be the requirements for the following scenario: A GSM modem connected to a PC running a web based (ASP.NET) application. In the application the user selects a phone number from a list of phone nos. When he clicks on a button, the PC should call the selected phone number. When the person on the phone responds he should be able to have a conversation with the PC user. Similarly there should be a facility to send SMS. Now I don't want any code listings. I just need to know what would be the requirements besides asp.net, database for storing phone numbers, and GSM modem. Any help in terms of reference websites would be highly appreciated. A: I'll pick some points of your very broad question and answer them. Note that there are other points where others may be of more help... First, a GSM modem is probably not the way you'd want to go as they usually don't allow for concurrency. So unless you just want one user at a time to use your service, you'd probably need another solution. Also, think about cost issues - at least where I live, providing such a service would be prohibitively expensive using a normal GSM modem and a normal contract - but this is drifting into off-topicness. The next issue will be to get voice data from the client to the server (which will relay it to the phone system - using whatever practical means). Pure browser based functionality won't be of much help, so you would absolutely need something plugin based. Flash may work, seeing they provide access to the microphone, but please don't ask me about the details. I've never done anything like this. Also, privacy would be a concern. While GSM data is encrypted, the path between client and server is not encrypted by default. And even if you use SSL, you'd have to convince your users to trust that you don't record all the conversations going on, but this too is more of a political than a coding issue. Finally, you'd have to think of bandwidth. Voice uses a lot of it and also it requires low latency. If you use a SIP trunk, you'll need the bandwidth twice per user: Once from and to your client and once from and to the SIP trunk. Calculate with 10-64 KBit/s per user and channel. A feasible architecture would probably be to use a SIP trunk (they optimize on using VoIP as much as possible and thus can provide much lower rates than a GSM provider generally does. Also, they allow for concurrency), an Asterisk box (http://www.asterisk.org - a free PBX), some custom made flash client and a custom made SIP client on the server. All in all, this is quite the undertaking :-) A: You'll need a GSM library. There appear to be a few of these. e.g. http://www.wirelessdevstudio.com/eng/ A: Have a look at the Ekiga project at http://www.Ekiga.org. This provides audio and/or video chat between users using the standard SIP (Session Initiation Protocol) over the Internet. Like most SIP clients, it can also be used to make calls to and receive calls from the telephone network, but this requires an account with a commercial service provider (there are many, and fees are quite reasonable compared to normal phone line accounts). Ekiga uses the open source OPAL library to implement SIP communications (OPAL has support for several VoIP and video over IP standards - see www.opalvoip.org for more info).
GSM Modems, PCs, SMS and Telephone Calls
What all would be the requirements for the following scenario: A GSM modem connected to a PC running a web based (ASP.NET) application. In the application the user selects a phone number from a list of phone nos. When he clicks on a button, the PC should call the selected phone number. When the person on the phone responds he should be able to have a conversation with the PC user. Similarly there should be a facility to send SMS. Now I don't want any code listings. I just need to know what would be the requirements besides asp.net, database for storing phone numbers, and GSM modem. Any help in terms of reference websites would be highly appreciated.
[ "I'll pick some points of your very broad question and answer them. Note that there are other points where others may be of more help...\nFirst, a GSM modem is probably not the way you'd want to go as they usually don't allow for concurrency. So unless you just want one user at the time to use your service, you'd probably need another solution.\nAlso, think about cost issues - at least where I live, providing such a service would be prohibitively expensive using a normal GSM modem and a normal contract - but this is drifting into off-topicness.\nThe next issue will be to get voice data from the client to the server (which will relay it to the phone system - using whatever practical means). Pure browser based functionality won't be of much help, so you would absolutely need something plugin based.\nFlash may work, seeing they provide access to the microphone, but please don't ask me about the details. I've never done anything like this.\nAlso, privacy would be a concern. While GSM data is encrypted, the path between client and server is not per default. And even if you use SSL, you'd have to convince your users trusting you that you don't record all the conversations going on, but this too is more of a political than a coding issue.\nFinally, you'd have to think of bandwidth. Voice uses a lot of it and also it requires low latency. If you use a SIP trunk, you'll need the bandwidth twice per user: Once from and to your client and once from and to the SIP trunk. Calculate with 10-64 KBit/s per user and channel.\nA feasible architecture would probably be to use a SIP trunk (they optimize on using VoIP as much as possible and thus can provide much lower rates than a GSM provider generally does. Also, they allow for concurrency), an Asterisk box (http://www.asterisk.org - a free PBX), some custom made flash client and a custom made SIP client on the server.\nAll in all, this is quite the undertaking :-)\n", "You'll need a GSM library. There appear to be a few of these.\ne.g. http://www.wirelessdevstudio.com/eng/\n", "Have a look at the Ekiga project at http://www.Ekiga.org. \nThis provides audio and or video chat between users using the standard SIP (Session Initiation Protocol) over the Internet. Like most SIP clients, it can also be used to make calls to and receive calls from the telephone network, but this requires an account with a commercial service provider (there are many, and fees are quite reasonable compared to normal phone line accounts).\nEkiga uses the open source OPAL library to implement SIP communications (OPAL has support for several VoIP and video over IP standards - see www.opalvoip.org for more info). \n" ]
[ 1, 0, 0 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000049416_asp.net.txt
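For the SMS half of the scenario, a plain GSM modem is workable on its own. A rough C# sketch using the standard GSM text-mode AT commands (AT+CMGF and AT+CMGS) over a serial port; the COM port name and phone number are assumptions, and real code would parse the modem's responses instead of sleeping:

    using System.IO.Ports;
    using System.Threading;

    class SmsSketch
    {
        static void Main()
        {
            using (SerialPort port = new SerialPort("COM1", 9600)) // port name is a guess
            {
                port.Open();
                port.Write("AT+CMGF=1\r");                         // switch to text mode
                Thread.Sleep(500);                                 // crude; read "OK" instead
                port.Write("AT+CMGS=\"+15551234567\"\r");          // placeholder number
                Thread.Sleep(500);                                 // wait for the "> " prompt
                port.Write("Hello from the PC" + (char)26);        // Ctrl-Z sends the message
            }
        }
    }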
Q: What is the proper regular expression for an unescaped backslash before a character? Let's say I want to represent \q (or any other particular "backslash-escaped character"). That is, I want to match \q but not \\q, since the latter is a backslash-escaped backslash followed by a q. Yet \\\q would match, since it's a backslash-escaped backslash followed by a backslash-escaped q. (Well, it would match the \q at the end, not the \\ at the beginning.) I know I need a negative lookbehind, but they always tie my head up in knots, especially since the backslashes themselves have to be escaped in the regexp. A: Updated: My new and improved Perl regex, supporting more than 3 backslashes: /(?<!\\) # Not preceded by a single backslash (?>\\\\)* # an even number of backslashes \\q # Followed by a \q /x; or if your regex library doesn't support extended syntax. /(?<!\\)(?>\\\\)*\\q/ Output of my test program: q does not match \q does match \\q does not match \\\q does match \\\\q does not match \\\\\q does match Older version /(?:(?<!\\)|(?<=\\\\))\\q/ A: Leon Timmermans got exactly what I was looking for. I would add one small improvement for those who come here later: /(?<!\\)(?:\\\\)*\\q/ The additional ?: at the beginning of the (\\\\) group makes it not saved into any match-data. I can't imagine a scenario where I'd want the text of that saved. A: Now You Have Two Problems. Just write a simple parser. If the regex ties your head up in knots now, just wait a month.
What is the proper regular expression for an unescaped backslash before a character?
Let's say I want to represent \q (or any other particular "backslash-escaped character"). That is, I want to match \q but not \\q, since the latter is a backslash-escaped backslash followed by a q. Yet \\\q would match, since it's a backslash-escaped backslash followed by a backslash-escaped q. (Well, it would match the \q at the end, not the \\ at the beginning.) I know I need a negative lookbehind, but they always tie my head up in knots, especially since the backslashes themselves have to be escaped in the regexp.
[ "Updated:\nMy new and improved Perl regex, supporting more than 3 backslashes:\n/(?<!\\\\) # Not preceded by a single backslash\n (?>\\\\\\\\)* # an even number of backslashes\n \\\\q # Followed by a \\q\n /x;\nor if your regex library doesn't support extended syntax.\n/(?<!\\\\)(?>\\\\\\\\)*\\\\q/\nOutput of my test program:\nq does not match\n\\q does match\n\\\\q does not match\n\\\\\\q does match\n\\\\\\\\q does not match\n\\\\\\\\\\q does match\nOlder version\n/(?:(?<!\\\\)|(?<=\\\\\\\\))\\\\q/\n", "Leon Timmermans got exactly what I was looking for. I would add one small improvement for those who come here later:\n/(?<!\\\\)(?:\\\\\\\\)*\\\\q/\n\nThe additional ?: at the beginning of the (\\\\\\\\) group makes it not saved into any match-data. I can't imagine a scenario where I'd want the text of that saved.\n", "Now You Have Two Problems.\nJust write a simple parser. If the regex ties your head up in knots now, just wait a month.\n" ]
[ 19, 3, 0 ]
[ "The best solution to this is to do your own string parsing as Regular Expressions don't really support what you are trying to do. (rep @Frank Krueger if you go this way, I'm just repeating his advice)\nI did however take a shot at a exclusionary regex. This will match all strings that do not fit your criteria of a \"\\\" followed by a character.\n(?:[\\\\][\\\\])(?!(([\\\\](?![\\\\])[a-zA-Z])))\n\n" ]
[ -1 ]
[ "regex" ]
stackoverflow_0000056554_regex.txt
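A small C# test harness for the accepted pattern, mirroring the Perl output above (only strings ending in an odd run of backslashes before the q match):

    using System;
    using System.Text.RegularExpressions;

    class UnescapedBackslashDemo
    {
        static void Main()
        {
            // Verbatim string, so the pattern reads exactly as in the answer.
            Regex re = new Regex(@"(?<!\\)(?:\\\\)*\\q");
            foreach (string s in new[] { @"q", @"\q", @"\\q", @"\\\q", @"\\\\q", @"\\\\\q" })
                Console.WriteLine("{0,-7} {1}", s, re.IsMatch(s) ? "matches" : "does not match");
        }
    }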
Q: The difference between the connection strings in SQLCLR I was reviewing some code that a consultant checked in and noticed they were using SQLCLR. I don't have any experience with it so thought I would research what it was about. I noticed that they used Dim cn As New SqlConnection("server=LOCALHOST;integrated security=yes;database=" & sDb) instead of DIM conn As New SqlConnection("context connection=true") I'm wondering what the difference is, since it's localhost on the first? A: The context connection uses the user's already established connection to the server. So you inherit things like their database context, connection options, etc. Using localhost will connect to the server using a normal shared memory connection. This can be useful if you don't want to use the user's connection (i.e. if you want to connect to a different database, or with different options, etc). In most cases you should use the context connection, since it doesn't create a separate connection to the server. Also, be warned that using a separate connection means you are not part of the user's transaction and are subject to normal locking semantics. A: Consider a big office phone system: My office has an internal phone system. But every phone also has an external phone number (virtual numbers that utilize one of a group of real TELCO lines). I can call another office by dialing their phone extension directly and the call will route through our internal phone system (one hop). Alternatively I could dial that phone's public number and the call routes out from the building's system to the TELCO switching office, then back through the building's system then to the office extension (3 hops). The first SQL connection behaves as any standard SQL connection would when connecting to the server specified in the connection string. A new connection is created using the standard native SQL connectivity. This behaves like dialing the full public phone number of another office phone. Sure, you are connecting to the local machine, but the connection is routed differently. The context connection has the new SqlConnection instance using the existing connection that is executing the SQLCLR object. It's using the existing/local context. This is like dialing my office mate's extension directly. Local context and more efficient. Although I'm not positive, I believe that when using the context connection, the calls to the SQLCLR objects also then participate in the context's transaction. Someone please correct me if I'm wrong. Peter
The difference between the connection strings in SQLCLR
I was reviewing some code that a consultant checked in and noticed they were using SQLCLR. I don't have any experience with it so thought I would research what it was about. I noticed that they used Dim cn As New SqlConnection("server=LOCALHOST;integrated security=yes;database=" & sDb) instead of DIM conn As New SqlConnection("context connection=true") I'm wondering what the difference is, since it's localhost on the first?
[ "The context connection uses the user's already established connection to the server. So you inherit things like their database context, connection options, etc.\nUsing localhost will connect to the server using a normal shared memory connection. This can be useful if you don't want to use the user's connection (i.e. if you want to connect to a different database, or with different options, etc).\nIn most cases you should use the context connection, since it doesn't create a separate connection to the server.\nAlso, be warned that using a separate connection means you are not part of the user's transaction and are subject to normal locking semantics.\n", "Consider a big office phone systems:\nMy office has an internal phone system. But every phone also has an external phone number (virtual numbers that utilize one of a group of real TELCO lines). I can call another office by dialing their phone extension directly and the call will route through our internal phone system (one hop). Alternatively I could dial that phone's public number and the call routes out from the building's system to the TELCO switching office, then back through the building's system then to the office extension (3 hops).\nThe first SQL connection behaves as any standard SQL connection would when connecting to the server specified in the connection string. A new connection is created using the standard native SQL connectivity. This behaves like dialing the full public phone number of another office phone. Sure, you are connecting to the local machine, but the connection is routed differently.\nThe context connection has the new SqlConnection instance using the existing connection that is executing the SQLCLR object. It's using the existing/local context. This is like dialing my office mate's extension directly. Local context and more efficient.\nAlthough I'm not positive, I believe that when using the context connection, the calls to the SQLCLR objects also then participate in the context's transaction. Someone please correct me if I'm wrong.\nPeter\n" ]
[ 6, 1 ]
[]
[]
[ "sqlclr" ]
stackoverflow_0000056801_sqlclr.txt
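A minimal SQLCLR procedure sketch in C# showing the context connection in use; the query is arbitrary and error handling is omitted:

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public class Procedures
    {
        [SqlProcedure]
        public static void CountObjects()
        {
            // Rides on the caller's connection, database context and transaction.
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM sys.objects", conn))
            {
                conn.Open();
                SqlContext.Pipe.Send(cmd.ExecuteScalar().ToString());
            }
        }
    }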
Q: UI Thread Safety Any suggestions on the best way to ensure thread safety when changing the properties on Form controls? I have been using Me.Invoke in the past, and I was wondering if you have pros/cons, comments, suggestions, etc. A: Invoke is the proper way to do it if you're pushing stuff at the form from another thread. But you might consider whether the form might be better pulling data itself, perhaps from a timer, and perhaps less frequently than a background process might push individual updates. A: I do Control.Invoke on the target control rather than the entire form, but that's just me. I claim no advanced knowledge of win forms, I just have to use it every now and then.
UI Thread Safety
Any suggestions on the best way to ensure thread safety when changing the properties on Form controls? I have been using Me.Invoke in the past, and I was wondering if you have pros/cons, comments, suggestions, etc.
[ "Invoke is the proper way to do it if you're pushing stuff at the form from another thread.\nBut you might consider whether the form might be better pulling data itself, perhaps from a timer, and perhaps less frequently than a background process might push individual updates.\n", "I do control. Invoke on the target control rather than the entire form, but that's just me. I claim no advanced knowledge of win forms, I just have to use it every now and then.\n" ]
[ 3, 0 ]
[]
[]
[ ".net_3.5", "form_control", "multithreading", "vb.net", "winforms" ]
stackoverflow_0000056886_.net_3.5_form_control_multithreading_vb.net_winforms.txt
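A minimal C# version of the standard InvokeRequired pattern both answers are alluding to; statusLabel is a stand-in for whichever control is being updated from the worker thread:

    using System;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        private readonly Label statusLabel = new Label();   // stand-in control

        // Safe to call from any thread.
        public void SetStatus(string text)
        {
            if (statusLabel.InvokeRequired)
            {
                // Re-invoke this same method on the UI thread.
                statusLabel.Invoke(new Action<string>(SetStatus), text);
            }
            else
            {
                statusLabel.Text = text;
            }
        }
    }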
Q: Architecture for modeling A common solution to building a model of a system which consists of many items of different types is to create a modular system, where each module is responsible for a particular type. For example, there will be a module for wombats WombatModule:IModule, where the IModule interface has methods like GetCount() (to find the number of wombats) and Update() (to update all wombats' state). A more object-oriented approach would be to have a class for every item type and create an instance for every item. That will make class Wombat:IItem with methods like Update() (to update this one wombat). From a code perspective the difference is negligible, but run-time is significantly different. The module-oriented solution is certainly faster: less object creation, easier to optimize operations common for all wombats. Problems come when the number of types and modules grows. Either you lose most of the performance advantage because each module only supports several items, or the modules' complexity grows to accommodate slightly different items of one general type - say, fat and slim wombats. Or both. At least once I've seen it degrade into a poor state when all WombatModule does is keep a collection of hidden Wombat objects and run their methods in a loop. When performance is less of a problem than long-term development, can you identify any architectural reasons to use modules instead of per-item objects? Maybe there's another possibility I'm missing? A: I work for an embedded software company and our code base is quite large. The code base was designed with modules that perform specific functions and maintain some objects - also some objects exist as just independent objects. The largest problem we see with our approach is distinguishing the boundaries of modules. Our modules have tended to grow unnecessarily complicated over time and slowly grow to perform functions that were originally outside of their boundaries. I would say the best direction to take would be to design modularly and implement very specific objects and to make a dedicated effort to not let modules grow larger than you intend.
Architecture for modeling
A common solution to building a model of a system which consists of many items of different types is to create a modular system, where each module is responsible for a particular type. For example, there will be a module for wombats WombatModule:IModule, where the IModule interface has methods like GetCount() (to find the number of wombats) and Update() (to update all wombats' state). A more object-oriented approach would be to have a class for every item type and create an instance for every item. That will make class Wombat:IItem with methods like Update() (to update this one wombat). From a code perspective the difference is negligible, but run-time is significantly different. The module-oriented solution is certainly faster: less object creation, easier to optimize operations common for all wombats. Problems come when the number of types and modules grows. Either you lose most of the performance advantage because each module only supports several items, or the modules' complexity grows to accommodate slightly different items of one general type - say, fat and slim wombats. Or both. At least once I've seen it degrade into a poor state when all WombatModule does is keep a collection of hidden Wombat objects and run their methods in a loop. When performance is less of a problem than long-term development, can you identify any architectural reasons to use modules instead of per-item objects? Maybe there's another possibility I'm missing?
[ "I work for an embedded software company and our code base is quite large. The code base was designed with modules that perform specific functions and maintain some objects - also some objects exist as just independent objects. The largest problem we see with our approach is distinguishing the boundaries of modules. Our modules have tended to grow unnecessarily complicated over time and slowly grow to perform functions that were originally outside of it's boundaries. I would say the best direction to take would be to design modularly and implement very specific objects and to make a dedicated effort to not let modules grow larger than you intend.\n" ]
[ 1 ]
[]
[]
[ "architecture", "modeling", "modularity" ]
stackoverflow_0000056716_architecture_modeling_modularity.txt
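A compact C# restatement of the two designs being compared, purely for illustration; the interface and class names come from the question and the state fields are placeholders:

    using System.Collections.Generic;

    // One object per *type*: the module owns all wombat state, updated in one loop.
    interface IModule { int GetCount(); void Update(); }

    class WombatModule : IModule
    {
        private struct WombatState { public double X, Y; }          // placeholder state
        private readonly List<WombatState> _wombats = new List<WombatState>();
        public int GetCount() { return _wombats.Count; }
        public void Update() { /* tight, cache-friendly loop over _wombats */ }
    }

    // One object per *item*: each wombat carries and updates its own state.
    interface IItem { void Update(); }

    class Wombat : IItem
    {
        private double _x, _y;                                      // placeholder state
        public void Update() { _x += 1; _y += 1; }                  // update just this wombat
    }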
Q: Is a Flex debugger included in the SDK? I have been writing Flex applications for a few months now and luckily have not needed a full debugger as of yet; so far I have just used a few Alert boxes... Is there an available debugger that is included in the free Flex SDK? I am not using FlexBuilder (I have been using Emacs and compiling with ant). If not, how do you debug Flex applications without FlexBuilder? (note: I have no intentions of using flexbuilder) A: A debugger called fdb is included in the Flex SDK. Here's some documentation on how to use it: Adobe DevCenter: Debugging Client-Side Code in Flex Applications Flex 3 Help: Using the Command-Line Debugger A: I had the same problem when programming with ActionScript and having to test it on a browser. Try this. It involves using Firefox (which I believe you do) and FireBug to receive the debug messages.
Is a Flex debugger included in the SDK?
I have been writing Flex applications for a few months now and luckily have not needed a full debugger as of yet; so far I have just used a few Alert boxes... Is there an available debugger that is included in the free Flex SDK? I am not using FlexBuilder (I have been using Emacs and compiling with ant). If not, how do you debug Flex applications without FlexBuilder? (note: I have no intentions of using flexbuilder)
[ "A debugger called fdb is included in the Flex SDK. Here's some documentation on how to use it:\n\nAdobe DevCenter: Debugging Client-Side Code in Flex Applications\nFlex 3 Help: Using the Command-Line Debugger\n\n", "I had the same problem when programming with ActionScript and having to test it on a browser. Try this. It involves using Firefox (which I believe you do) and FireBug to receive the debug messages.\n" ]
[ 6, 1 ]
[]
[]
[ "apache_flex" ]
stackoverflow_0000056680_apache_flex.txt
Q: Alternatives for enhanced reading and parsing text files using .NET I need to read from a variety of different text files (I've some delimited files and some fixed width files). I've considered parsing the files line by line (slow using the File.ReadLine type methods) and reading the file using the ODBC text driver (faster) but does anyone have any other (better) suggestions? I'm using .NET/C#. A: I'm not sure you could really do a text-and-Excel file parser, not unless by Excel file you mean a comma/pipe/tab delimited file, which is actually just another text file. Reading actual excel files requires you to use the MS Office libraries. For delimited text file parsing, you could look into FileHelpers -- open source and they pretty much have it covered. Not sure if it will match your speed requirements though. A: Answering my own question: I ended up using the Microsoft.VisualBasic.FileIO.TextFieldParser object, see: http://msdn.microsoft.com/en-us/library/f68t4563.aspx (example of implementation here) This allows me to handle csv files without worrying about how to cope with whether fields are enclosed in quotes, contain commas, escaped quotes etc. A: Ignoring the Excel part (which you say isn't important): I've found LINQ to be fairly useful in parsing txt files (pipe-delimited or csv) e.g. This reads a pipe-delimited file skipping the header row and creates an IEnumerable as the result: var records = from line in File.ReadAllLines(@"c:\blah.txt").Skip(1) let parts = line.Split('|') select parts; A: If the files are relatively small you can use the File class. It has these methods which may help you: ReadAllBytes ReadAllLines ReadAllText A: Your question is a little vague. I assume that the text files contain structured data, not just random lines of text. If you are parsing the files yourself then .NET has a library function to read all the lines from a text file into an array of strings (File.ReadAllLines). If you know your files are small enough to hold in memory, then you can use this method and iterate over the array using a regular expression to validate & extract the fields. Excel files are a different ball game. .XLS files are binary, not text, so you would need to use a 3rd party library to access them. .XLSX files from Excel 2007 contain compressed XML data, so once again you would need to decompress the XML then use an XML parser to get at the data. I would not recommend writing your own XML parser, unless you feel the need for the intellectual exercise. A: I agree with John, For example:- using System.IO; ... public class Program { public static void Main() { foreach(string s in File.ReadAllLines(@"c:\foo\bar\something.txt")) { // Do something with each line... } } } A: The File reading process is not slow if you read the whole file at once using the File class and the methods suggested by John. Depending upon the file's size and what you want to do with them, it may use more or less memory. I'd suggest you try with File.ReadAllText (or whatever is appropriate for you) A: Regarding reading XLS Files: If you have Microsoft Office XP and above, you have access to the already included .NET SDK Office Libraries, where you can "natively" read XLS files, Word, PPT, etc. Please note that under Office XP you have to manually check that during install (unless you had .NET previously installed). I don't know if these libraries are available as a separate package if you don't have Microsoft Office.
For some obscure reason, all these libraries (including the latest versions from Office 2007 -a.k.a.: Office 12), are COM components that are a pain to use, cause ugly dependencies and are not backwards compatible. I.E.: if you have some methods that work with Office XP (Office11), and you install that onto a customer with Office 12, it doesn't work, because some interfaces were changed. So you need to maintain two sets of "libraries" and methods to deal with that. The same holds true if you use Office 12 libraries to program, and your customer has Office 11. Your libraries don't work. :S I don't know why Microsoft never created a Microsoft.Office.XXXX managed library (wrapper) around those ugly things. Anyways, your question is quite strange, try to follow some advice here. Good luck! A: The ODBC text driver is now rather out of date - it has no Unicode support. Amazingly MS Excel still uses it, so if you open a Unicode CSV in Excel 2007 (rather than import it) you lose all non-ASCII chars. Your best bet is to use .Net's file reading methods, as others have suggested.
Alternatives for enhanced reading and parsing text files using .NET
I need to read from a variety of different text files (I've some delimited files and some fixed width files). I've considered parsing the files line by line (slow using the File.ReadLine type methods) and reading the file using the ODBC text driver (faster) but does anyone have any other (better) suggestions? I'm using .NET/C#.
[ "I'm not sure you could really do a text-and-Excel file parser, not unless by Excel file you mean a comma/pipe/tab delimited file, which is actually just another text file. Reading actual excel files require you to use the MS Office libraries.\nFor delimited text file parsing, you could look into FileHelpers -- open source and they pretty much have it covered. Not sure if it will match your speed requirements though.\n", "Answering my own question:\nI ended up using the Microsoft.VisualBasic.FileIO.TextFieldParser object, see:\nhttp://msdn.microsoft.com/en-us/library/f68t4563.aspx\n(example of implementation here)\nThis allows me to handle csv files without worrying about how to cope with whether fields are enclosed in quotes, contain commas, escaped quotes etc.\n", "Ignoring the Excel part (which you say isn't important):\nI've found LINQ to be fairly useful in parsing txt files (pipe-delimited or csv)\ne.g. This reads a pipe-delimited file skipping the hader row and creates an IEnumerable as the result:\nvar records =\n from line in File.ReadAllLines(@\"c:\\blah.txt\").Skip(1)\n let parts = line.Split('|')\n select parts;\n", "If the files are relatively small you can use the File class. It has these methods which may help you:\n\nReadAllBytes\nReadAllLines\nReadAllText \n\n", "Your question is a little vague. I assume that the text files contain structured data, not just random lines of text. \nIf you are parsing the files yourself then .NET has a library function to read all the lines from a text file into an array of strings (File.ReadAllLines). If you know your files are small enough to hold in memory, then you can use this method and iterate over the array using a regular expression to validate & extract the fields.\nExcel files are a different ball game. .XLS files are binary, not text, so you would need to use a 3rd party library to access them. .XLSX files from Excel 2007 contain compressed XML data, so once again you would need to decompress the XML then use an XML parser to get at the data. I would not recommend writing your own XML parser, unless you feel the need for the intellectual exercise.\n", "I agree with John,\nFor example:-\nusing System.IO;\n\n...\n\npublic class Program {\n public static void Main() {\n foreach(string s in File.ReadAllLines(@\"c:\\foo\\bar\\something.txt\") {\n // Do something with each line...\n }\n }\n}\n\n", "The File reading process is not slow if you read all file at once using the File class and the methods suggested by John. Depending upon the file's size and what you want to do with them, it may use more or less memory. I'd suggest you try with File.ReadAllText (or whatever is appropriate for you)\n", "Regarding reading XLS Files: \nIf you have Microsoft Office XP and above, you have access to the already included .NET SDK Office Libraries, where you can \"natively\" read XLS files, Word, PPT, etc. Please note that under Office XP you have to manually check that during install (unless you had .NET previously installed). \nI don't know if these libraries are available as a separate package if you don't have Microsoft Office. \nFor some obscure reason, all these libraries (including the latest versions from Office 2007 -a.k.a.: Office 12), are COM components that are a pain to use, cause ugly dependencies and are not backwards compatible. I.E.: if you have some methods that work with Office XP (Office11), and you install that onto a customer with Office 12, it doesn't work, because some interfaces where changed. 
So you need to maintain two sets of \"libraries\" and methods to deal with that. The same holds true if you use Office 12 libraries to program, and your customer has Office 11. Your libraries don't work. :S\nI don't know why Microsoft never created a Microsoft.Office.XXXX managed library (wrapper) around those ugly things. \nAnyways, your question is quite strange, try to follow some advice here. Good luck!\n", "The ODBC text driver is now rather out of date - it has no Unicode support.\nAmazingly MS Excel still uses it, so if you open a Unicode CSV in Excel 2007 (rather than import it) you lose all non-ASCII chars.\nYour best bet is to use .Net's file reading methods, as others have suggested.\n" ]
[ 5, 4, 3, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ ".net", "file_io", "text_files" ]
stackoverflow_0000034182_.net_file_io_text_files.txt
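A hedged C# sketch of the TextFieldParser approach from the accepted answer (the class lives in Microsoft.VisualBasic.dll but is usable from any .NET language); the path and delimiter are assumptions:

    using System;
    using Microsoft.VisualBasic.FileIO;   // add a reference to Microsoft.VisualBasic.dll

    class CsvSketch
    {
        static void Main()
        {
            using (TextFieldParser parser = new TextFieldParser(@"c:\data\input.csv"))
            {
                parser.TextFieldType = FieldType.Delimited;
                parser.SetDelimiters(",");
                // For fixed-width files: FieldType.FixedWidth plus SetFieldWidths(...).
                while (!parser.EndOfData)
                {
                    string[] fields = parser.ReadFields();   // quotes/embedded commas handled
                    Console.WriteLine(string.Join("|", fields));
                }
            }
        }
    }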
Q: Opening a file stored in a database in .NET I'm storing a Word document in a SQL Server 2005 database in a varbinary(max) column. Is there a way to open this document from a VB.NET Windows Forms application without serialising to a file first (i.e. directly from the byte array I've read from the database)? A: Depends on what's reading it. If it's Word, you'll probably have to serialize to a file, but if it's a function or library that can take an IO.Stream then you could wrap a new MemoryStream around the byte array and pass that. A: Not really. You need to treat it like an e-mail attachment, where the file is generally copied to a temp folder that is cleaned out periodically.
Opening a file stored in a database in .NET
I'm storing a Word document in a SQL Server 2005 database in a varbinary(max) column. Is there a way to open this document from a VB.NET Windows Forms application without serialising to a file first (i.e. directly from the byte array I've read from the database)?
[ "Depends on what's reading it. If it's Word, you'll probably have to serialize to a file, but if it's a function or library that can take an IO.Stream then you could wrap a new MemoryStream around the byte array and pass that.\n", "Not really. You need to treat it like an e-mail attachment, where the file is generally copied to a temp folder that is cleaned out periodically.\n" ]
[ 3, 2 ]
[]
[]
[ "sql_server", "vb.net" ]
stackoverflow_0000056906_sql_server_vb.net.txt
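A small C# sketch of both routes the answers describe; the temp-file naming is simplistic and a real application would clean the file up periodically:

    using System.IO;

    class OpenDocSketch
    {
        static void OpenFromDatabase(byte[] docBytes)
        {
            // Route 1: a stream-aware consumer can wrap the bytes directly.
            MemoryStream stream = new MemoryStream(docBytes);

            // Route 2: Word itself wants a file, so write a temp copy (the
            // "e-mail attachment" pattern) and open it with the shell.
            string tempPath = Path.Combine(Path.GetTempPath(), "document.doc");
            File.WriteAllBytes(tempPath, docBytes);
            System.Diagnostics.Process.Start(tempPath);
        }
    }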
Q: Race Condition Analysers for .NET I've seen there are some race condition analysis tools for C++, C and Java. Anyone know of any static analysis tools that do the same for .NET? A: I haven't ever used this tool, but it looks like TypeMock has a tool called Racer that can handle this. Roy Osherove blogged about it here. Another post with a better preview is here.
Race Condition Analysers for .NET
I've seen there are some race condition analysis tools for C++, C and Java. Anyone know of any static analysis tools that do the same for .NET?
[ "I haven't ever used this tool, but it looks like TypeMock has a tool called Racer that can handle this. Roy Osherove blogged about it here. Another post with a better preview is here.\n" ]
[ 4 ]
[]
[]
[ ".net", "race_condition", "static_analysis" ]
stackoverflow_0000056949_.net_race_condition_static_analysis.txt
Q: Right Align Numeric Data in SQL Server We all know T-SQL's string manipulation capabilities sometimes leave much to be desired... I have a numeric field that needs to be output in T-SQL as a right-aligned text column. Example: Value ---------- 143.55 3532.13 1.75 How would you go about that? A good solution ought to be clear and compact, but remember there is such a thing as "too clever". I agree this is the wrong place to do this, but sometimes we're stuck by forces outside our control. Thank you. A: The STR function has an optional length argument as well as a number-of-decimals one. SELECT STR(123.45, 6, 1) ------ 123.5 (1 row(s) affected) A: If you MUST do this in SQL you can use the following code (This code assumes that you have no numerics that are bigger than 40 chars): SELECT REPLICATE(' ', 40 - LEN(CAST(numColumn as varchar(40)))) + CAST(numColumn AS varchar(40)) FROM YourTable A: The easiest way is to pad left: CREATE FUNCTION PadLeft(@PadString nvarchar(100), @PadLength int) RETURNS nvarchar(200) AS begin return replicate(' ',@padlength-len(@PadString)) + @PadString end go print dbo.PadLeft('123.456', 20) print dbo.PadLeft('1.23', 20)
Right Align Numeric Data in SQL Server
We all know T-SQL's string manipulation capabilities sometimes leave much to be desired... I have a numeric field that needs to be output in T-SQL as a right-aligned text column. Example: Value ---------- 143.55 3532.13 1.75 How would you go about that? A good solution ought to be clear and compact, but remember there is such a thing as "too clever". I agree this is the wrong place to do this, but sometimes we're stuck by forces outside our control. Thank you.
[ "The STR function has an optional length argument as well as a number-of-decimals one.\nSELECT STR(123.45, 6, 1)\n\n------\n 123.5\n\n(1 row(s) affected)\n\n", "If you MUST do this in SQL you can use the folowing code (This code assumes that you have no numerics that are bigger than 40 chars):\nSELECT REPLICATE(' ', 40 - LEN(CAST(numColumn as varchar(40)))) + \nCAST(numColumn AS varchar(40)) FROM YourTable\n\n", "The easiest way is to pad left:\nCREATE FUNCTION PadLeft(@PadString nvarchar(100), @PadLength int)\nRETURNS nvarchar(200)\nAS\nbegin\nreturn replicate(' ',@padlength-len(@PadString)) + @PadString\nend\ngo\nprint dbo.PadLeft('123.456', 20)\nprint dbo.PadLeft('1.23', 20)\n\n" ]
[ 17, 1, 1 ]
[]
[]
[ "sql_server", "tsql" ]
stackoverflow_0000056950_sql_server_tsql.txt
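If the formatting can move out of T-SQL entirely (as the question concedes it ideally would), the client side is a one-liner; a C# sketch with an assumed column width of 10:

    using System;

    class PadDemo
    {
        static void Main()
        {
            foreach (decimal v in new[] { 143.55m, 3532.13m, 1.75m })
                Console.WriteLine(v.ToString("0.00").PadLeft(10));  // width 10 assumed
        }
    }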
Q: Converting scripts from ksh to bash I have some ksh scripts which I'd like to convert to run with bash instead. Are there any useful on-line resources for this? I'm really looking for a list of differences between the two shells and any gotchas I might encounter, although all information is welcome :-) A: Have you tried looking at this page? It has a useful matrix of features and links to elsewhere. Also this link, search for: C2) How does bash differ from the Korn shell A: Here's a comparison from HP on the differences between shells: https://web.archive.org/web/20100829200456/http://docs.hp.com/en/B2355-90046/ch15s03.html Here's a great set of UNIX shell tutorials from Richard's Shell Scripting Universe: http://www.injunea.demon.co.uk/index.htm The second is by far one of the most useful scripting resources I have found, and it really helps you learn how to write scripts with portability in mind. Good luck with your conversions. 2022 EDIT: HP retired the comparison page, so I updated the link to an archived version in the wayback machine.
Converting scripts from ksh to bash
I have some ksh scripts which I'd like to convert to run with bash instead. Are there any useful on-line resources for this? I'm really looking for a list of differences between the two shells and any gotchas I might encounter, although all information is welcome :-)
[ "Have you tried looking at this page? It has a useful matrix of features and links to elsewhere.\nAlso this link, search for:\n\nC2) How does bash differ from the Korn shell\n\n", "Here's a comparison from HP on the differences between shells:\n\nhttps://web.archive.org/web/20100829200456/http://docs.hp.com/en/B2355-90046/ch15s03.html\n\nHere's a great set of UNIX shell tutorials from Richard's Shell Scripting Universe:\n\nhttp://www.injunea.demon.co.uk/index.htm\n\nThe second is by far one of the most useful scripting resources I have found, and it really helps you learn how to write scripts with portability in mind.\nGood luck with your conversions.\n2022 EDIT: HP retired the comparison page, so I updated the link to an archived version in the wayback machine.\n" ]
[ 3, 2 ]
[]
[]
[ "bash", "ksh", "unix" ]
stackoverflow_0000056798_bash_ksh_unix.txt
Q: Keyword for the outer class from an anonymous inner class In the following snippet: public class a { public void otherMethod(){} public void doStuff(String str, InnerClass b){} public void method(a){ doStuff("asd", new InnerClass(){ public void innerMethod(){ otherMethod(); } } ); } } Is there a keyword to refer to the outer class from the inner class? Basically what I want to do is outer.otherMethod(), or something of the like, but can't seem to find anything. A: In general you use OuterClassName.this to refer to the enclosing instance of the outer class. In your example that would be a.this.otherMethod() A: OuterClassName.this.outerClassMethod();
Keyword for the outer class from an anonymous inner class
In the following snippet: public class a { public void otherMethod(){} public void doStuff(String str, InnerClass b){} public void method(a){ doStuff("asd", new InnerClass(){ public void innerMethod(){ otherMethod(); } } ); } } Is there a keyword to refer to the outer class from the inner class? Basically what I want to do is outer.otherMethod(), or something of the like, but can't seem to find anything.
[ "In general you use OuterClassName.this to refer to the enclosing instance of the outer class.\nIn your example that would be a.this.otherMethod()\n", "OuterClassName.this.outerClassMethod();\n\n" ]
[ 361, 48 ]
[]
[]
[ "anonymous_inner_class", "java" ]
stackoverflow_0000056974_anonymous_inner_class_java.txt
Q: Using an external "windows"-keyboard under Mac OS X I use a MacBook, but I've got a usual keyboard attached to it. The problem is that the keys don't exactly map 1-to-1. One thing is the APPLE and ALT keys. They map to WIN and ALT, but they are usually physically inverted, so if you want to use them with the same layout you have to invert them in the OS. The Function keys work differently too. Fx on the external = Fn + Fx on the MacBook keyboard. And then there are all the insert, delete, keys. So, the question is, how do you come around this? Now I remap all the things I want at the System Preferences panel, but when I unplug the external keyboard it's all messed up. Is there a way to remap keys only for the external one? Some model of keyboard can store it's own mappings without needing the OS? Am I the only one who is bothered by this? (I would like to avoid buying an external mac keyboard, because I wanted to try one of the ergonomic models, and as far as I know, there are no mac ergonomic models) Update: Thanks for the responses, I fixed this. To set the control keys for different keyboards, you have to go to System Preferences/Modifier Keys, then the drop down menu Select Keyboard allows you to choose one particular keyboard and set these keys. Works after unpluging/pluging it seems The suggestion from @Matthew Schinckel seems to work for the rest of the issues (function keys, ...). I didn't try it yet, as the commands keys were my biggest gripe. A: In OS X 10.5 they allow you to have different keyboard setups for different keyboards. This works most of the time. I've had issues with very old keyboards that are plugged in via a PS2 to USB but otherwise it works fine. A: The best method I have is to download the Logitech Control Center for OSX from Logitech. Search throw the Installer package for the LCCKCHR.rsrc. Drop this file into either ~/Library/Keyboard Layouts or /Library/Keyboard Layouts. Logout and log back in and you'll notice a few more options in the International System Preferences under Inputs. Check the keyboard layouts you would like. Although this keyboard layout is for Logitech keyboards it works for most keybaords (especially international users) A: You create create your own custom keyboard mapping, which could then be used with the keyboard language menu. So when you plug in your keyboard, just switch to your custom layout. OS X has supported this since 10.2, and Apple has documentation on how to produce your own custom maps. http://developer.apple.com/technotes/tn2002/tn2056.html It's not something I've tried myself, just read about it once or twice. Looks like it could potentially do the job. I'd just duplicate a mapping that is as close as you can get to what you want, and then customise it from there. A: You could investigate DoubleCommand, it may do what you need. There's an experimental version that allows for different properties for different keyboards.
Using an external "windows"-keyboard under Mac OS X
I use a MacBook, but I've got a usual keyboard attached to it. The problem is that the keys don't exactly map 1-to-1. One thing is the APPLE and ALT keys. They map to WIN and ALT, but they are usually physically inverted, so if you want to use them with the same layout you have to invert them in the OS. The Function keys work differently too. Fx on the external = Fn + Fx on the MacBook keyboard. And then there are all the insert, delete, keys. So, the question is, how do you get around this? Now I remap all the things I want at the System Preferences panel, but when I unplug the external keyboard it's all messed up. Is there a way to remap keys only for the external one? Is there some model of keyboard that can store its own mappings without needing the OS? Am I the only one who is bothered by this? (I would like to avoid buying an external mac keyboard, because I wanted to try one of the ergonomic models, and as far as I know, there are no mac ergonomic models) Update: Thanks for the responses, I fixed this. To set the control keys for different keyboards, you have to go to System Preferences/Modifier Keys, then the drop down menu Select Keyboard allows you to choose one particular keyboard and set these keys. Works after unplugging/plugging, it seems. The suggestion from @Matthew Schinckel seems to work for the rest of the issues (function keys, ...). I didn't try it yet, as the command keys were my biggest gripe.
[ "In OS X 10.5 they allow you to have different keyboard setups for different keyboards. This works most of the time. I've had issues with very old keyboards that are plugged in via a PS2 to USB but otherwise it works fine.\n", "The best method I have is to download the Logitech Control Center for OSX from Logitech. Search throw the Installer package for the LCCKCHR.rsrc. Drop this file into either ~/Library/Keyboard Layouts or /Library/Keyboard Layouts. Logout and log back in and you'll notice a few more options in the International System Preferences under Inputs. Check the keyboard layouts you would like.\nAlthough this keyboard layout is for Logitech keyboards it works for most keybaords (especially international users)\n", "You create create your own custom keyboard mapping, which could then be used with the keyboard language menu. So when you plug in your keyboard, just switch to your custom layout. OS X has supported this since 10.2, and Apple has documentation on how to produce your own custom maps.\nhttp://developer.apple.com/technotes/tn2002/tn2056.html\nIt's not something I've tried myself, just read about it once or twice. Looks like it could potentially do the job. I'd just duplicate a mapping that is as close as you can get to what you want, and then customise it from there.\n", "You could investigate DoubleCommand, it may do what you need.\nThere's an experimental version that allows for different properties for different keyboards.\n" ]
[ 26, 6, 3, 1 ]
[]
[]
[ "ergonomics", "keyboard", "macos" ]
stackoverflow_0000056553_ergonomics_keyboard_macos.txt
Q: Which is the best database schema for my navigation? I'm creating a web site where all pages hang off a database-driven tree-hierarchy. All but one node has a parent node. Nodes may have role-based read permissions. Some nodes may have special rules (such as: don't display within navigation menus). Nodes may represent links to other nodes (like a shortcut in Windows). Nodes typically represent pages. Pages present either HTML content or execute programming. Some pages may be roots of subtrees (alternate masterpages and stylesheets). Please help me set up my nodes database in Microsoft SQL Server for use by Linq to SQL. I've got three ideas: Many lightweight tables with almost zero nullable fields. Heavyweight Node table with lots of nullable fields. Best (or worst) of both: Lots of nullable foreign keys to many lightweight tables. Which do you feel best represents the data? Which will be easiest to use with Linq to SQL? How can I keep my data integrity rules within the database? How do I best enforce them within my programming? Nodes must be either (but not both) links or pages. Pages must be either (but not both) html or code. Links may not be roots, html, nor code. Can I make an ASP.NET Site Map Provider with such a structure? Should I? Update: I've asked a more general question: What’s the best way to handle one-to-one relationships in SQL? Related question: How do I enforce data integrity rules in my database? A: My initial impression after reading your post is that I would be very averse to letting any one technology (in this case linq) heavily influence the database schema design to the extent you seem to be suggesting. I think your schema should be pretty much the same, regardless of what technology you then choose to build your business/presentation layers. I hope I haven't misunderstood you.
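A note on keeping the either/or rules inside the database itself: the classic supertype/subtype pattern with a type discriminator can express them declaratively. The sketch below is illustrative only; every table and column name is invented, and it is one possible shape for the rules in the question, not a definitive design:

    -- Illustrative T-SQL sketch: invented names throughout.
    CREATE TABLE Node (
        NodeId   int     NOT NULL PRIMARY KEY,
        ParentId int     NULL REFERENCES Node(NodeId),
        NodeType char(1) NOT NULL CHECK (NodeType IN ('L', 'P')) -- Link or Page
    );
    -- Composite uniqueness lets each subtype table bind to the right NodeType.
    ALTER TABLE Node ADD CONSTRAINT UQ_Node UNIQUE (NodeId, NodeType);
    CREATE TABLE LinkNode (
        NodeId   int     NOT NULL PRIMARY KEY,
        NodeType char(1) NOT NULL CHECK (NodeType = 'L'),
        TargetId int     NOT NULL REFERENCES Node(NodeId),
        FOREIGN KEY (NodeId, NodeType) REFERENCES Node(NodeId, NodeType)
    );
    CREATE TABLE PageNode (
        NodeId   int     NOT NULL PRIMARY KEY,
        NodeType char(1) NOT NULL CHECK (NodeType = 'P'),
        PageType char(1) NOT NULL CHECK (PageType IN ('H', 'C')), -- Html or Code
        FOREIGN KEY (NodeId, NodeType) REFERENCES Node(NodeId, NodeType)
    );

With this shape a node can never be both a link and a page, and a page is exactly one of HTML or code. One gap to be aware of: nothing here forces a subtype row to exist at all, so an orphan Node row still needs a trigger or an application-level check.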
Which is the best database schema for my navigation?
I'm creating a web site where all pages hang off a database-driven tree-hierarchy. All but one node has a parent node. Nodes may have role-based read permissions. Some nodes may have special rules (such as: don't display within navigation menus). Nodes may represent links to other nodes (like a shortcut in Windows). Nodes typically represent pages. Pages present either HTML content or execute programming. Some pages may be roots of subtrees (alternate masterpages and stylesheets). Please help me set up my nodes database in Microsoft SQL Server for use by Linq to SQL. I've got three ideas: Many lightweight tables with almost zero nullable fields. Heavyweight Node table with lots of nullable fields. Best (or worst) of both: Lots of nullable foreign keys to many lightweight tables. Which do you feel best represents the data? Which will be easiest to use with Linq to SQL? How can I keep my data integrity rules within the database? How do I best enforce them within my programming? Nodes must be either (but not both) links or pages. Pages must be either (but not both) html or code. Links may not be roots, html, nor code. Can I make an ASP.NET Site Map Provider with such a structure? Should I? Update: I've asked a more general question: What’s the best way to handle one-to-one relationships in SQL? Related question: How do I enforce data integrity rules in my database?
[ "My initial impression after reading your post is that I would be very averse to let any one technology (in this case linq) heavily influence the database schema design to the extent you seem to be suggesting.\nI think your schema should be pretty much the same, regardless of what technology you then chose to build your business/presentation layers.\nI hope I haven't misunderstood you.\n" ]
[ 4 ]
[]
[]
[ ".net", "database", "database_design", "linq_to_sql", "sql" ]
stackoverflow_0000056981_.net_database_database_design_linq_to_sql_sql.txt
Q: What is the best way to send large batches of emails in ASP.NET? I'm currently looping through a datareader and calling the System.Net.Mail.SmtpClient's Send() method. The problem with this is that it's slow. Each email takes about 5-10 seconds to send (it's possible this is just an issue with my host). I had to override the executionTimeout default in my web.config file (it defaults to 90 seconds) like this: <httpRuntime executionTimeout="3000" /> One caveat: I'm on a shared host, so I don't think it is possible for me to send using the PickupDirectoryFromIis option (at least, it gave me errors when I turned it on). A: You could send the mail asynchronously. That way the timeout should not interrupt your sending. This article should help you get started with that: Sending Emails Asynchronously in C#. There is another approach here: http://www.vikramlakhotia.com/Sending_Email_asynchronously_in_AspNet_20.aspx And of course there are several commercial clients available, but the only one that I have tried and can recommend is http://www.aspnetemail.com/ A: Definitely spawn it off on a background worker process so they go out asynchronously. BTW, 5-10 seconds per e-mail seems way slow to me. On my server it takes just fractions of a second per e-mail.
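For concreteness, a minimal sketch of the asynchronous route the first answer suggests, using SmtpClient.SendAsync (available since .NET 2.0). The addresses and the recipient collection are placeholders, not working values:

    using System.ComponentModel;
    using System.Net.Mail;

    // One client per message: a single SmtpClient throws if SendAsync is
    // called again while a previous send is still in progress.
    foreach (string address in recipientList) // placeholder collection
    {
        SmtpClient client = new SmtpClient();
        MailMessage message = new MailMessage("from@example.com", address,
                                              "Subject", "Body");
        client.SendCompleted += delegate(object s, AsyncCompletedEventArgs e)
        {
            if (e.Error != null) { /* log and continue with the batch */ }
            message.Dispose();
        };
        client.SendAsync(message, null); // returns immediately
    }

Note the sends still run inside the ASP.NET worker process, so for very large batches a separate background job remains the more robust route, as the second answer says.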
What is the best way to send large batches of emails in ASP.NET?
I'm currently looping through a datareader and calling the System.Net.Mail.SmtpClient's Send() method. The problem with this is that it's slow. Each email takes about 5-10 seconds to send (it's possible this is just an issue with my host). I had to override the executionTimeout default in my web.config file (it defaults to 90 seconds) like this: <httpRuntime executionTimeout="3000" /> One caveat: I'm on a shared host, so I don't think it is possible for me to send using the PickupDirectoryFromIis option (at least, it gave me errors when I turned it on).
[ "You could send the mail asynchronous. That way the timeout should not interrupt your sending.\nThis article should help you get started with that: Sending Emails Asynchronously in C#.\nThere is another approach here: http://www.vikramlakhotia.com/Sending_Email_asynchronously_in_AspNet_20.aspx\nAnd off course there are several commercial clients available, but the only one that i have tried and can recommend is http://www.aspnetemail.com/\n", "Definitely spawn it off on a background worker process so they go out asynchronously. \nBTW, 5-10 seconds per e-mail seems way slow to me. On my server it takes just fractions of a second per e-mail. \n" ]
[ 6, 0 ]
[]
[]
[ "asp.net", "batch_file", "email" ]
stackoverflow_0000056975_asp.net_batch_file_email.txt
Q: Context.User losing Roles after being assigned in Global.asax.Application_AuthenticateRequest I am using Forms authentication in my asp.net (3.5) application. I am also using roles to define what user can access which subdirectories of the app. Thus, the pertinent sections of my web.config file look like this: <system.web> <authentication mode="Forms"> <forms loginUrl="Default.aspx" path="/" protection="All" timeout="360" name="MyAppName" cookieless="UseCookies" /> </authentication> <authorization > <allow users="*"/> </authorization> </system.web> <location path="Admin"> <system.web> <authorization> <allow roles="Admin"/> <deny users="*"/> </authorization> </system.web> </location> Based on what I have read, this should ensure that the only users able to access the Admin directory will be users who have been Authenticated and assigned the Admin role. User authentication, saving the authentication ticket, and other related issues all work fine. If I remove the tags from the web.config file, everything works fine. The problem comes when I try to enforce that only users with the Admin role should be able to access the Admin directory. Based on this MS KB article along with other webpages giving the same information, I have added the following code to my Global.asax file: protected void Application_AuthenticateRequest(Object sender, EventArgs e) { if (HttpContext.Current.User != null) { if (Request.IsAuthenticated == true) { // Debug#1 FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(Context.Request.Cookies[FormsAuthentication.FormsCookieName].Value); // In this case, ticket.UserData = "Admin" string[] roles = new string[1] { ticket.UserData }; FormsIdentity id = new FormsIdentity(ticket); Context.User = new System.Security.Principal.GenericPrincipal(id, roles); // Debug#2 } } } However, when I try to log in, I am unable to access the Admin folder (get redirected to login page). Trying to debug the issue, if I step through a request, if I execute Context.User.IsInRole("Admin") at the line marked Debug#1 above, it returns a false. If I execute the same statement at line Debug#2, it equals true. So at least as far as Global.asax is concerned, the Role is being assigned properly. After Global.asax, execution jumps right to the Login page (since the lack of role causes the page load in the admin folder to be rejected). However, when I execute the same statement on the first line of Page_Load of the login, it returns false. So somewhere after Application_AuthenticateRequest in Global.asax and the initial load of the WebForm in the restricted directory, the role information is being lost, causing authentication to fail (note: in Page_Load, the proper Authentication ticket is still assigned to Context.User.Id - only the role is being lost). What am I doing wrong, and how can I get it to work properly? Update: I entered the solution below A: Here was the problem and solution: Earlier in development I had gone to the Website menu and clicked on Asp.net configuration. This resulted in the following line being added to the web.config: <system.web> <roleManager enabled="true" /> </system.web> From that point on, the app was assuming that I was doing roles through the Asp.net site manager, and not through FormsAuthentication roles. Thus the repeated failures, despite the fact that the actual authentication and roles logic was set up correctly. After this line was removed from web.config everything worked perfectly. 
A: This is just a random shot, but are you getting blocked because of the order of authorization for Admin? Maybe you should try switching your deny-all and your allow-Admin rules. Just in case it's getting overwritten by the deny. (I had code samples in here but they weren't showing up.)
Context.User losing Roles after being assigned in Global.asax.Application_AuthenticateRequest
I am using Forms authentication in my asp.net (3.5) application. I am also using roles to define what user can access which subdirectories of the app. Thus, the pertinent sections of my web.config file look like this: <system.web> <authentication mode="Forms"> <forms loginUrl="Default.aspx" path="/" protection="All" timeout="360" name="MyAppName" cookieless="UseCookies" /> </authentication> <authorization > <allow users="*"/> </authorization> </system.web> <location path="Admin"> <system.web> <authorization> <allow roles="Admin"/> <deny users="*"/> </authorization> </system.web> </location> Based on what I have read, this should ensure that the only users able to access the Admin directory will be users who have been Authenticated and assigned the Admin role. User authentication, saving the authentication ticket, and other related issues all work fine. If I remove the tags from the web.config file, everything works fine. The problem comes when I try to enforce that only users with the Admin role should be able to access the Admin directory. Based on this MS KB article along with other webpages giving the same information, I have added the following code to my Global.asax file: protected void Application_AuthenticateRequest(Object sender, EventArgs e) { if (HttpContext.Current.User != null) { if (Request.IsAuthenticated == true) { // Debug#1 FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(Context.Request.Cookies[FormsAuthentication.FormsCookieName].Value); // In this case, ticket.UserData = "Admin" string[] roles = new string[1] { ticket.UserData }; FormsIdentity id = new FormsIdentity(ticket); Context.User = new System.Security.Principal.GenericPrincipal(id, roles); // Debug#2 } } } However, when I try to log in, I am unable to access the Admin folder (get redirected to login page). Trying to debug the issue, if I step through a request, if I execute Context.User.IsInRole("Admin") at the line marked Debug#1 above, it returns a false. If I execute the same statement at line Debug#2, it equals true. So at least as far as Global.asax is concerned, the Role is being assigned properly. After Global.asax, execution jumps right to the Login page (since the lack of role causes the page load in the admin folder to be rejected). However, when I execute the same statement on the first line of Page_Load of the login, it returns false. So somewhere after Application_AuthenticateRequest in Global.asax and the initial load of the WebForm in the restricted directory, the role information is being lost, causing authentication to fail (note: in Page_Load, the proper Authentication ticket is still assigned to Context.User.Id - only the role is being lost). What am I doing wrong, and how can I get it to work properly? Update: I entered the solution below
[ "Here was the problem and solution: \nEarlier in development I had gone to the Website menu and clicked on Asp.net configuration. This resulted in the following line being added to the web.config: \n<system.web>\n <roleManager enabled=\"true\" />\n</system.web>\n\nFrom that point on, the app was assuming that I was doing roles through the Asp.net site manager, and not through FormsAuthentication roles. Thus the repeated failures, despite the fact that the actual authentication and roles logic was set up correctly.\nAfter this line was removed from web.config everything worked perfectly.\n", "this is just a random shot, but are you getting blocked because of the order of authorization for Admin? Maybe you should try switching your deny all and your all Admin.\nJust in case it's getting overwritten by the deny.\n(I had code samples in here but they weren't showing up.\n" ]
[ 5, 0 ]
[]
[]
[ ".net", "asp.net", "c#", "forms_authentication", "roles" ]
stackoverflow_0000056271_.net_asp.net_c#_forms_authentication_roles.txt
Q: jQuery and Java applets I'm working on a project where we're using a Java applet for part of the UI (a map, specifically), but building the rest of the UI around the applet in HTML/JavaScript, communicating with the applet through LiveConnect/NPAPI. A little bizarre, I know, but let's presume that setup is not under discussion. I started out planning on using jQuery as my JavaScript framework, but I've run into two issues. Issue the first: Selecting the applet doesn't provide access to the applet's methods. Java: public class MyApplet extends JApplet { // ... public String foo() { return "foo!"; } } JavaScript: var applet = $("#applet-id"); alert(applet.foo()); Running the above JavaScript results in $("#applet-id").foo is not a function This is in contrast to Prototype, where the analogous code does work: var applet = $("applet-id"); alert(applet.foo()); So...where'd the applet methods go? Issue the second: There's a known problem with jQuery and applets in Firefox 2: http://www.pengoworks.com/workshop/jquery/bug_applet/jquery_applet_bug.htm It's a long shot, but does anybody know of a workaround? I suspect this problem isn't fixable, which will mean switching to Prototype. Thanks for the help! A: For the first issue, how about trying alert( $("#applet-id")[0].foo() ); For the second issue here is a thread with a possible workaround. Quoting the workaround // Prevent memory leaks in IE // And prevent errors on refresh with events like mouseover in other browsers // Window isn't included so as not to unbind existing unload events jQuery(window).bind("unload", function() { jQuery("*").add(document).unbind(); }); change that code to: // Window isn't included so as not to unbind existing unload events jQuery(window).bind("unload", function() { jQuery("*:not('applet, object')").add(document).unbind(); });
jQuery and Java applets
I'm working on a project where we're using a Java applet for part of the UI (a map, specifically), but building the rest of the UI around the applet in HTML/JavaScript, communicating with the applet through LiveConnect/NPAPI. A little bizarre, I know, but let's presume that setup is not under discussion. I started out planning on using jQuery as my JavaScript framework, but I've run into two issues. Issue the first: Selecting the applet doesn't provide access to the applet's methods. Java: public class MyApplet extends JApplet { // ... public String foo() { return "foo!"; } } JavaScript: var applet = $("#applet-id"); alert(applet.foo()); Running the above JavaScript results in $("#applet-id").foo is not a function This is in contrast to Prototype, where the analogous code does work: var applet = $("applet-id"); alert(applet.foo()); So...where'd the applet methods go? Issue the second: There's a known problem with jQuery and applets in Firefox 2: http://www.pengoworks.com/workshop/jquery/bug_applet/jquery_applet_bug.htm It's a long shot, but does anybody know of a workaround? I suspect this problem isn't fixable, which will mean switching to Prototype. Thanks for the help!
[ "For the first issue, how about trying\nalert( $(\"#applet-id\")[0].foo() );\n\nFor the second issue here is a thread with a possible workaround.\nQuoting the workaround\n\n// Prevent memory leaks in IE\n// And prevent errors on refresh with events like mouseover in other browsers\n// Window isn't included so as not to unbind existing unload events\njQuery(window).bind(\"unload\",\nfunction() {\n jQuery(\"*\").add(document).unbind();\n});\n\n\nchange that code to:\n\n// Window isn't included so as not to unbind existing unload events\njQuery(window).bind(\"unload\",\nfunction() {\n jQuery(\"*:not('applet, object')\").add(document).unbind();\n});\n\n\n" ]
[ 12 ]
[]
[]
[ "applet", "java", "javascript", "jquery" ]
stackoverflow_0000057034_applet_java_javascript_jquery.txt
Q: How can I find what search terms (if any) brought a user to my site? I want to create dynamic content based on this. I know it's somewhere, as web analytics engines can get this data to determine how people got to your site (referrer, search terms used, etc.), but I don't know how to get at it myself. A: You can use the "referer" part of the request that the user sent to figure out what he searched for. Example from Google: http://www.google.no/search?q=stack%20overflow So you must search the string (in ASP(.NET) this can be found by looking at Request.UrlReferrer) for "q=" and then URLDecode the contents. Also, you should take a look at this article that talks more about referrers and also other methods to track your visitors: http://www.15seconds.com/issue/021119.htm A: This is some code to back up the idea of using a querystring method and, if that's not available, using the UrlReferrer property of the Request object. This can then be stashed in a session object (or somewhere else if that works better for you) so that you can track the source between pages. (Page_Load doesn't seem to be formatted correctly inside the code sample here) public void Page_Load(Object Sender, EventArgs E) { if (null == Session["source"] || Session["source"].ToString().Equals(string.Empty)) { if (Request.QueryString["src"] != null) { Session["source"] = Server.UrlDecode(Request.QueryString["src"].ToString()); } else { if (Request.UrlReferrer != null) { Session["source"] = Request.UrlReferrer.ToString(); } else { Session["source"] = string.Empty; } } }}
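A hedged sketch of what the first answer describes, in ASP.NET code. The set of query keys checked is an assumption (Google used q and Yahoo used p at the time), not a complete list:

    using System.Collections.Specialized;
    using System.Web;

    string searchTerms = null;
    if (Request.UrlReferrer != null)
    {
        // ParseQueryString handles the URL-decoding for you.
        NameValueCollection query =
            HttpUtility.ParseQueryString(Request.UrlReferrer.Query);
        searchTerms = query["q"] ?? query["p"]; // engine-specific keys
    }
    // searchTerms == null means a direct visit, a suppressed referrer,
    // or an engine whose key isn't in the list above.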
How can I find what search terms (if any) brought a user to my site?
I want to create dynamic content based on this. I know it's somewhere, as web analytics engines can get this data to determine how people got to your site (referrer, search terms used, etc.), but I don't know how to get at it myself.
[ "You can use the \"referer\" part of the request that the user sent to figure out what he searched for. Example from Google:\n\nhttp://www.google.no/search?q=stack%20overflow\n\nSo you must search the string (in ASP(.NET) this can be found be looking in Request.Referer) for \"q=\" and then URLDecode the contents.\nAlso, you should take a look at this article that talks more about referrers and also other methods to track your visitors:\nhttp://www.15seconds.com/issue/021119.htm\n", "This is some code to backup the idea of using a querystring method and if that's not available using the UrlReferrer property of the Request object. This can then be stashed in a session object (or somewhere else if that works better for you) so that you can track the source between pages. (Page_Load doesn't seem to be formatted correctly inside the code sample here)\npublic void Page_Load(Object Sender, EventArgs E) {\n if (null == Session[\"source\"] || Session[\"source\"].ToString().Equals(string.Empty)) {\n if (Request.QueryString[\"src\"] != null) {\n Session[\"source\"] = Server.UrlDecode(Request.QueryString[\"src\"].ToString());\n } else {\n if (Request.UrlReferrer != null) {\n Session[\"source\"] = Request.UrlReferrer.ToString();\n } else {\n Session[\"source\"] = string.Empty;\n }\n }\n }}\n\n" ]
[ 7, 0 ]
[]
[]
[ "analytics", "search_engine" ]
stackoverflow_0000057004_analytics_search_engine.txt
Q: Can't access variable in C++ DLL from a C app I'm stuck on a fix to a legacy Visual C++ 6 app. In the C++ DLL source I have put extern "C" _declspec(dllexport) char* MyNewVariable = 0; which results in MyNewVariable showing up (nicely undecorated) in the export table (as shown by dumpbin /exports blah.dll). However, I can't figure out how to declare the variable so that I can access it in a C source file. I have tried various things, including _declspec(dllimport) char* MyNewVariable; but that just gives me a linker error: unresolved external symbol "__declspec(dllimport) char * MyNewVariable" (__imp_?MyNewVariable@@3PADA) extern "C" _declspec(dllimport) char* MyNewVariable; as suggested by Tony (and as I tried before) results in a different expected decoration, but still hasn't removed it: unresolved external symbol __imp__MyNewVariable How do I write the declaration so that the C++ DLL variable is accessible from the C app? The Answer As identified by botismarius and others (many thanks to all), I needed to link with the DLL's .lib. To prevent the name being mangled I needed to declare it (in the C source) with no decorators, which means I needed to use the .lib file. A: You must link against the lib generated after compiling the DLL. In the linker options of the project, you must add the .lib file. And yes, you should also declare the variable as: extern "C" { __declspec(dllimport) char* MyNewVariable; } A: extern "C" is how you remove decoration - it should work to use: extern "C" __declspec(dllimport) char* MyNewVariable; or if you want a header that can be used by C++ or C (with /TC switch) #ifdef __cplusplus extern "C" { #endif __declspec(dllimport) char* MyNewVariable; #ifdef __cplusplus } #endif And of course, link with the import library generated by the DLL doing the export. A: I'm not sure who downmodded botismarius, because he's right. The reason is the .lib generated is the import library that makes it easy to simply declare the external variable/function with __declspec(dllimport) and just use it. The import library simply automates the necessary LoadLibrary() and GetProcAddress() calls. Without it, you need to call these manually. A: They're both right. The fact that the error message describes __imp_?MyNewVariable@@3PADA means that it's looking for the decorated name, so the extern "C" is necessary. However, linking with the import library is also necessary or you'll just get a different link error. A: @Graeme: You're right on that, too. I think the "C" compiler that the OP is using is not enforcing C99 standard, but compiling as C++, thus mangling the names. A true C compiler wouldn't understand the "C" part of the extern "C" keyword. A: In the dll source code you should have this implementation so that the .lib file exports the symbol: extern "C" _declspec(dllexport) char* MyNewVariable = 0; The C client should use a header with this declaration so that the client code will import the symbol: extern "C" _declspec(dllimport) char* MyNewVariable; This header will cause a compile error if #include-ed in the dll source code, so it is usually put in an export header that is used only for exported functions and only by clients.
If you need to, you can also create a "universal" header that can be included anywhere that looks like this: #ifdef __cplusplus extern "C" { #endif #ifdef dll_source_file #define EXPORTED __declspec(dllexport) #else #define EXPORTED __declspec(dllimport) #endif // dll_source_file #ifdef __cplusplus } #endif EXPORTED char* MyNewVariable; Then the dll source code looks like this: #define dll_source_file #include "universal_header.h" EXPORTED char* MyNewVariable = 0; And the client looks like this: #include "universal_header.h" ... MyNewVariable = "Hello, world"; If you do this a lot, the monster #ifdef at the top can go in export_magic.h and universal_header.h becomes: #include "export_magic.h" EXPORTED char *MyNewVariable; A: I've never used _declspec(dllimport) when I was programming in Windows. You should be able to simply declare extern "C" char* MyNewVariable; and link to the .lib created when the DLL was compiled.
Can't access variable in C++ DLL from a C app
I'm stuck on a fix to a legacy Visual C++ 6 app. In the C++ DLL source I have put extern "C" _declspec(dllexport) char* MyNewVariable = 0; which results in MyNewVariable showing up (nicely undecorated) in the export table (as shown by dumpbin /exports blah.dll). However, I can't figure out how to declare the variable so that I can access it in a C source file. I have tried various things, including _declspec(dllimport) char* MyNewVariable; but that just gives me a linker error: unresolved external symbol "__declspec(dllimport) char * MyNewVariable" (__imp_?MyNewVariable@@3PADA) extern "C" _declspec(dllimport) char* MyNewVariable; as suggested by Tony (and as I tried before) results in a different expected decoration, but still hasn't removed it: unresolved external symbol __imp__MyNewVariable How do I write the declaration so that the C++ DLL variable is accessible from the C app? The Answer As identified by botismarius and others (many thanks to all), I needed to link with the DLL's .lib. To prevent the name being mangled I needed to declare it (in the C source) with no decorators, which means I needed to use the .lib file.
[ "you must link against the lib generated after compiling the DLL. In the linker options of the project, you must add the .lib file. And yes, you should also declare the variable as:\nextern \"C\" { declspec(dllimport) char MyNewVariable; }\n\n", "extern \"C\" is how you remove decoration - it should work to use:\nextern \"C\" declspec(dllimport) char MyNewVariable;\nor if you want a header that can be used by C++ or C (with /TC switch)\n#ifdef __cplusplus\nextern \"C\" {\n#endif\ndeclspec(dllimport) char MyNewVariable;\n#ifdef __cplusplus\n}\n#endif\n\nAnd of course, link with the import library generated by the dll doing the export.\n", "I'm not sure who downmodded botismarius, because he's right. The reason is the .lib generated is the import library that makes it easy to simply declare the external variable/function with __declspec(dllimport) and just use it. The import library simply automates the necessary LoadLibrary() and GetProcAddress() calls. Without it, you need to call these manually.\n", "They're both right. The fact that the error message describes __imp_?MyNewVariable@@3PADA means that it's looking for the decorated name, so the extern \"C\" is necessary. However, linking with the import library is also necessary or you'll just get a different link error.\n", "@Graeme: You're right on that, too. I think the \"C\" compiler that the OP is using is not enforcing C99 standard, but compiling as C++, thus mangling the names. A true C compiler wouldn't understand the \"C\" part of the extern \"C\" keyword.\n", "In the dll source code you should have this implementation so that the .lib file exports the symbol:\nextern \"C\" _declspec(dllexport) char* MyNewVariable = 0;\n\nThe c client should use a header with this declaration so that the client code will import the symbol:\nextern \"C\" _declspec(dllimport) char* MyNewVariable;\n\nThis header will cause a compile error if #include-ed in the dll source code, so it is usually put in an export header that is used only for exported functions and only by clients.\nIf you need to, you can also create a \"universal\" header that can be included anywhere that looks like this:\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n#ifdef dll_source_file\n#define EXPORTED declspec(dllexport) \n#else\n#define EXPORTED declspec(dllimport) \n#endif dll_source_file\n#ifdef __cplusplus\n}\n#endif\n\nEXPORTED char* MyNewVariable;\n\nThen the dll source code looks like this:\n#define dll_source_code \n#include \"universal_header.h\"\n\nEXPORTED char* MyNewVariable = 0;\n\nAnd the client looks like this:\n#include \"universal_header.h\"\n...\nMyNewVariable = \"Hello, world\";\n\nIf you do this a lot, the monster #ifdef at the top can go in export_magic.h and universal_header.h becomes:\n#include \"export_magic.h\"\n\nEXPORTED char *MyNewVariable;\n\n", "I've never used _declspec(dllimport) when I was programming in Windows. You should be able to simply declare \nextern \"C\" char* MyNewVariable;\n\nand link to the .libb created when DLL was compiled.\n" ]
[ 5, 4, 3, 1, 1, 1, 0 ]
[]
[]
[ "c", "c++", "interop", "name_decoration" ]
stackoverflow_0000056500_c_c++_interop_name_decoration.txt
Q: Do I need to leave gaps in a standard server rack? We have a 42U rack which is getting a load of new 1U and 2U servers real soon. One of the guys here reckons that you need to leave a gap between the servers (of 1U) to aid cooling. Question is, do you? When looking around the datacenter, no-one else seems to be, and it also diminishes how much we can fit in. We're using Dell 1850 and 2950 hardware. A: Simply NO, the servers and switches, and KVMs, and PSUs are all designed to be on the rack stacked on top of each other. I'm basing this on a few years building COs and data centers for AT&T. A: You don't need to leave a gap between systems for gear designed to be rack-mountable. If you were building the systems yourself you'd need to select components carefully: some CPU+motherboards run too hot even if they can physically fit inside a 1U case. Dell gear will be fine. You do need to keep the space between and behind the racks clear of clutter. Most servers today channel their airflow front to back; if you don't leave enough open air behind the rack it will get very hot back there and reduce the cooling capacity. On a typical 48 port switch the front panel is covered with RJ-45 connectors and the back by redundant power connections, PoE power tray hookups, stacking ports and uplinks. Many 1U network switches route their airflow side-to-side, because they can't get enough air through the maze of connectors front-to-back. So you also need to make sure the channels beside the rack are relatively open, to let the switches get enough airflow. In a crowded server rack, tidiness is important. A: I agree with Unkwntech that gaps are not normally required, but I think there are two things to watch out for: 1) Equipment that is not as deep as the rest may have trouble ventilating if mounted below deeper equipment (see below). This is of course less of a concern in a well ventilated server room. TOP OF RACK =============== =============== =============== =============== =============== ======== (Shallow equipment, trapped hot air) 2) When mounting equipment in a cabinet, you usually need to leave a few inches clear at the top to allow proper ventilation. A: Generally, no. That's kind of the whole point of a 1U server: if it needed extra space (even for cooling) they'd give it a bigger chassis and call it 2U. In some designs, where the airflow is controlled and only the rack is supposed to be cooled, the gap is even counter-productive, as it allows for the warm air from the back to flow and mix with the cool air in the front, reducing cooling efficiency. Even when you have gaps for logical groupings, you're supposed to plug them with blank panels to control the airflow. Unfortunately, in practice some whitebox vendors occasionally push too hard for that 1U designation, and you'll find that if you stack too many too close together without the occasional gap for airflow you have issues. This isn't a problem with good quality servers and an adequate cooling design, but the bottom end of the market might surprise you. A: The last two places I worked have large datacenters and they stack all their servers and appliances with no gaps. The servers have plenty of cooling with their internal fans. It is also recommended to run the rack on a raised floor with perforated tiles in the front of the rack and A/C air return above the rear of the racks for circulation.
Do I need to leave gaps in a standard server rack?
We have a 42U rack which is getting a load of new 1U and 2U servers real soon. One of the guys here reckons that you need to leave a gap between the servers (of 1U) to aid cooling. Question is, do you? When looking around the datacenter, no-one else seems to be, and it also diminishes how much we can fit in. We're using Dell 1850 and 2950 hardware.
[ "Simply NO, the servers and switches, and KVMs, and PSUs are all designed to be on the rack stacked on top of eachother. I'm basing this on a few years building, COs and Data centers for AT&T.\n", "You don't need to leave a gap between systems for gear designed to be rack-mountable. If you were building the systems yourself you'd need to select components carefully: some CPU+motherboards run too hot even if they can physically fit inside a 1U case.\nDell gear will be fine.\nYou do need to keep the space between and behind the racks clear of clutter. Most servers today channel their airflow front to back, if you don't leave enough open air behind the rack it will get very hot back there and reduce the cooling capacity.\nOn a typical 48 port switch the front panel is covered with RJ-45 connectors and the back by redundant power connections, PoE power tray hookups, stacking ports and uplinks. Many 1U network switches route their airflow side-to-side, because they can't get enough air through the maze of connectors front-to-back. So you also need to make sure the channels beside the rack are relatively open, to let the switches get enough airflow.\nIn a crowded server rack, tidiness is important.\n", "I agree with Unkwntech that gaps are not normally required, but I think there are two things to watch out for:\n1) Equipment that is not as deep as the rest may have trouble ventilating if mounted below deeper equipment (see below). This is of course less of a concern in a well ventilated server room.\nTOP OF RACK\n===============\n===============\n===============\n===============\n===============\n======== (Shallow equipment, trapped hot air)\n\n2) When mounting equipment in a cabinet, you usually need to leave a few inches clear at the top to allow proper ventilation.\n", "Generally, no. That's kind of the whole point a of 1U server: if it needed extra space (even for cooling) they'd give it a bigger chassis and call it 2U. In some designs, where the airflow is controlled and only the rack is supposed to be cooled, the gap is even counter-productive, as it allows for the warm air from the back to flow and mix with the cool air in the front, reducing cooling efficiency. Even when you have gaps for logical groupings, you're supposed to plug them with blank panels to the control the airflow.\nUnfortunately, in practice some whitebox vendors occasionally push too hard for that 1U designation, and you'll find that if you stack too many too close together without the occasional gap for airflow you have issues. This isn't a problem with good quality servers and an adequate cooling design, but the bottom end of the market might surprise you.\n", "The last two places I worked have large datacenters and they stack all their servers and appliances with no gaps. The servers have plenty of cooling with their internal fans. It is also recommended to run the rack on a raised floor with perforated tiles in the front of the rack and A/C air return above the rear of the racks for circulation.\n" ]
[ 13, 4, 4, 3, 1 ]
[]
[]
[ "hosting", "rack" ]
stackoverflow_0000056786_hosting_rack.txt
Q: Getting Information From Master File Table on Windows I need to get some information that is contained in the MFT on a Windows machine, and I'm hoping that there is some super-secret API for getting this information. I need to be able to get to this information programmatically, and because of legal concerns I might not be able to use the tools provided by the company formerly known as Sysinternals. My other option (which I really don't want to have to do) is to get the start sector of the MFT with DeviceIoControl, and manually parse through the information. Anyway, in particular, what I really need to get out of the Master File Table is the logical sectors used to hold the data that is associated with a file. A: There is a documented API for getting info on file positions on disk since Windows 2000. Look for DeviceIoControl function with FSCTL_GET_RETRIEVAL_POINTERS control code on MSDN: http://msdn.microsoft.com/en-us/library/aa364572(VS.85).aspx The API has been provided for writing custom disk defragmenters and consists of several other control codes.
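To make the documented route concrete, here is a rough C++ sketch of the FSCTL_GET_RETRIEVAL_POINTERS call. Error handling is trimmed, and a real caller must retry with a bigger buffer when DeviceIoControl fails with ERROR_MORE_DATA:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    void PrintFileExtents(void)
    {
        HANDLE hFile = CreateFileW(L"C:\\some\\file.dat", FILE_READ_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                   OPEN_EXISTING, 0, NULL);
        if (hFile == INVALID_HANDLE_VALUE) return;

        STARTING_VCN_INPUT_BUFFER input = { 0 }; // start at the first VCN
        BYTE buffer[4096];                       // holds RETRIEVAL_POINTERS_BUFFER
        DWORD bytes = 0;
        if (DeviceIoControl(hFile, FSCTL_GET_RETRIEVAL_POINTERS,
                            &input, sizeof(input), buffer, sizeof(buffer),
                            &bytes, NULL))
        {
            RETRIEVAL_POINTERS_BUFFER* rpb = (RETRIEVAL_POINTERS_BUFFER*)buffer;
            LONGLONG vcn = rpb->StartingVcn.QuadPart;
            for (DWORD i = 0; i < rpb->ExtentCount; i++)
            {
                // Clusters [vcn, NextVcn) start at disk cluster Lcn;
                // an Lcn of -1 marks a hole in a sparse/compressed file.
                printf("VCN %I64d -> LCN %I64d\n",
                       vcn, rpb->Extents[i].Lcn.QuadPart);
                vcn = rpb->Extents[i].NextVcn.QuadPart;
            }
        }
        CloseHandle(hFile);
    }

Each extent is expressed in clusters, not sectors: multiply the LCNs by the volume's sectors-per-cluster (from GetDiskFreeSpace) to get the logical sectors the question asks about.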
Getting Information From Master File Table on Windows
I need to get some information that is contained in the MFT on a Windows machine, and I'm hoping that there is some super-secret API for getting this information. I need to be able to get to this information programmatically, and because of legal concerns I might not be able to use the tools provided by the company formerly known as Sysinternals. My other option (which I really don't want to have to do) is to get the start sector of the MFT with DeviceIoControl, and manually parse through the information. Anyway, in particular, what I really need to get out of the Master File Table is the logical sectors used to hold the data that is associated with a file.
[ "There is a documented API for getting info on file positions on disk since Windows 2000. Look for DeviceIoControl function with FSCTL_GET_RETRIEVAL_POINTERS control code on MSDN:\nhttp://msdn.microsoft.com/en-us/library/aa364572(VS.85).aspx\nThe API has been provided for writing custom disk defragmenters and consists of several other control codes.\n" ]
[ 2 ]
[]
[]
[ "filesystems", "windows" ]
stackoverflow_0000057007_filesystems_windows.txt
Q: How do I make dynamic content with dynamic navigation? I'm creating an ASP.NET web site where all pages hang off a database-driven tree-hierarchy. Pages typically present HTML content. But, some will execute programming. Examples: a "contact us" form a report generator How should I represent/reference the programming within the database? Should I have a varchar value of a Web User Control (.ascx) name? Or a Web Form (.aspx) name? Something else? Or should it just be an integer or other such ID in a dictionary within my application? Can I make an ASP.NET Site Map Provider with this structure? See more information here: Which is the best database schema for my navigation? A: You might consider inserting placeholders like <my:contact-us-form/> in the database on specific pages; that way the database can describe all the static text content instead of completely replacing that database-driven content with an .ascx control. A: Our development team has had success with defining the name of a Web User Control in the database. Upon page load it checks to see what controls to dynamically load from the database. We use Web User Controls instead of Web Forms in order to ensure we can use the control on any page. You can also dynamically build a site map using ASP.Net's provider. CodeProject has a good example.
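If the varchar-control-name route is taken, the load itself is small. A sketch; the placeholder control, the stored path, and the lookup helper are invented names for illustration:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class NodePage : Page
    {
        protected PlaceHolder contentArea; // declared in the .aspx markup

        protected void Page_Init(object sender, EventArgs e)
        {
            // Hypothetical lookup of the stored path, e.g. "~/Controls/ContactUs.ascx"
            string controlPath = LookUpControlPathForCurrentNode();
            if (!String.IsNullOrEmpty(controlPath))
            {
                // Load in Init, not Load, so ViewState and postback events
                // on the dynamic control keep working across postbacks.
                contentArea.Controls.Add(LoadControl(controlPath));
            }
        }

        private string LookUpControlPathForCurrentNode()
        {
            return "~/Controls/ContactUs.ascx"; // stand-in for the database query
        }
    }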
How do I make dynamic content with dynamic navigation?
I'm creating an ASP.NET web site where all pages hang off a database-driven tree-hierarchy. Pages typically present HTML content. But, some will execute programming. Examples: a "contact us" form a report generator How should I represent/reference the programming within the database? Should I have a varchar value of a Web User Control (.ascx) name? Or a Web Form (.aspx) name? Something else? Or should it just be an integer or other such ID in a dictionary within my application? Can I make an ASP.NET Site Map Provider with this structure? See more information here: Which is the best database schema for my navigation?
[ "You might consider inserting placeholders like <my:contact-us-form/> in the database on specific pages; that way the database can describe all the static text content instead of completely replacing that database-driven content with an .ascx control.\n", "Our development team has had success with defining the name of a Web User Control in the database. Upon page load it checks too see what controls to dynamically load from the database. \nWe use Web User Controls instead of Web Forms in order to ensure we can use the control on any page.\nYou can also dynamically build a site map using ASP.Net's provider. CodeProject has a good example.\n" ]
[ 2, 1 ]
[]
[]
[ "asp.net", "database", "database_design", "web_applications" ]
stackoverflow_0000056770_asp.net_database_database_design_web_applications.txt
Q: Team System get-latest-version on checkout We used to use SourceSafe, and one thing I liked about it was that when you checked out a file, it automatically got you its latest version. Now we work with Team System 2005, and it doesn't work that way - you have to "get latest version" before you start working on a file that you've checked out. Is there a way to configure Team System (2005) to automatically get the latest version when checking out a file? A: There's a Visual Studio Add-in for this that someone wrote: http://blogs.microsoft.co.il/blogs/srlteam/archive/2007/03/24/TFS-GetLatest-version-on-check_2D00_out-Add_2D00_In.aspx A: Are you sure you want that? It means that when you check out a file, it will be out of sync with the rest of your files. Your project may not build or function properly until you update all files. A: @Vaibhav: Thanks a lot! @Jay Bazuzi: I understand what you're saying, but for me it's very important that if a developer is working on a file, it be the latest version of that file. Otherwise the check in introduces a lot of problems. If for some reason, as a result of getting the latest version, the project doesn't compile, then by all means get the latest version of the whole project. For the way our team works - often check-ins - this is good. If you made changes you want to keep - shelve them.
Team System get-latest-version on checkout
We used to use SourceSafe, and one thing I liked about it was that when you checked out a file, it automatically got you its latest version. Now we work with Team System 2005, and it doesn't work that way - you have to "get latest version" before you start working on a file that you've checked out. Is there a way to configure Team System (2005) to automatically get the latest version when checking out a file?
[ "There's a Visual Studio Add-in for this that someone wrote: \nhttp://blogs.microsoft.co.il/blogs/srlteam/archive/2007/03/24/TFS-GetLatest-version-on-check_2D00_out-Add_2D00_In.aspx\n", "Are you sure you want that?\nIt means that when you check out a file, it will be out of sync with the rest of your files. Your project may not build or function properly until you update all files.\n", "@Vaibhav: Thanks a lot!\n@Jay Bazuzi: I understand what you're saying, but for me it's very important that if a developer is working on a file, it be the lastest version of that file. Otherwise the check in introduces a lot of problems. If for some reason, as a result of the getting latest version, the project doesn't compile, then by all means get the latest version of the whole project. For the way our team works - often check-ins - this is good. If you made changes you want to keep - shelve them.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "tfs" ]
stackoverflow_0000056325_tfs.txt
Q: File system info - how to query it? Is there a way to access file system info via some type of Windows API? If not, what other methods are available to a user mode developer? A: Not very clean, but you can use DeviceIoControl() Open the volume as a file, pass the resulting handle to DeviceIoControl() together with a control code. Check MSDN for control codes; there is something like "read journal record". A: In another post, someone recommended this: Keeping an Eye on Your NTFS Drives: the Windows 2000 Change Journal Explained. It explains how to use the NTFS Filesystem with C++ on Windows 2000. The implementation might have changed.
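A rough sketch of the first answer's approach: open the volume itself and hand the handle to DeviceIoControl(), here asking about the NTFS change journal. The details (and the need for administrative rights to open the volume) are assumptions to verify against MSDN:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    void QueryChangeJournal(void)
    {
        HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                  OPEN_EXISTING, 0, NULL); // needs admin rights
        if (hVol == INVALID_HANDLE_VALUE) return;

        USN_JOURNAL_DATA journal = { 0 };
        DWORD bytes = 0;
        if (DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                            &journal, sizeof(journal), &bytes, NULL))
        {
            // FSCTL_READ_USN_JOURNAL (with READ_USN_JOURNAL_DATA as input)
            // then walks the individual "journal record" entries.
            printf("Journal ID %I64x, next USN %I64d\n",
                   journal.UsnJournalID, journal.NextUsn);
        }
        CloseHandle(hVol);
    }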
File system info - how to query it?
Is there a way to access file system info via some type of Windows API? If not, what other methods are available to a user mode developer?
[ "Not very clean, but you can use DeviceIoControl()\nOpen volume as a file, pass resulting handle to DeviceIoControl() together with control code. Check MSDN for control codes, there is something like \"read journal record\".\n", "In another post, someone recommended this : Keeping an Eye on Your NTFS Drives: the Windows 2000 Change Journal Explained.\nIt explains how to use the NTFS Filesystem with C++ through Windows 2000.\nThe implementation might have changed.\n" ]
[ 2, 0 ]
[]
[]
[ "c++", "filesystems", "winapi", "windows" ]
stackoverflow_0000056741_c++_filesystems_winapi_windows.txt
Q: How do I put a link to a webpage in a JScript Alert dialog box? I would like to put a link to a webpage in an alert dialog box so that I can give a more detailed description of how to fix the error that makes the dialog box get created. How can I make the dialog box show something like this: There was an error. Go to this page to fix it. www.TheWebPageToFix.com Thanks. A: You can't. Alert boxes don't support html. You should display the error as part of the page, it's nicer than JS alerts anyway. A: If you really wanted to, you could override the default behavior of the alert() function. Not saying you should do this. Here's an example that uses the YUI library, but you don't have to use YUI to do it: YUI-based alert box - replace your ugly JavaScript alert box A: You can't - but here are some options: window.open() - make your own dialog Use prompt() and instruct the user to copy the url Use JavaScript to just navigate them to the url directly (maybe after using confirm() to ask them) Include a div on your page with a [FIX IT] button and unhide it Use JavaScript to put a fix it URL into the user's clipboard (not recommended) A: You could try asking them if they wish to visit via window.prompt: if(window.prompt('Do you wish to visit the following website?','http://www.google.ca')) location.href='http://www.google.ca/'; Also, Internet Explorer supports modal dialogs so you could try showing one of those: if (window.showModalDialog) window.showModalDialog("mypage.html","popup","dialogWidth:255px;dialogHeight:250px"); else window.open("mypage.html","name","height=255,width=250,toolbar=no,directories=no,status=no,menubar=no,scrollbars=no,resizable=no,modal=yes"); A: Or use window.open and put the link there. A: Even if you could, alert() boxes are generally modal - so any page opened from one would have to open in a new window. Annoying! A: alert("There was an error. Go to this page to fix it.\nwww.TheWebPageToFix.com"); That's the best you can do from a JavaScript alert(). Your alternative option is to try and open a new tiny window that looks like a dialog. With IE you can open it modal.
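To flesh out the "override alert()" suggestion without a library, one hedged sketch follows. Note that, unlike the native dialog, this version is not modal, so it is a trade-off rather than a drop-in replacement:

    // Replace the native alert with a div that can contain real links.
    window.alert = function (html) {
        var box = document.getElementById('fake-alert');
        if (!box) {
            box = document.createElement('div');
            box.id = 'fake-alert';
            box.style.cssText = 'position:absolute;top:40%;left:35%;width:30%;' +
                                'background:#fff;border:2px solid #000;padding:1em;';
            document.body.appendChild(box);
        }
        box.innerHTML = html + '<br><a href="#" onclick="' +
            'this.parentNode.style.display=\'none\';return false;">Close</a>';
        box.style.display = 'block';
    };

    // Usage: links now work inside the "alert".
    alert('There was an error. <a href="http://www.TheWebPageToFix.com">Fix it</a>.');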
How do I put a link to a webpage in a JScript Alert dialog box?
I would like to put a link to a webpage in an alert dialog box so that I can give a more detailed description of how to fix the error that makes the dialog box get created. How can I make the dialog box show something like this: There was an error. Go to this page to fix it. www.TheWebPageToFix.com Thanks.
[ "You can't. Alert boxes don't support html. You should display the error as part of the page, it's nicer than JS alerts anyway.\n", "If you really wanted to, you could override the default behavior of the alert() function. Not saying you should do this.\nHere's an example that uses the YUI library, but you don't have to use YUI to do it: \nYUI-based alert box - replace your ugly JavaScript alert box\n", "You can't - but here are some options:\n\nwindow.open() - make your own dialog\nUse prompt() and instruct the user to copy the url\nUse JavaScript to just navigate them to the url directly (maybe after using confirm() to ask them)\nInclude a div on your page with a [FIX IT] button and unhide it\nUse JavaScript to put a fix it URL into the user's clipboard (not recommended)\n\n", "You could try asking them if they wish to visit via window.prompt:\nif(window.prompt('Do you wish to visit the following website?','http://www.google.ca'))\n location.href='http://www.google.ca/';\n\nAlso, Internet Explorer supports modal dialogs so you could try showing one of those:\nif (window.showModalDialog)\n window.showModalDialog(\"mypage.html\",\"popup\",\"dialogWidth:255px;dialogHeight:250px\");\nelse\n window.open(\"mypage.html\",\"name\",\"height=255,width=250,toolbar=no,directories=no,status=no,menubar=no,scrollbars=no,resizable=no,modal=yes\");\n\n", "Or use window.open and put the link there.\n", "Even if you could, alert() boxes are generally modal - so any page opened from one would have to open in a new window. Annoying! \n", "alert(\"There was an error. Got to this page to fix it.\\nwww.TheWebPageToFix.com\");\n\nThat's the best you can do from a JavaScript alert(). Your alternative option is to try and open a new tiny window that looks like a dialog. With IE you can open it modal.\n" ]
[ 9, 6, 6, 5, 2, 2, 2 ]
[]
[]
[ "alert", "javascript" ]
stackoverflow_0000057202_alert_javascript.txt
Q: How do you set focus to the HTML5 canvas element? I'm using the HTML5 <canvas> element in Firefox 2.0.0.16 and in Safari 3.1.2, both on my iMac. (I've tried this in Firefox 3.0 on Windows as well, also to no avail.) The tag looks like this: <td> <canvas id="display" width="500px" height="500px"> </canvas> </td> I have a button to "activate" some functionality that interacts with the canvas. That button's onclick() event calls a function. In that function I have the following line: document.getElementById("display").focus(); This does not work. Firebug reports no error. But the focus still remains where it was. I can click on the canvas or tab towards the canvas and focus will be lost from the other elements, but apparently never be gained on by the canvas (The canvas's onfocus() event never fires). I find this odd. Is it that the canvas simply cannot get focus, or am I missing something here? Any insight would be appreciated. Thank you. A: Give the canvas a tab index: <canvas id="display" width="500px" height="500px" tabindex="1"> </canvas>
How do you set focus to the HTML5 canvas element?
I'm using the HTML5 <canvas> element in Firefox 2.0.0.16 and in Safari 3.1.2, both on my iMac. (I've tried this in Firefox 3.0 on Windows as well, also to no avail.) The tag looks like this: <td> <canvas id="display" width="500px" height="500px"> </canvas> </td> I have a button to "activate" some functionality that interacts with the canvas. That button's onclick() event calls a function. In that function I have the following line: document.getElementById("display").focus(); This does not work. Firebug reports no error. But the focus still remains where it was. I can click on the canvas or tab towards the canvas and focus will be lost from the other elements, but apparently never be gained on by the canvas (The canvas's onfocus() event never fires). I find this odd. Is it that the canvas simply cannot get focus, or am I missing something here? Any insight would be appreciated. Thank you.
[ "Give the canvas a tab index:\n <canvas id=\"display\"\n width=\"500px\"\n height=\"500px\"\n tabindex=\"1\">\n </canvas>\n\n" ]
[ 28 ]
[]
[]
[ "canvas", "focus", "javascript" ]
stackoverflow_0000056771_canvas_focus_javascript.txt
Q: Using Small (1-10 Items) Instance-Level Collections in Java While creating classes in Java I often find myself creating instance-level collections that I know ahead of time will be very small - less than 10 items in the collection. But I don't know the number of items ahead of time so I typically opt for a dynamic collection (ArrayList, Vector, etc). class Foo { ArrayList<Bar> bars = new ArrayList<Bar>(10); } A part of me keeps nagging at me that it's wasteful to use complex dynamic collections for something this small in size. Is there a better way of implementing something like this? Or is this the norm? Note, I'm not hit with any (noticeable) performance penalties or anything like that. This is just me wondering if there isn't a better way to do things. A: The ArrayList class in Java has only two data members, a reference to an Object[] array and a size—which you need anyway if you don't use an ArrayList. So the only advantage to not using an ArrayList is saving one object allocation, which is unlikely ever to be a big deal. If you're creating and disposing of many, many instances of your container class (and by extension your ArrayList instance) every second, you might have a slight problem with garbage collection churn—but that's something to worry about if it ever occurs. Garbage collection is typically the least of your worries. A: For the sake of keeping things simple, I think this is pretty much a non-issue. Your implementation is flexible enough that if the requirements change in the future, you aren't forced into a refactoring. Also, adding more logic to your code for a hybrid solution just isn't worth it taking into account your small data set and the high-quality of Java's Collection API. A: Google Collections has collections optimized for immutable/small number of elements. See Lists.asList API as an example. A: The overhead is very small. It is possible to write a hybrid array list that has fields for the first few items, and then falls back to using an array for longer list. You can avoid the overhead of the list object entirely by using an array. To go even further hardcore, you can declare the field as Object, and avoid the array altogether for a single item. If memory really is a problem, you might want to forget about using object instances at the low-level. Instead use a larger data structure at a larger level of granularity.
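For the "hybrid array list" idea in the last answer, a minimal Java sketch follows. It is a rough illustration (it assumes null is never stored), not a tuned collection:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Holds the first element in a plain field and allocates a real
    // ArrayList only when a second element arrives.
    class SmallBag<T> {
        private T single;      // used while the bag holds 0 or 1 items
        private List<T> rest;  // created lazily on the second add

        void add(T item) {     // assumes null items are never added
            if (rest != null) {
                rest.add(item);
            } else if (single == null) {
                single = item;
            } else {
                rest = new ArrayList<T>(4);
                rest.add(single);
                rest.add(item);
                single = null;
            }
        }

        List<T> asList() {
            if (rest != null) return rest;
            if (single != null) return Collections.singletonList(single);
            return Collections.<T>emptyList();
        }
    }

As the first answer notes, this saves at most one small allocation per container, so it is only worth the extra logic when profiling shows those allocations actually matter.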
Using Small (1-10 Items) Instance-Level Collections in Java
While creating classes in Java I often find myself creating instance-level collections that I know ahead of time will be very small - less than 10 items in the collection. But I don't know the number of items ahead of time so I typically opt for a dynamic collection (ArrayList, Vector, etc). class Foo { ArrayList<Bar> bars = new ArrayList<Bar>(10); } A part of me keeps nagging at me that it's wasteful to use complex dynamic collections for something this small in size. Is there a better way of implementing something like this? Or is this the norm? Note, I'm not hit with any (noticeable) performance penalties or anything like that. This is just me wondering if there isn't a better way to do things.
[ "The ArrayList class in Java has only two data members, a reference to an Object[] array and a size—which you need anyway if you don't use an ArrayList. So the only advantage to not using an ArrayList is saving one object allocation, which is unlikely ever to be a big deal.\nIf you're creating and disposing of many, many instances of your container class (and by extension your ArrayList instance) every second, you might have a slight problem with garbage collection churn—but that's something to worry about if it ever occurs. Garbage collection is typically the least of your worries.\n", "For the sake of keeping things simple, I think this is pretty much a non-issue. Your implementation is flexible enough that if the requirements change in the future, you aren't forced into a refactoring. Also, adding more logic to your code for a hybrid solution just isn't worth it taking into account your small data set and the high-quality of Java's Collection API.\n", "Google Collections has collections optimized for immutable/small number of elements. See Lists.asList API as an example.\n", "The overhead is very small. It is possible to write a hybrid array list that has fields for the first few items, and then falls back to using an array for longer list.\nYou can avoid the overhead of the list object entirely by using an array. To go even further hardcore, you can declare the field as Object, and avoid the array altogether for a single item.\nIf memory really is a problem, you might want to forget about using object instances at the low-level. Instead use a larger data structure at a larger level of granularity.\n" ]
[ 10, 3, 2, 1 ]
[]
[]
[ "collections", "java" ]
stackoverflow_0000057145_collections_java.txt
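A sketch of the "hybrid array list" idea from the last answer above, since it is easy to read past. This is an invented, append-only illustration (the class name SmallList, the two inline fields and the spill size of 8 are all assumptions, not anything from the JDK or Google Collections); it keeps the first two elements in plain fields and only allocates an array when a third element arrives:

import java.util.AbstractList;

public class SmallList<E> extends AbstractList<E> {
    private E first;       // element 0, held inline
    private E second;      // element 1, held inline
    private Object[] rest; // spill-over storage, created lazily
    private int size;

    @Override
    public void add(int index, E element) {
        if (index != size) {
            throw new UnsupportedOperationException("append-only sketch");
        }
        if (size == 0) {
            first = element;
        } else if (size == 1) {
            second = element;
        } else {
            if (rest == null) {
                rest = new Object[8];            // first allocation, at the third add
            } else if (size - 2 == rest.length) {
                Object[] bigger = new Object[rest.length * 2]; // grow, as ArrayList does
                System.arraycopy(rest, 0, bigger, 0, rest.length);
                rest = bigger;
            }
            rest[size - 2] = element;
        }
        size++;
    }

    @SuppressWarnings("unchecked")
    @Override
    public E get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("index " + index + ", size " + size);
        }
        if (index == 0) return first;
        if (index == 1) return second;
        return (E) rest[index - 2];
    }

    @Override
    public int size() {
        return size;
    }
}

AbstractList implements add(E) on top of add(int, E), so plain bars.add(x) calls work, and a list holding 0-2 elements allocates nothing beyond the object itself. The other answers' point stands, though: this saves one small array allocation per list at the cost of real code, and Google Collections' Lists.asList gives similar small-list views without writing any of it.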
Q: How can you find out where the style for an ASP .Net web page element came from? I have a quandary. My web application (C#, .Net 3.0, etc) has Themes, CSS sheets and, of course, inline style definitions. Now that's a lot of chefs adding stuff to the soup. All of this results, not surprisingly, in my pages having bizarre styling on occasion. I am sure that all these styles are applied in a hierarchical manner (although I am not sure of that order). The issue is that each style is applied as a "transparent" layer which just masks what it is applying. This is, I feel, a good idea as you can specify styles for the whole and then one-off them as needed. Unfortunately I can't tell which layer the style actually came from. I could solve this issue by explicitly expressing the style at all layers but that gets bulky and hard to manage and the page(s) works 80% of the time. I just need to figure out where that squirrelly 20% came from. A: IMHO, Firebug is going to be your best bet. It will tell you which file the style came from and you can click on the filename to be transported instantly to the relevant line in the file. Note: You can hit ctrl+shift+C on any page to select and inspect an element with the mouse. A: Here's a quick screencast of how to use Firebug to find out where an element is getting its style. http://screencast.com/t/oFpuDUoJ0 A: in Firefox use the DOM inspector, firebug, or inspect this. in IE, use the IE dev toolbar (or, maybe better, Firebug Lite) In Google Chrome, use the built-in "inspect element" functionality A: Using the IE Developer Toolbar you can select an element (either by "Select element by click" or clicking on its node in the DOM tree view) and in the Current Style pane, right click on a row and select "Trace Style". The other tools have a similar feature.
How can you find out where the style for an ASP .Net web page element came from?
I have a quandary. My web application (C#, .Net 3.0, etc) has Themes, CSS sheets and, of course, inline style definitions. Now that's a lot of chefs adding stuff to the soup. All of this results, not surprisingly, in my pages having bizarre styling on occasion. I am sure that all these styles are applied in a hierarchical manner (although I am not sure of that order). The issue is that each style is applied as a "transparent" layer which just masks what it is applying. This is, I feel, a good idea as you can specify styles for the whole and then one-off them as needed. Unfortunately I can't tell which layer the style actually came from. I could solve this issue by explicitly expressing the style at all layers but that gets bulky and hard to manage and the page(s) works 80% of the time. I just need to figure out where that squirrelly 20% came from.
[ "IMHO, Firebug is going to be your best bet. It will tell you which file the style came from and you can click on the filename to be transported instantly to the relevant line in the file. \nNote: You can hit ctrl+shift+C on any page to select and inspect an element with the mouse.\n", "Here's a quick screencast of how to use Firebug to find out where an element is getting its style.\nhttp://screencast.com/t/oFpuDUoJ0\n", "in Firefox use the DOM inspector, firebug, or inspect this.\nin IE, use the IE dev toolbar (or, maybe better, Firebug Lite)\nIn Google Chrome, use the built-in \"inspect element\" functionality\n", "Using the IE Developer Toolbar you can select an element (either by \"Select element by click\" or clicking on its node in the DOM tree view) and in the Current Style pane, right click on a row and select \"Trace Style\".\nThe other tools have a similar feature.\n" ]
[ 5, 4, 2, 2 ]
[ "The key to solving a complex CSS issue is to work out what is causing the weird appearance. The easiest way to find is to selectively comment out stylesheets until you find the one where commenting it out fixes the problem. Then enable the stylesheet and selectively comment out rules until you find the one causing the problem. If you need to know what takes precedence over what, the details of the cascade in CSS is detailed \nhere and unlike the implementation of individual rules, this is fairly consistent across browsers.\nHowever, it is much better if you avoid inline styles entirely and have a set of well-crafted stylesheets, each of which has a logical function and all of whose rules you understand. for the same reason you don't put your server-side code in a random order in random files - us\n" ]
[ -1 ]
[ "asp.net", "c#", "css", "themes" ]
stackoverflow_0000054337_asp.net_c#_css_themes.txt
Q: Batch renaming of files with international chars on Windows XP I have a whole bunch of files with filenames using our lovely Swedish letters å, ä and ö. For various reasons I now need to convert these to an [a-zA-Z] range. Just removing anything outside this range is fairly easy. The thing that's causing me trouble is that I'd like to replace å with a, ö with o and so on. This is charset troubles at their worst. I have a set of test files: files\Copy of New Text Documen åäö t.txt files\fofo.txt files\New Text Document.txt files\worstcase åäöÅÄÖéÉ.txt I'm basing my script on this line, piping its results into various commands for %%X in (files\*.txt) do (echo %%X) The weird thing is that if I print the results of this (the plain for-loop that is) into a file I get this output: files\Copy of New Text Documen †„” t.txt files\fofo.txt files\New Text Document.txt files\worstcase †„”Ž™‚.txt So something weird is happening to my filenames before they even reach the other tools (I've been trying to do this using a sed port for Windows from something called GnuWin32 but no luck so far) and doing the replace on these characters doesn't help either. How would you solve this problem? I'm open to any type of tools, commandline or otherwise… EDIT: This is a one time problem, so I'm looking for a quick 'n ugly fix A: You might have more luck in cmd.exe if you opened it in UNICODE mode. Use "cmd /U". Others have proposed using a real programming language. That's fine, especially if you have a language you are very comfortable with. My friend on the C# team says that C# 3.0 (with Linq) is well-suited to whipping up quick, small programs like this. He has stopped writing batch files most of the time. Personally, I would choose PowerShell. This problem can be solved right on the command line, and in a single line. EDIT: it's not one line, but it's not a lot of code, either. Also, it looks like StackOverflow doesn't like the syntax "$_.Name", and renders the _ as &#95. $mapping = @{ "å" = "a" "ä" = "a" "ö" = "o" } Get-ChildItem -Recurse . *.txt | Foreach-Object { $newname = $_.Name foreach ($l in $mapping.Keys) { $newname = $newname.Replace( $l, $mapping[$l] ) $newname = $newname.Replace( $l.ToUpper(), $mapping[$l].ToUpper() ) } Rename-Item -WhatIf $_.FullName $newname # remove the -WhatIf when you're ready to do it for real. } A: You can use this code (Python) Rename international files # -*- coding: cp1252 -*- import os, shutil base_dir = "g:\\awk\\" # Base Directory (includes subdirectories) char_table_1 = "áéíóúñ" char_table_2 = "aeioun" adirs = os.walk (base_dir) for adir in adirs: dir = adir[0] + "\\" # Directory # print "\nDir : " + dir for file in adir[2]: # List of files if os.access(dir + file, os.R_OK): file2 = file for i in range (0, len(char_table_1)): file2 = file2.replace (char_table_1[i], char_table_2[i]) if file2 != file: # Different, rename print dir + file, " => ", file2 shutil.move (dir + file, dir + file2) ### You have to change your encoding and your char tables (I tested this script with Spanish files and it works fine). You can comment the "move" line to check if it's working ok, and remove the comment later to do the renaming. A: I would write this in C++, C#, or Java -- environments where I know for certain that you can get the Unicode characters out of a path properly. It's always uncertain with command-line tools, especially out of Cygwin. Then the code is a simple find/replace or regex/replace. If you can name a language it would be easy to write the code. 
A: I'd write a vbscript (WSH) to scan the directories, then send the filenames to a function that breaks up the filenames into their individual letters, then does a SELECT CASE on the Swedish ones and replaces them with the ones you want. Or, instead of doing that, the function could just drop it through a bunch of REPLACE() functions, reassigning the output to the input string. At the end it then renames the file with the new value.
Batch renaming of files with international chars on Windows XP
I have a whole bunch of files with filenames using our lovely Swedish letters å, ä and ö. For various reasons I now need to convert these to an [a-zA-Z] range. Just removing anything outside this range is fairly easy. The thing that's causing me trouble is that I'd like to replace å with a, ö with o and so on. This is charset troubles at their worst. I have a set of test files: files\Copy of New Text Documen åäö t.txt files\fofo.txt files\New Text Document.txt files\worstcase åäöÅÄÖéÉ.txt I'm basing my script on this line, piping its results into various commands for %%X in (files\*.txt) do (echo %%X) The weird thing is that if I print the results of this (the plain for-loop that is) into a file I get this output: files\Copy of New Text Documen †„” t.txt files\fofo.txt files\New Text Document.txt files\worstcase †„”Ž™‚.txt So something weird is happening to my filenames before they even reach the other tools (I've been trying to do this using a sed port for Windows from something called GnuWin32 but no luck so far) and doing the replace on these characters doesn't help either. How would you solve this problem? I'm open to any type of tools, commandline or otherwise… EDIT: This is a one time problem, so I'm looking for a quick 'n ugly fix
[ "You might have more luck in cmd.exe if you opened it in UNICODE mode. Use \"cmd /U\".\nOthers have proposed using a real programming language. That's fine, especially if you have a language you are very comfortable with. My friend on the C# team says that C# 3.0 (with Linq) is well-suited to whipping up quick, small programs like this. He has stopped writing batch files most of the time.\nPersonally, I would choose PowerShell. This problem can be solved right on the command line, and in a single line.\nEDIT: it's not one line, but it's not a lot of code, either. Also, it looks like StackOverflow doesn't like the syntax \"$_.Name\", and renders the _ as &#95.\n$mapping = @{ \n \"å\" = \"a\"\n \"ä\" = \"a\"\n \"ö\" = \"o\"\n}\n\nGet-ChildItem -Recurse . *.txt | Foreach-Object { \n $newname = $_.Name \n foreach ($l in $mapping.Keys) {\n $newname = $newname.Replace( $l, $mapping[$l] )\n $newname = $newname.Replace( $l.ToUpper(), $mapping[$l].ToUpper() )\n }\n Rename-Item -WhatIf $_.FullName $newname # remove the -WhatIf when you're ready to do it for real.\n}\n\n", "You can use this code (Python)\nRename international files\n# -*- coding: cp1252 -*-\n\nimport os, shutil\n\nbase_dir = \"g:\\\\awk\\\\\" # Base Directory (includes subdirectories)\nchar_table_1 = \"áéíóúñ\"\nchar_table_2 = \"aeioun\"\n\nadirs = os.walk (base_dir)\n\nfor adir in adirs:\n dir = adir[0] + \"\\\\\" # Directory\n # print \"\\nDir : \" + dir\n\n for file in adir[2]: # List of files\n if os.access(dir + file, os.R_OK):\n file2 = file\n for i in range (0, len(char_table_1)):\n file2 = file2.replace (char_table_1[i], char_table_2[i])\n\n if file2 != file:\n # Different, rename\n print dir + file, \" => \", file2\n shutil.move (dir + file, dir + file2)\n\n###\n\nYou have to change your encoding and your char tables (I tested this script with Spanish files and it works fine). You can comment the \"move\" line to check if it's working ok, and remove the comment later to do the renaming.\n", "I would write this in C++, C#, or Java -- environments where I know for certain that you can get the Unicode characters out of a path properly. It's always uncertain with command-line tools, especially out of Cygwin.\nThen the code is a simple find/replace or regex/replace. If you can name a language it would be easy to write the code.\n", "I'd write a vbscript (WSH) to scan the directories, then send the filenames to a function that breaks up the filenames into their individual letters, then does a SELECT CASE on the Swedish ones and replaces them with the ones you want. Or, instead of doing that, the function could just drop it through a bunch of REPLACE() functions, reassigning the output to the input string. At the end it then renames the file with the new value.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "batch_file", "file", "rename", "utf_8", "windows" ]
stackoverflow_0000056913_batch_file_file_rename_utf_8_windows.txt
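To make the "write it in C++, C#, or Java" answer concrete, here is a rough Java sketch along the same lines as the Python answer (the class name AsciifyNames, the default files folder and the character tables are illustration-only assumptions). Java's NIO reads directory entries as Unicode, which sidesteps the codepage mangling shown in the question; save the source as UTF-8 and compile with javac -encoding UTF-8 so the string literals survive:

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class AsciifyNames {
    // Parallel tables, same idea as char_table_1/char_table_2 in the Python answer.
    private static final String FROM = "åäöéÅÄÖÉ";
    private static final String TO   = "aaoeAAOE";

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args.length > 0 ? args[0] : "files");
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir, "*.txt")) {
            for (Path p : entries) {
                String name = p.getFileName().toString();
                StringBuilder fixed = new StringBuilder(name.length());
                for (char c : name.toCharArray()) {
                    int i = FROM.indexOf(c);
                    fixed.append(i >= 0 ? TO.charAt(i) : c); // unmapped chars pass through
                }
                String newName = fixed.toString();
                if (!newName.equals(name)) {
                    System.out.println(name + " => " + newName); // log each rename
                    // Note: throws if two names collapse to the same target.
                    Files.move(p, p.resolveSibling(newName));
                }
            }
        }
    }
}

For a dry run in the spirit of the PowerShell answer's -WhatIf, comment out the Files.move line and just read the printed old => new pairs before renaming for real.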