Dataset fields:
content: string (86 to 88.9k chars)
title: string (0 to 150 chars)
question: string (1 to 35.8k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (30 to 130 chars)
Q: Browser scrollbar I have a website that is perfectly center-aligned. The CSS code works fine. The problem doesn't really have to do with CSS. I have headers for each page that perfectly match each other. However, when the content gets larger, Opera and Firefox show a scrollbar at the left so you can scroll to the content not on the screen. This makes my site jump a few pixels to the left. Thus the headers are not perfectly aligned anymore. IE always has a scrollbar, so the site never jumps around in IE. Does anyone know a JavaScript/CSS/HTML solution for this problem? A: I use html { overflow-y: scroll; } to standardize the scrollbar behavior in IE and FF A: FWIW: I use html { height: 101%; } to force scrollbars to always appear in Firefox. A: Are you aligning with percentage widths or fixed widths? I'm also guessing you're applying a background to the body - I've had this problem myself. It'll be much easier to help you if you upload the page so we can see the source code however. A: #middle { position: relative; margin: 0px auto 0px auto; width: 1000px; max-width: 1000px; } is my centered DIV A: Well you don't need the position: relative; - it should work fine without it. I take it that div has to be 1000px wide? It would still be a lot easier to answer this with the actual website.
Browser scrollbar
I have a website that is perfectly center-aligned. The CSS code works fine. The problem doesn't really have to do with CSS. I have headers for each page that perfectly match each other. However, when the content gets larger, Opera and Firefox show a scrollbar at the left so you can scroll to the content not on the screen. This makes my site jump a few pixels to the left. Thus the headers are not perfectly aligned anymore. IE always has a scrollbar, so the site never jumps around in IE. Does anyone know a JavaScript/CSS/HTML solution for this problem?
[ "I use \nhtml { overflow-y: scroll; }\n\nTo standardize the scrollbar behavior in IE and FF\n", "FWIW: I use\nhtml { height: 101%; }\n\nto force scrollbars to always appear in Firefox.\n", "Are you aligning with percentage widths or fixed widths? I'm also guessing you're applying a background to the body - I've had this problem myself.\nIt'll be much easier to help you if you upload the page so we can see the source code however.\n", " #middle \n { \nposition: relative;\nmargin: 0px auto 0px auto; \nwidth: 1000px; \nmax-width: 1000px;\n}\n\nis my centered DIV\n", "Well you don't need the position: relative; - it should work fine without it.\nI take it that div has to be 1000px wide? It would still be a lot easier to answer this with the actual website.\n" ]
[ 9, 2, 0, 0, 0 ]
[]
[]
[ "browser" ]
stackoverflow_0000026879_browser.txt
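A minimal sketch combining the top-voted answer with the asker's centering rule; the selector and width come from the thread, nothing else is assumed:

html { overflow-y: scroll; }   /* force the vertical scrollbar so the layout never shifts */

#middle {
    margin: 0 auto;            /* horizontal centering; position: relative is unnecessary */
    width: 1000px;
}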
Q: LINQ to SQL: How to write a 'Like' select? I've got the following SQL: select * from transaction_log where stoptime like '%2008%' How do I write this in LINQ to SQL syntax? A: If you want to use the literal method, it's like this: var query = from l in transaction_log where SqlMethods.Like(l.stoptime, "%2008%") select l; Another option is: var query = from l in transaction_log where l.stoptime.Contains("2008") select l; If it's a DateTime: var query = from l in transaction_log where l.stoptime.Year == 2008 select l; That method is in the System.Data.Linq.SqlClient namespace A: from x in context.Table where x.Contains("2008") select x A: If the stoptime data type is string, you can use the .Contains() function, and also .StartsWith() and .EndsWith(). A: If you use the Contains method then you are doing a LIKE '%somestring%'. If you use a StartsWith method then it is the same as 'somestring%'. Finally, EndsWith is the same as using '%somestring'. To summarize, Contains will find any pattern in the string but StartsWith and EndsWith will help you find matches at the beginning and end of the word. A: The really interesting point is that .NET creates queries like "Select * from table where name like '%test%'" when you use "from x in context.Table where x.Contains("test") select x", which is quite impressive A: Thanks--good answers. This is, in fact, a DateTime type; I had to typecast "stoptime" as: var query = from p in dbTransSummary.Transaction_Logs where ( (DateTime) p.StopTime).Year == dtRollUpDate.Year select p; Minor point. It works great!
LINQ to SQL: How to write a 'Like' select?
I've got the following SQL: select * from transaction_log where stoptime like '%2008%' How do I write this in LINQ to SQL syntax?
[ "If you want to use the literal method, it's like this:\nvar query = from l in transaction_log\n where SqlMethods.Like(l.stoptime, \"%2008%\")\n select l;\n\nAnother option is:\nvar query = from l in transaction_log\n where l.stoptime.Contains(\"2008\")\n select l;\n\nIf it's a DateTime:\nvar query = from l in transaction_log\n where l.stoptime.Year = 2008\n select l;\n\nThat method is in the System.Data.Linq.SqlClient namespace\n", "from x in context.Table where x.Contains(\"2008\") select x\n\n", "If stoptime data type is string, you can use .Contains() function, and also .StartsWith() and .EndsWith().\n", "If you use the contains to method then you are doing a LIKE '%somestring%'. If you use a startswith method then it is the same as 'somestring%'. Finally, endswith is the same as using '%somestring'.\nTo summarize, contains will find any pattern in the string but startswith and endswith will help you find matches at the beginning and end of the word.\n", "The really interesting point is, that .NET creates queries like \"Select * from table where name like '%test%'\" when you use \"from x in context.Table where x.Contains(\"test\") select x\" which is quite impressing\n", "Thanks--good answers.\nThis is, in fact, a DateTime type; I had to typecast \"stoptime\" as:\nvar query = from p in dbTransSummary.Transaction_Logs\n where ( (DateTime) p.StopTime).Year == dtRollUpDate.Year\n select\n\nMinor point. It works great!\n" ]
[ 30, 1, 1, 0, 0, 0 ]
[]
[]
[ "linq_to_sql" ]
stackoverflow_0000091986_linq_to_sql.txt
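A compilable sketch of the approaches above. The data context and the column names are hypothetical (StopTimeText for a string mapping, StopTime for a DateTime mapping); SqlMethods.Like only translates when used inside a LINQ to SQL query.

using System;
using System.Linq;
using System.Data.Linq.SqlClient;

class Program
{
    static void Main()
    {
        using (var db = new TransactionLogDataContext()) // hypothetical context
        {
            // String column: translates to WHERE StopTimeText LIKE '%2008%'.
            var byLike = from l in db.Transaction_Logs
                         where SqlMethods.Like(l.StopTimeText, "%2008%")
                         select l;

            // DateTime column: compare the Year component instead.
            var byYear = from l in db.Transaction_Logs
                         where l.StopTime.Year == 2008
                         select l;

            Console.WriteLine("LIKE matches: {0}", byLike.Count());
            Console.WriteLine("Year matches: {0}", byYear.Count());
        }
    }
}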
Q: How can I split a string using regex to return a list of values? How can I take the string foo[]=1&foo[]=5&foo[]=2 and return a collection with the values 1,5,2 in that order? I am looking for an answer using regex in C#. Thanks A: In C# you can use capturing groups private void RegexTest() { String input = "foo[]=1&foo[]=5&foo[]=2"; String pattern = @"foo\[\]=(\d+)"; Regex regex = new Regex(pattern); foreach (Match match in regex.Matches(input)) { Console.Out.WriteLine(match.Groups[1]); } } A: I don't know C#, but... In Java: String[] nums = yourString.split("&?foo\\[\\]="); The argument to the String.split() method is a regex telling the method where to split the String. A: Use the Regex.Split() method with an appropriate regex. This will split on parts of the string that match the regular expression and return the results as a string[]. Assuming you want all the values in your querystring without checking if they're numeric (and without just matching on names like foo[]) you could use this: "&?[^&=]+=" string[] values = Regex.Split("foo[]=1&foo[]=5&foo[]=2", "&?[^&=]+="); Incidentally, if you're playing with regular expressions the site http://gskinner.com/RegExr/ is fantastic (I'm just a fan). A: I'd use this particular pattern: string re = @"foo\[\]=(?<value>\d+)"; So something like (not tested): Regex reValues = new Regex(re, RegexOptions.Compiled); List<int> values = new List<int>(); foreach (Match m in reValues.Matches(...putInputStringHere...)) { values.Add(int.Parse(m.Groups["value"].Value)); } A: Assuming you're dealing with numbers this pattern should match: /=(\d+)&?/ A: This should do: using System.Text.RegularExpressions; Regex.Replace(s, @"[^0-9]+", ""); Where s is your String where you want the numbers to be extracted. A: Just make sure to escape the ampersand like so: /=(\d+)\&/ A: Here's an alternative solution using the built-in string.Split function: string x = "foo[]=1&foo[]=5&foo[]=2"; string[] separator = new string[2] { "foo[]=", "&" }; string[] vals = x.Split(separator, StringSplitOptions.RemoveEmptyEntries);
How can I split a string using regex to return a list of values?
How can I take the string foo[]=1&foo[]=5&foo[]=2 and return a collection with the values 1,5,2 in that order? I am looking for an answer using regex in C#. Thanks
[ "In C# you can use capturing groups\n private void RegexTest()\n {\n String input = \"foo[]=1&foo[]=5&foo[]=2\";\n String pattern = @\"foo\\[\\]=(\\d+)\";\n\n Regex regex = new Regex(pattern);\n\n foreach (Match match in regex.Matches(input))\n {\n Console.Out.WriteLine(match.Groups[1]);\n }\n }\n\n", "I don't know C#, but...\nIn java:\nString[] nums = String.split(yourString, \"&?foo[]\");\n\nThe second argument in the String.split() method is a regex telling the method where to split the String.\n", "Use the Regex.Split() method with an appropriate regex. This will split on parts of the string that match the regular expression and return the results as a string[]. \nAssuming you want all the values in your querystring without checking if they're numeric, (and without just matching on names like foo[]) you could use this: \"&?[^&=]+=\" \nstring[] values = Regex.Split(“foo[]=1&foo[]=5&foo[]=2”, \"&?[^&=]+=\");\n\nIncidentally, if you're playing with regular expressions the site http://gskinner.com/RegExr/ is fantastic (I'm just a fan).\n", "I'd use this particular pattern:\nstring re = @\"foo\\[\\]=(?<value>\\d+)\";\n\nSo something like (not tested):\nRegex reValues = new Regex(re,RegexOptions.Compiled);\nList<integer> values = new List<integer>();\n\nforeach (Match m in reValues.Matches(...putInputStringHere...)\n{\n values.Add((int) m.Groups(\"value\").Value);\n}\n\n", "Assuming you're dealing with numbers this pattern should match:\n/=(\\d+)&?/\n\n", "This should do:\nusing System.Text.RegularExpressions;\n\nRegex.Replace(s, !@\"^[0-9]*$”, \"\");\n\nWhere s is your String where you want the numbers to be extracted.\n", "Just make sure to escape the ampersand like so:\n/=(\\d+)\\&/\n\n", "Here's an alternative solution using the built-in string.Split function:\nstring x = \"foo[]=1&foo[]=5&foo[]=2\";\nstring[] separator = new string[2] { \"foo[]=\", \"&\" };\nstring[] vals = x.Split(separator, StringSplitOptions.RemoveEmptyEntries);\n\n" ]
[ 4, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c#", "regex" ]
stackoverflow_0000093983_c#_regex.txt
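A self-contained version of the capturing-group answer that returns the values in input order; the method name is mine:

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class Program
{
    // Collects every captured digit group, in order of appearance.
    static List<string> ExtractValues(string input)
    {
        var values = new List<string>();
        foreach (Match match in Regex.Matches(input, @"foo\[\]=(\d+)"))
        {
            values.Add(match.Groups[1].Value);
        }
        return values;
    }

    static void Main()
    {
        // Prints: 1, 5, 2
        Console.WriteLine(string.Join(", ",
            ExtractValues("foo[]=1&foo[]=5&foo[]=2").ToArray()));
    }
}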
Q: How can I add pulldowns and checkboxes in an MS Outlook email? I want to create a small survey in an email message. Users are to respond using free-form text boxes, check boxes, or a pre-defined drop-down list. I see applications that claim to be able to do that. My needs are not that elaborate. Just a few questions that need to be asked. A: In Outlook 2007 there is functionality to create polls (Voting) which may satisfy your needs: This feature requires you to use a Microsoft Exchange Server 2000, Exchange Server 2003, or Exchange Server 2007 account. A demonstration is provided here. A: You can simply include this as a normal HTML form in a mime part. See http://abiglime.com/webmaster/articles/cgi/010698.htm for how to do that. However, many email clients will not display this. For example, in Thunderbird, there are settings for displaying messages: "Original HTML", "Simple HTML", "Plain text". It will only display a form if it is set to "Original HTML". Additionally, you may get security warnings from some email clients when trying to do the actual post from your email message over to the web site (I'm not sure about that as I've never tried). I can see the appeal of making a survey easy to use in an email, but you should at least provide alternate links to access the survey on a website for users that can't see the form. And be sure to test this using a wide variety of email clients, e.g.: Thunderbird, Outlook, Outlook Express, Gmail, Yahoo, MSN/Hotmail,... A: Can't you use HTML to make it work? A: You can create a custom form within Outlook that contains the controls you want. Use that form when creating a new email message. That will work.
How can I add pulldowns and checkboxes in an MS Outlook email?
I want to create a small survey in an email message. Users are to respond using free-form text boxes, check boxes, or a pre-defined drop-down list. I see applications that claim to be able to do that. My needs are not that elaborate. Just a few questions that need to be asked.
[ "In Outlook 2007 there is functionality to create polls (Voting) which may satisfy your needs:\n\nThis feature requires you to use a Microsoft Exchange Server 2000, Exchange Server 2003, or Exchange Server 2007 account.\n\nA demonstration is provided here.\n", "You can simply include this as a normal HTML form in a mime part. See http://abiglime.com/webmaster/articles/cgi/010698.htm for how to do that.\nHowever, many email clients will not display this. For example, in Thunderbird, there are settings for displaying message: \"Original HTML\", \"Simple HTML\", \"Plain text\". It will only display a form if it is set to \"Original HTML\".\nAdditionally, you may get security warnings from some email clients when trying to do the actual post from your email message over to the web site (I'm not sure about that as I've never tried). \nI can see the appeal of making a survey easy to use in an email, but you should at least provide alternate links to access the survey on a website for users that can't see the form. And be sure to test this using a wide variety of email clients, eg: Thunderbird, Outlook, Outlook Express, Gmail, Yahoo, MSN/Hotmail,... \n", "Cant you use HTML to make it work?\n", "You can create a custom form within outlook that contains the controls you want. Use that form when creating a new email message. That will work.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "drop_down_menu", "outlook", "survey" ]
stackoverflow_0000094634_drop_down_menu_outlook_survey.txt
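For the HTML-form route, a sketch of the markup that would sit in the HTML part of the message. The action URL and field names are placeholders, and, as the answer above warns, many clients will refuse to render or submit it:

<form method="post" action="https://example.com/survey">
  <p>Did this work for you?
     <input type="checkbox" name="worked" value="yes" /> Yes</p>
  <p>Which version do you use?
     <select name="version">
       <option>Outlook 2007</option>
       <option>Outlook 2003</option>
     </select></p>
  <p>Comments: <input type="text" name="comments" size="40" /></p>
  <p><input type="submit" value="Send" /></p>
</form>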
Q: ASP XML Parsing I am new to ASP and have a deadline in the next few days. I receive the following XML from within a web service response. print("<?xml version="1.0" encoding="UTF-8"?> <user_data> <execution_status>0</execution_status> <row_count>1</row_count> <txn_id>stuetd678</txn_id> <person_info> <attribute name="firstname">john</attribute> <attribute name="lastname">doe</attribute> <attribute name="emailaddress">[email protected]</attribute> </person_info> </user_data>"); How can I parse this XML into ASP variables? Any help is greatly appreciated. Thanks, Damien. On more analysis, some SOAP content is also returned, as the above response is from a web service call. Can I still use Luke's code below? A: You need to read about the MSXML parser. Here is a link to a good all-in-one example http://oreilly.com/pub/h/466 Some reading on XPath will help as well. You can find all the information you need on MSDN. Stealing the code from Luke's excellent reply for aggregation purposes: Dim oXML, oNode, sKey, sValue Set oXML = Server.CreateObject("MSXML2.DomDocument.6.0") 'creating the parser object oXML.LoadXML(sXML) 'loading the XML from the string For Each oNode In oXML.SelectNodes("/user_data/person_info/attribute") sKey = oNode.GetAttribute("name") sValue = oNode.Text Select Case sKey Case "execution_status" ... 'do something with the tag value Case else ... 'unknown tag End Select Next Set oXML = Nothing A: By ASP I assume you mean Classic ASP? Try: Dim oXML, oNode, sKey, sValue Set oXML = Server.CreateObject("MSXML2.DomDocument.4.0") oXML.LoadXML(sXML) For Each oNode In oXML.SelectNodes("/user_data/person_info/attribute") sKey = oNode.GetAttribute("name") sValue = oNode.Text ' Do something with these values here Next Set oXML = Nothing The above code assumes you have your XML in a variable called sXML. If you are consuming this via a ServerXMLHttp request, you should be able to use the ResponseXML property of your object in place of oXML above and skip the LoadXML step altogether. A: You could try loading the XML into the XMLDocument object and then parsing it using its methods.
ASP XML Parsing
I am new to ASP and have a deadline in the next few days. I receive the following XML from within a web service response. print("<?xml version="1.0" encoding="UTF-8"?> <user_data> <execution_status>0</execution_status> <row_count>1</row_count> <txn_id>stuetd678</txn_id> <person_info> <attribute name="firstname">john</attribute> <attribute name="lastname">doe</attribute> <attribute name="emailaddress">[email protected]</attribute> </person_info> </user_data>"); How can I parse this XML into ASP variables? Any help is greatly appreciated. Thanks, Damien. On more analysis, some SOAP content is also returned, as the above response is from a web service call. Can I still use Luke's code below?
[ "You need to read about MSXML parser. Here is a link to a good all-in-one example http://oreilly.com/pub/h/466\nSome reading on XPath will help as well. You could get all the information you need in MSDN.\nStealing the code from Luke excellent reply for aggregation purposes:\nDim oXML, oNode, sKey, sValue\n\nSet oXML = Server.CreateObject(\"MSXML2.DomDocument.6.0\") 'creating the parser object\noXML.LoadXML(sXML) 'loading the XML from the string\n\nFor Each oNode In oXML.SelectNodes(\"/user_data/person_info/attribute\")\n sKey = oNode.GetAttribute(\"name\")\n sValue = oNode.Text\n Select Case sKey\n Case \"execution_status\"\n ... 'do something with the tag value\n Case else\n ... 'unknown tag\n End Select\nNext\n\nSet oXML = Nothing\n\n", "By ASP I assume you mean Classic ASP? Try:\nDim oXML, oNode, sKey, sValue\n\nSet oXML = Server.CreateObject(\"MSXML2.DomDocument.4.0\")\noXML.LoadXML(sXML)\n\nFor Each oNode In oXML.SelectNodes(\"/user_data/person_info/attribute\")\n sKey = oNode.GetAttribute(\"name\")\n sValue = oNode.Text\n ' Do something with these values here\nNext\n\nSet oXML = Nothing\n\nThe above code assumes you have your XML in a variable called sXML. If you are consuming this via an ServerXMLHttp request, you should be able to use the ResponseXML property of your object in place of oXML above and skip the LoadXML step altogether.\n", "You could try loading the xml into the xmldocument object and then parse it using it's methods.\n" ]
[ 9, 6, 0 ]
[]
[]
[ "asp_classic", "parsing", "xml" ]
stackoverflow_0000094689_asp_classic_parsing_xml.txt
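Since the update mentions a SOAP envelope around the payload, here is a sketch of how the classic ASP code above can cope with it by registering the envelope namespace before querying. The response variable name is an assumption; the payload elements themselves are unqualified, so the attribute loop is unchanged:

Dim oXML, oNode
Set oXML = Server.CreateObject("MSXML2.DomDocument.6.0")
oXML.async = False
oXML.LoadXML(sSoapResponse)   ' assumed: the raw SOAP response text

' Register the SOAP envelope namespace for XPath queries.
oXML.setProperty "SelectionLanguage", "XPath"
oXML.setProperty "SelectionNamespaces", _
    "xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'"

' Descend past the envelope to the unqualified payload.
For Each oNode In oXML.SelectNodes("//soap:Body//person_info/attribute")
    Response.Write oNode.GetAttribute("name") & " = " & oNode.Text & "<br />"
Next
Set oXML = Nothing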
Q: Regex to match name1.name2[.name3] I am trying to validate user IDs matching the example: smith.jack or smith.jack.s In other words, any number of non-whitespace characters (except dot), followed by exactly one dot, followed by any number of non-whitespace characters (except dot), optionally followed by exactly one dot followed by any number of non-whitespace characters (except dot). I have come up with several variations that work fine except for allowing consecutive dots! For example, the following Regex ^([\S][^.]*[.]{1}[\S][^.]*|[\S][^.]*[.]{1}[\S][^.]*[.]{1}[\S][^.]*)$ matches "smith.jack" and "smith.jack.s" but also matches "smith..jack" and "smith..jack.s"! My gosh, it even likes a dot as a first character. It seems like it would be so simple to code, but it isn't. I am using .NET, btw. Frustrating. A: Does that help? /^[^\s\.]+(?:\.[^\s\.]+)*$/ or, in extended format, with comments (ruby-style) / ^ # start of line [^\s\.]+ # one or more non-space non-dot (?: # non-capturing group \. # dot something [^\s\.]+ # one or more non-space non-dot )* # zero or more times $ # end of line /x you're not clear on how many times you can have dot-something, but you can replace the * with {1,3} or something, to specify how many repetitions are allowed. I should probably make it clear that the slashes are the literal regex delimiter in ruby (and perl and js, etc). A: You are using the * duplication, which allows for 0 iterations of the given component. You should be using plus, and putting the final .[^.]+ into a group followed by ? to represent the possibility of an extra set. Might not have the perfect syntax, but something similar to the following should work. ^[^.\s]+[.][^.\s]+([.][^.\s]+)?$ Or in simple terms, any non-zero number of non-whitespace non-dot characters, followed by a dot, followed by any non-zero number of non-whitespace non-dot characters, optionally followed by a dot, followed by any non-zero number of non-whitespace non-dot characters. A: ^([^.\s]+)\.([^.\s]+)(?:\.([^.\s]+))?$ A: I'm not familiar with .NET's regexes. This will do what you want in Perl. /^\w+\.\w+(?:\.\w+)?$/ If .NET doesn't support the non-capturing (?:xxx) syntax, use this instead: /^\w+\.\w+(\.\w+)?$/ Note: I'm assuming that when you say "non-whitespace, non-dot" you really mean "word characters." A: I realise this has already been solved, but I find Regexpal extremely helpful for prototyping regexes. The site has a load of simple explanations of the basics and lets you see what matches as you adjust the expression. A: [^\s.]+\.[^\s.]+(\.[^\s.]+)? BTW what you asked for allows "." and ".." A: I think you'd benefit from using + which means "1 or more", instead of * meaning "any number including zero". A: (^.)+|(([^.]+)[.]([^.]+))+ But this would match x.y.z.a.b.c and from your description, I am not sure if this is sufficiently restrictive. BTW: feel free to modify if I made a silly mistake (I haven't used .NET, but have done plenty of regexes) A: [^.\s]+\.[^.\s]+(\.([^\s.]+?)? has an unmatched paren. If corrected to [^.\s]+\.[^.\s]+(\.([^\s.]+?))? it is still too liberal. Matches a.b. as well as a.b.c.d. and .a.b If corrected to [^.\s]+\.[^.\s]+(\.([^\s.]+?)?) it doesn't match a.b A: ^([^.\W]+)\.?([^.\W]+)\.?([^.\W]+)$ This should capture as described, group the parts of the ID and stop duplicate periods A: I took a slightly different approach. I figured you really just wanted a string of non-space characters followed by only one dot, but that dot is optional (for the last entry). Then you wanted this repeated. ^([^\s\.]+\.?)+$ Right now, this means you have to have at least one string of characters, e.g. 'smith' to match. You, of course, could limit it to only allow one to three repetitions with ^([^\s\.]+\.?){1,3}$ I hope that helps. A: RegexBuddy is a good (non-free) tool for regex stuff
Regex to match name1.name2[.name3]
I am trying to validate user IDs matching the example: smith.jack or smith.jack.s In other words, any number of non-whitespace characters (except dot), followed by exactly one dot, followed by any number of non-whitespace characters (except dot), optionally followed by exactly one dot followed by any number of non-whitespace characters (except dot). I have come up with several variations that work fine except for allowing consecutive dots! For example, the following Regex ^([\S][^.]*[.]{1}[\S][^.]*|[\S][^.]*[.]{1}[\S][^.]*[.]{1}[\S][^.]*)$ matches "smith.jack" and "smith.jack.s" but also matches "smith..jack" and "smith..jack.s"! My gosh, it even likes a dot as a first character. It seems like it would be so simple to code, but it isn't. I am using .NET, btw. Frustrating.
[ "that helps?\n/^[^\\s\\.]+(?:\\.[^\\s\\.]+)*$/\n\nor, in extended format, with comments (ruby-style)\n/\n ^ # start of line\n [^\\s\\.]+ # one or more non-space non-dot\n (?: # non-capturing group\n \\. # dot something\n [^\\s\\.]+ # one or more non-space non-dot\n )* # zero or more times\n $ # end of line\n/x\n\nyou're not clear on how many times you can have dot-something, but you can replace the * with {1,3} or something, to specify how many repetitions are allowed.\ni should probably make it clear that the slashes are the literal regex delimiter in ruby (and perl and js, etc).\n", "You are using the * duplication, which allows for 0 iterations of the given component.\nYou should be using plus, and putting the final .[^.]+ into a group followed by ? to represent the possibility of an extra set.\nMight not have the perfect syntax, but something similar to the following should work.\n^[^.\\s]+[.][^.\\s]+([.][^.\\s]+)?$\nOr in simple terms, any non-zero number of non-whitespace non-dot characters, followed by a dot, followed by any non-zero number of non-whitespace non-dot characters, optionally followed by a dot, followed by any non-zero number of non-whitespace non-dot characters.\n", "^([^.\\s]+)\\.([^.\\s]+)(?:\\.([^.\\s]+))?$\n", "I'm not familiar with .NET's regexes. This will do what you want in Perl.\n/^\\w+\\.\\w+(?:\\.\\w+)?$/\n\nIf .NET doesn't support the non-capturing (?:xxx) syntax, use this instead:\n/^\\w+\\.\\w+(\\.\\w+)?$/\n\nNote: I'm assuming that when you say \"non-whitespace, non-dot\" you really mean \"word characters.\"\n", "I realise this has already been solved, but I find Regexpal extremely helpful for prototyping regex's. The site has a load of simple explanations of the basics and lets you see what matches as you adjust the expression.\n", "[^\\s.]+\\.[^\\s.]+(\\.[^\\s.]+)?\n\nBTW what you asked for allows \".\" and \"..\"\n", "I think you'd benefit from using + which means \"1 or more\", instead of * meaning \"any number including zero\".\n", "(^.)+|(([^.]+)[.]([^.]+))+\n\nBut this would match x.y.z.a.b.c and from your description, I am not sure if this is sufficiently restrictive.\nBTW: feel free to modify if I made a silly mistake (I haven't used .NET, but have done plently of regexs)\n", "[^.\\s]+\\.[^.\\s]+(\\.([^\\s.]+?)? \n\nhas unmatched paren. If corrected to \n[^.\\s]+\\.[^.\\s]+(\\.([^\\s.]+?))?\n\nis still too liberal. Matches a.b. as well as a.b.c.d. and .a.b\nIf corrected to\n[^.\\s]+\\.[^.\\s]+(\\.([^\\s.]+?)?)\n\ndoesn't match a.b\n", "^([^.\\W]+)\\.?([^.\\W]+)\\.?([^.\\W]+)$\n\nThis should capture as described, group the parts of the id and stop duplicate periods\n", "I took a slightly different approach. I figured you really just wanted a string of non-space characters followed by only one dot, but that dot is optional (for the last entry). Then you wanted this repeated.\n^([^\\s\\.]+\\.?)+$\n\nRight now, this means you have to have at least one string of characters, e.g. 'smith' to match. You, of course could limit it to only allow one to three repetitions with\n^([^\\s\\.]+\\.?){1,3}$\n\nI hope that helps.\n", "RegexBuddy Is a good (non-free) tool for regex stuff\n" ]
[ 6, 2, 2, 2, 2, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "regex" ]
stackoverflow_0000087902_regex.txt
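A small .NET check distilled from the answers; the expected results in the comments cover the consecutive-dot and leading-dot cases from the question:

using System;
using System.Text.RegularExpressions;

class Program
{
    // Two mandatory dot-separated parts, plus one optional third part.
    static readonly Regex UserId =
        new Regex(@"^[^\s.]+\.[^\s.]+(\.[^\s.]+)?$", RegexOptions.Compiled);

    static void Main()
    {
        Console.WriteLine(UserId.IsMatch("smith.jack"));    // True
        Console.WriteLine(UserId.IsMatch("smith.jack.s"));  // True
        Console.WriteLine(UserId.IsMatch("smith..jack"));   // False
        Console.WriteLine(UserId.IsMatch(".smith.jack"));   // False
    }
}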
Q: Where can I find vim-enhanced resources? I recently installed vim-enhanced, but I can't find any article/tutorial related to it. All I could find is a page that briefly describes its new features, along with several RPMs to download. What exactly does it have to offer to scripting languages that regular vi/vim can't? Thanks A: According to this, vim-enhanced is just vim "with the perl, python, tcl, and cscope options compiled in." You should be able to find everything you need to know about these compile options in the documentation. A: If you're new to vim, then run vimtutor You may also want to start out by reading :help and learning how to use the help system. In particular :help topic (control-D) and :help topic (more useful if you have :set wildmenu) will help you find topics in vim's built-in help (which is notably superior to trying to Google for things). If all else fails there's also :helpgrep topic and then use the quickfix buffer to see the hits (:cn, :cp, :cl, etc). The #vim channel on Freenode IRC network is also a good place to get help.
Where can I find vim-enhanced resources?
I recently installed vim-enhanced, but I can't find any article/tutorial related to it. All I could find is a page that briefly describes its new features, along with several RPMs to download. What exactly does it have to offer to scripting languages that regular vi/vim can't? Thanks
[ "According to this, vim-enhanced is just vim \"with the perl, python, tcl, and cscope options compiled in.\" You should be able to find everything you need to know about these compile options in the documentation.\n", "If you're new to vim, then run vimtutor\nYou may also want to start out by reading :help and learning how to use the help system. In particular :help topic (control-D) and :help topic (more useful if you have :set wildmenu) will help you find topics in vim's built-in help (which is notably superior to trying to Google for things). If all else fails there's also :helpgrep topic and then use the quickfix buffer to see the hits (:cn, :cp, :cl, etc).\nThe #vim channel on Freenode IRC network is also a good place to get help.\n" ]
[ 5, 0 ]
[]
[]
[ "editor", "linux", "vi", "vim" ]
stackoverflow_0000074821_editor_linux_vi_vim.txt
Q: How to use system_user in audit trigger but still use connection pooling? I would like to do both of the following things: use audit triggers on my database tables to identify which user updated what; use connection pooling to improve performance For #1, I use 'system_user' in the database trigger to identify the user making the change, but this prevents me from doing #2, which requires a generic connection string. Is there a way that I can get the best of both of these worlds? ASP.NET/SQL Server 2005 A: Unfortunately, no. Identifying the user just from the database connection AND sharing database connections between users are mutually exclusive. A: Store the user from your web application in the database and let your triggers go off that stored data. It might even be better to let the web app handle writing all logging information to the database.
How to use system_user in audit trigger but still use connection pooling?
I would like to do both of the following things: use audit triggers on my database tables to identify which user updated what; use connection pooling to improve performance For #1, I use 'system_user' in the database trigger to identify the user making the change, but this prevents me from doing #2, which requires a generic connection string. Is there a way that I can get the best of both of these worlds? ASP.NET/SQL Server 2005
[ "Unfortunately, no. Identifying the user just from the database connection AND sharing database connections between users are mutually exclusive.\n", "Store the user from your web application in the database and let your triggers go off that stored data. It might even be better to let the web app handle writing all logging information to the database.\n" ]
[ 1, 1 ]
[]
[]
[ "asp.net", "connection", "pool", "sql", "sql_server" ]
stackoverflow_0000093848_asp.net_connection_pool_sql_sql_server.txt
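One concrete way to implement the second answer on SQL Server 2005 is CONTEXT_INFO: the application stamps the logical user onto the pooled connection before issuing DML, and the trigger reads that instead of system_user. This is a sketch; the table, trigger, and column names are invented:

-- Run by the application each time it takes a connection from the pool:
DECLARE @user VARBINARY(128);
SET @user = CAST('jsmith' AS VARBINARY(128));
SET CONTEXT_INFO @user;
GO

-- The audit trigger reads the stamped value instead of system_user:
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    DECLARE @appUser VARCHAR(128);
    -- Strip the zero-byte padding that CONTEXT_INFO carries.
    SET @appUser = REPLACE(CAST(CONTEXT_INFO() AS VARCHAR(128)), CHAR(0), '');

    INSERT INTO dbo.OrderAudit (OrderID, ChangedBy, ChangedAt)
    SELECT i.OrderID, @appUser, GETDATE()
    FROM inserted AS i;
END;

The catch is that every request must remember to issue SET CONTEXT_INFO, because the pool reuses physical connections across users.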
Q: Parser error when using ScriptManager I have an ASP.NET page which has a script manager on it. <form id="form1" runat="server"> <div> <asp:ScriptManager EnablePageMethods="true" ID="scriptManager2" runat="server"> </asp:ScriptManager> </div> </form> The page overrides an abstract property to return the ScriptManager in order to enable the base page to use it: public partial class ReportWebForm : ReportPageBase { protected override ScriptManager ScriptManager { get { return scriptManager2; } } ... } And the base page: public abstract class ReportPageBase : Page { protected abstract ScriptManager ScriptManager { get; } ... } When I run the project, I get the following parser error: Parser Error Message: The base class includes the field 'scriptManager2', but its type (System.Web.UI.ScriptManager) is not compatible with the type of control (System.Web.UI.ScriptManager). How can I solve this? Update: The script manager part of the designer file is: protected global::System.Web.UI.ScriptManager scriptManager; A: I can compile your code sample fine; you should check your designer file to make sure everything is ok. EDIT: the only other thing I can think of is that this is some sort of reference problem. Is your System.Web.Extensions reference using the correct version for your targeted framework? (should be 3.5.0.0 for .NET 3.5 and 1.0.6xxx for 2.0) A: I found out that my referenced System.Web.Extensions (v3.5.sth) library did not have the same version as the reference in web.config (v.1.0.6sth). Replacing the dll (3.5) with the old version of System.Web.Extensions solved the problem.
Parser error when using ScriptManager
I have an ASP.NET page which has a script manager on it. <form id="form1" runat="server"> <div> <asp:ScriptManager EnablePageMethods="true" ID="scriptManager2" runat="server"> </asp:ScriptManager> </div> </form> The page overrides an abstract property to return the ScriptManager in order to enable the base page to use it: public partial class ReportWebForm : ReportPageBase { protected override ScriptManager ScriptManager { get { return scriptManager2; } } ... } And the base page: public abstract class ReportPageBase : Page { protected abstract ScriptManager ScriptManager { get; } ... } When I run the project, I get the following parser error: Parser Error Message: The base class includes the field 'scriptManager2', but its type (System.Web.UI.ScriptManager) is not compatible with the type of control (System.Web.UI.ScriptManager). How can I solve this? Update: The script manager part of the designer file is: protected global::System.Web.UI.ScriptManager scriptManager;
[ "I can compile your code sample fine, you should check your designer file to make sure everything is ok.\nEDIT: the only other thing I can think of is that this is some sort of reference problem. Is your System.Web.Extensions reference using the correct version for your targeted framework? (should be 3.5.0.0 for .net 3.5 and 1.0.6xxx for 2.0)\n", "I found out that my referenced System.Web.Extensions (v3.5.sth) library did not have the same version with the reference in web.config (v.1.0.6sth). Replacing the dll (3.5) with the old version of System.Web.Extensions solved the problem.\n" ]
[ 5, 1 ]
[]
[]
[ "asp.net", "asp.net_ajax", "scriptmanager" ]
stackoverflow_0000094632_asp.net_asp.net_ajax_scriptmanager.txt
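The mismatch described in the accepted fix usually lives in web.config's assembly list. A sketch of the 3.5-era entry (the public key token shown is the standard one for System.Web.Extensions; verify it against your own machine's configuration):

<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <!-- Must match the referenced DLL: 3.5.0.0 here, not a leftover
             1.0.61025.0 entry from ASP.NET AJAX 1.0. -->
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      </assemblies>
    </compilation>
  </system.web>
</configuration>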
Q: Stateful Web Services I'm building a Java/Spring application, and I may need to incorporate a stateful web service call. Any opinions on whether I should totally run away from a stateful service call, or can it be done and is it enterprise-ready? A: Statefulness runs counter to the basic architecture of HTTP (ask Roy Fielding), and reduces scalability. A: Stateful web services are a pain to maintain. The mechanism I have seen for them is to have the first call return an id (basically a transaction id) that is used in subsequent calls. A problem with that is that the web service isn't really stateful so it has to load all the information that it needs from some other data store for each call.
Stateful Web Services
I'm building a Java/Spring application, and I may need to incorporate a stateful web service call. Any opinions on whether I should totally run away from a stateful service call, or can it be done and is it enterprise-ready?
[ "Statefulness runs counter to the basic architecture of HTTP (ask Roy Fielding), and reduces scalability.\n", "Stateful web services are a pain to maintain. The mechanism I have seen for them is to have the first call return an id (basically a transaction id) that is used in subsequent calls. A problem with that is that the web service isn't really stateful so it has to load all the information that it needs from some other data store for each call. \n" ]
[ 6, 5 ]
[]
[]
[ "stateful", "web_services" ]
stackoverflow_0000094660_stateful_web_services.txt
Q: Templates spread across multiple files C++ seems to be rather grouchy when declaring templates across multiple files. More specifically, when working with templated classes, the linker expects all method definitions for the class in a single compiled object file. When you take into account headers, other declarations, inheritance, etc., things get really messy. Are there any general tips or workarounds for organizing or redistributing templated member definitions across multiple files? A: Are there any general tips or workarounds for organizing or redistributing templated member definitions across multiple files? Yes; don't. The C++ spec permits a compiler to require that it be able to "see" the entire template (declaration and definition) at the point of instantiation, and (due to the complexities of any implementation) most compilers retain this requirement. The upshot is that #inclusion of any template header must also #include any and all source required to instantiate the template. The easiest way to deal with this is to dump everything into the header, inline where possible, out-of-line where necessary. If you really regard this as an unacceptable affront, a common option is to split the template into the usual header/implementation pair, and then #include the implementation file at the end of the header. C++'s "export" feature may or may not provide another workaround. The feature is poorly supported and poorly defined; although it in principle should permit some kind of separate compilation of templates, it doesn't necessarily obviate the demand that the compiler be able to see the entire template body. A: Across how many files? If you just want to separate class definitions from implementation then try this article in the C++ FAQs. That's about the only way I know of that works at the moment, but some IDEs (Eclipse CDT for example) won't link this method properly and you may get a lot of errors. However, writing your own makefiles or using Visual C++ has always worked for me :-) A: When/if your compiler supports C++0x, the extern keyword can be used to separate template declarations from definitions. See here for a brief explanation. Also, section 6.3, "The Separation Model," of C++ Templates: The Complete Guide by David Vandevoorde and Nicolai M. Josuttis describes other options.
Templates spread across multiple files
C++ seems to be rather grouchy when declaring templates across multiple files. More specifically, when working with templated classes, the linker expects all method definitions for the class in a single compiled object file. When you take into account headers, other declarations, inheritance, etc., things get really messy. Are there any general tips or workarounds for organizing or redistributing templated member definitions across multiple files?
[ "\nAre there any general advice or workarounds for organizing or redistributing templated member definitions across multiple files?\n\nYes; don't.\nThe C++ spec permits a compiler to be able to \"see\" the entire template (declaration and definition) at the point of instantiation, and (due to the complexities of any implementation) most compilers retain this requirement. The upshot is that #inclusion of any template header must also #include any and all source required to instantiate the template.\nThe easiest way to deal with this is to dump everything into the header, inline where posible, out-of-line where necessary.\nIf you really regard this as an unacceptable affront, a common option is to split the template into the usual header/implementation pair, and then #include the implementation file at the end of the header.\nC++'s \"export\" feature may or may not provide another workaround. The feature is poorly supported and poorly defined; although it in principle should permit some kind of separate compilation of templates, it doesn't necessarily obviate the demand that the compiler be able to see the entire template body.\n", "Across how many files? If you just want to separate class definitions from implementation then try this article in the C++ faqs. That's about the only way I know of that works at the moment, but some IDEs (Eclipse CDT for example) won't link this method properly and you may get a lot of errors. However, writing your own makefiles or using Visual C++ has always worked for me :-)\n", "When/if your compiler supports C++0x, the extern keyword can be used to separate template declarations from definitions.\nSee here for a brief explanation.\nAlso, section 6.3, \"The Separation Model,\" of C++ Templates: The Complete Guide by David Vandevoorde and Nicolai M. Josuttis describes other options.\n\n" ]
[ 27, 5, 3 ]
[]
[]
[ "c++", "templates" ]
stackoverflow_0000036039_c++_templates.txt
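A sketch of the header/implementation split described in the first answer, with the definitions pulled back into the header through a trailing #include (the file names are arbitrary):

// stack.h -- declarations only; the definitions are included at the
// bottom so every translation unit still sees the full template.
#ifndef STACK_H
#define STACK_H

#include <vector>

template <typename T>
class Stack {
public:
    void push(const T& value);
    T pop();
private:
    std::vector<T> items_;
};

#include "stack.tpp"  // out-of-line definitions, still header-visible

#endif

// stack.tpp -- only ever included by stack.h, never compiled directly.
template <typename T>
void Stack<T>::push(const T& value) {
    items_.push_back(value);
}

template <typename T>
T Stack<T>::pop() {
    T top = items_.back();
    items_.pop_back();
    return top;
}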
Q: What is the best method of inter-process communication between Java and .NET 3.5? A third-party application reads some Java code from an XML file, and runs it when a certain event happens. In Java, I want to tell a .NET 3.5 application, running on the same machine, that this event occurred. The total data transferred each time is probably a few characters. What is the best way of using Java to tell the .NET process that something happened? Java doesn't seem to support Named Pipes on Windows, .NET doesn't natively support memory-mapping, and any solution involving web services or RMI is overkill. A: If you don't want the full overhead of RMI, you could do direct socket communication between the two by opening ports and talking to each other. You'd have to have some way to have both processes agree on which ports to use, how to handshake/etc, but it would be simpler than RMI. ETA: it looks like you can use named pipes from Java, you just can't create them? So if the .NET process created it, you could read/write to it with Java. Java just sees it as a file?
What is the best method of inter-process communication between Java and .NET 3.5?
A third-party application reads some Java code from an XML file, and runs it when a certain event happens. In Java, I want to tell a .NET 3.5 application, running on the same machine, that this event occurred. The total data transferred each time is probably a few characters. What is the best way of using Java to tell the .NET process that something happened? Java doesn't seem to support Named Pipes on Windows, .NET doesn't natively support memory-mapping, and any solution involving web services or RMI is overkill.
[ "If you don't want the full overhead of RMI, you could do direct socket communcation between the two by opening ports and talking to eachother. You'd have to have some way to have both processes agree on which ports to use, how to handshake/etc, but would be simpler than RMI.\nETA: it looks like you can use named pipes from java, you just can't create them? So if the .NET process would create it, you could read/write to it with java. Java just sees it as a file?\n" ]
[ 4 ]
[]
[]
[ ".net", "c#", "ipc", "java" ]
stackoverflow_0000094882_.net_c#_ipc_java.txt
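A sketch of the named-pipe variant from the answer's ETA: .NET 3.5's System.IO.Pipes creates the pipe, and the Java side opens it as an ordinary file path. The pipe name and message format are assumptions.

using System;
using System.IO;
using System.IO.Pipes;   // System.Core.dll, new in .NET 3.5

class PipeListener
{
    static void Main()
    {
        // .NET owns the pipe; Java connects by opening the path
        // \\.\pipe\eventPipe as a plain file.
        using (var server = new NamedPipeServerStream("eventPipe", PipeDirection.In))
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            {
                string message = reader.ReadLine(); // a few characters per event
                Console.WriteLine("Event received: " + message);
            }
        }
    }
}

On the Java side, something like new FileWriter("\\\\.\\pipe\\eventPipe") followed by a write and a flush would deliver the event string.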
Q: How do I stub data for designers when using Expression Blend and Visual Studio? We are trying out Visual Studio 2008 and Expression Blend on a new project. The goal is to clearly define the roles of the developer and designer as separate, but reap the benefit of the developer being able to directly consume the XAML produced by the designer. For the most part this has worked great, and I really like the possibilities. One difficulty we have come across, though, is designing against data bindings. In many cases, the GUI does not populate rows or other data structures unless the application is run and a database call is made. Consequently the designer does not have access to the visual layout of the GUI. What I would like to do is somehow create some simple stubbed or mocked data that the designer can use to work on the design. The big goal is to have that stubbed data show up in Expression Blend, but then have the bindings applied to the real collection at runtime. Has anyone found a solid method of doing this? A: I would suggest reading this blog. The final method seems to work well, your test data shows up in Blend very nicely. Just keep in mind that you have to compile the DLL before it will display the data. A: I would look into creating XML data islands which emulate the structure of the objects you will eventually bind the UI to. This way your designer can bind the root element of the page (or user control, etc.) to the top level of your fake XML data island and all the relative paths will stay the same when you swap that data island out for the real DataContext binding. there will be some degree of refactoring to attach to the real object when you are ready, but that is why your developers should at least know enough XAML to know how to modify the bindings properly. it looks like the commenter above me has a link to an example of this.
How do I stub data for designers when using Expression Blend and Visual Studio?
We are trying out Visual Studio 2008 and Expression Blend on a new project. The goal is to clearly define the roles of the developer and designer as separate, but reap the benefit of the developer being able to directly consume the XAML produced by the designer. For the most part this has worked great, and I really like the possibilities. One difficulty we have come across, though, is designing against data bindings. In many cases, the GUI does not populate rows or other data structures unless the application is run and a database call is made. Consequently the designer does not have access to the visual layout of the GUI. What I would like to do is somehow create some simple stubbed or mocked data that the designer can use to work on the design. The big goal is to have that stubbed data show up in Expression Blend, but then have the bindings applied to the real collection at runtime. Has anyone found a solid method of doing this?
[ "I would suggest reading this blog. The final method seems to work well, your test data shows up in Blend very nicely. Just keep in mind that you have to compile the DLL before it will display the data.\n", "I would look into creating XML data islands which emulate the structure of the objects you will eventually bind the UI to. This way your designer can bind the root element of the page (or user control, etc.) to the top level of your fake XML data island and all the relative paths will stay the same when you swap that data island out for the real DataContext binding. \nthere will be some degree of refactoring to attach to the real object when you are ready, but that is why your developers should at least know enough XAML to know how to modify the bindings properly.\nit looks like the commenter above me has a link to an example of this.\n" ]
[ 4, 0 ]
[]
[]
[ "expression_blend", "visual_studio_2008", "wpf", "xaml" ]
stackoverflow_0000066486_expression_blend_visual_studio_2008_wpf_xaml.txt
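A sketch of the design-time resource technique both answers point at: a mock class exposing the same property names as the runtime source is instantiated in XAML so Blend has rows to render. The mocks namespace and class names are assumptions; at runtime the real collection is assigned to DataContext instead.

<!-- MockCustomers is an assumed class exposing an ObservableCollection
     named Rows, shaped like the real data source. -->
<UserControl.Resources>
    <mocks:MockCustomers x:Key="DesignCustomers" />
</UserControl.Resources>

<ListBox ItemsSource="{Binding Rows, Source={StaticResource DesignCustomers}}">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Name}" />
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>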
Q: XML Notepad 2007 breaks MS Access 2007 Help I tried scouring the web for help on this issue, but there are so many generic words in there that I couldn't find much of anything that was relevant. I have MS Office 2007 installed on Vista and later installed XML Notepad 2007 (also a Microsoft product). It seems that the MS Access help system is using some sort of XML format that XML Notepad took control of. Now, whenever I open help in Access, the little help window opens and instead of displaying content, attempts to download the content with XML Notepad. Grrr.... Is there a fix for this? A: Okay, I found the answer and I'm a little embarrassed by it. In fact, my question was pretty much off the mark. First, there is no involvement in this problem with XML Notepad 2007. It didn't hijack a file extension or make a registry entry or anything else like that. It's a great little program if you just want to open and examine an XML file. I use it kinda the same way I use Notepad for text files. I just want a quick look, and I don't need the weight (or wait) of a full IDE at the moment. What causes the help application to attempt to download a file called browse0.access.xml is being in offline mode. If you open up the table of contents, all the content is available except the home page, which must require an internet connection. To correct the issue, click the "offline" word in the lower right corner of the application and select "show content from Office Online". That should get it back to its normal state. A: Do a repair on your Office installation. That, or remove XML Notepad (it's not that good imho).
XML Notepad 2007 breaks MS Access 2007 Help
I tried scouring the web for help on this issue, but there are so many generic words in there that I couldn't find much of anything that was relevant. I have MS Office 2007 installed on Vista and later installed XML Notepad 2007 (also a Microsoft product). It seems that the MS Access help system is using some sort of XML format that XML Notepad took control of. Now, whenever I open help in Access, the little help window opens and instead of displaying content, attempts to download the content with XML Notepad. Grrr.... Is there a fix for this?
[ "Okay, I found the answer and I'm a little embarrassed by it. In fact, my question was pretty much off the mark.\nFirst, there is no involvement in this problem with XML Notepad 2007. It didn't hijack a file extension or make a registry entry or anything else like that. It's a great little program if you just want to open and examine an XML file. I use it kinda the same way I use notepad for text files. I just want a quick look and I don't need to weight or wait of a full ide at the moment.\nWhat causes the help application to attempt to download a file called browse0.access.xml, is to be in offline mode. If you open up the table of contents, all the content is available except the home page which must require an internet connection.\nTo correct the issue, click the \"offline\" word in the lower right corner of the application and select \"show content from Office Online\". That should get it back to it's normal state.\n", "Do a repair on your Office installation. That, or remove XMl Notepad (it's not that good imho).\n" ]
[ 2, 0 ]
[]
[]
[ "ms_office", "xml" ]
stackoverflow_0000085923_ms_office_xml.txt
Q: How to verify existence of a table with a given ID in a Word doc in C# VSTO 3 I want to check for the existence of a table with a given ID in a Word document in C# (VS 2008) Visual Studio Tools for Office (version 3). Obviously I can iterate through the document's Tables collection and check every ID, but this seems inefficient; the document will end up having a few dozen tables after I'm done with it, and while I know that's not a lot, looping through the collection seems sloppy. The Tables collection is only indexed by integer id, not by the string ID assigned to the table, so I can't just use an index, and there's no apparent Exists method of the document or tables collection. I thought of casting the Tables collection to an IQueryable using AsQueryable(), but I don't know how to go about doing this in such a way that I could query it by ID. Pointers to docs or sample code would be appreciated, or if there's a better way to go about it, I'm all for that, too. A: I don't think there's a better way to do it. Any solution including IQueryable would presumably need to iterate the collection internally so wouldn't be any faster. Performance is unlikely to be a problem anyway, so I wouldn't worry about the inefficiency. If you are doing it a lot, you could provide a wrapper that iterates once through the tables and generates a dictionary that you subsequently use.
How to verify existence of a table with a given ID in a Word doc in C# VSTO 3
I want to check for the existence of a table with a given ID in a Word document in C# (VS 2008) Visual Studio Tools for Office (version 3). Obviously I can iterate through the document's Tables collection and check every ID, but this seems inefficient; the document will end up having a few dozen tables after I'm done with it, and while I know that's not a lot, looping through the collection seems sloppy. The Tables collection is only indexed by integer id, not by the string ID assigned to the table, so I can't just use an index, and there's no apparent Exists method of the document or tables collection. I thought of casting the Tables collection to an IQueryable using AsQueryable(), but I don't know how to go about doing this in such a way that I could query it by ID. Pointers to docs or sample code would be appreciated, or if there's a better way to go about it, I'm all for that, too.
[ "I don't think there's a better way to do it. Any solution including IQueryable would presumably need to iterate the collection internally so wouldn't be any faster.\nPerformance is unlikely to be a problem anyway, so I wouldn't worry about the inefficiency.\nIf you are doing it a lot, you could provide a wrapper that iterates once through the tables and generates a dictionary that you subsequently use.\n" ]
[ 1 ]
[]
[]
[ "c#", "iqueryable", "ms_word", "officedev", "vsto" ]
stackoverflow_0000093506_c#_iqueryable_ms_word_officedev_vsto.txt
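A sketch of the dictionary wrapper the answer suggests: one pass over the Tables collection, then constant-time lookups by the string ID (the method and variable names are mine):

using System.Collections.Generic;
using Word = Microsoft.Office.Interop.Word;

static class TableLookup
{
    // Iterate once; afterwards existence checks are dictionary hits.
    public static Dictionary<string, Word.Table> BuildIndex(Word.Document doc)
    {
        var index = new Dictionary<string, Word.Table>();
        foreach (Word.Table table in doc.Tables)
        {
            if (!string.IsNullOrEmpty(table.ID))
                index[table.ID] = table;
        }
        return index;
    }
}

// Usage: bool exists = TableLookup.BuildIndex(doc).ContainsKey("summaryTable");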
Q: How do you prioritize multiple triggers of a table? I have a couple of triggers on a table that I want to keep separate and would like to prioritize them. I could have just one trigger and do the logic there, but I was wondering if there is an easier/more logical way of accomplishing this by having them fire in a pre-defined order? A: Use sp_settriggerorder. You can specify the first and last trigger to fire depending on the operation. sp_settriggerorder on MSDN From the above link: A. Setting the firing order for a DML trigger The following example specifies that trigger uSalesOrderHeader be the first trigger to fire after an UPDATE operation occurs on the Sales.SalesOrderHeader table. USE AdventureWorks; GO sp_settriggerorder @triggername= 'Sales.uSalesOrderHeader', @order='First', @stmttype = 'UPDATE'; B. Setting the firing order for a DDL trigger The following example specifies that trigger ddlDatabaseTriggerLog be the first trigger to fire after an ALTER_TABLE event occurs in the AdventureWorks database. USE AdventureWorks; GO sp_settriggerorder @triggername= 'ddlDatabaseTriggerLog', @order='First', @stmttype = 'ALTER_TABLE', @namespace = 'DATABASE'; A: See here. A: You can use sp_settriggerorder to define the order of each trigger on a table. However, I would argue that you'd be much better off having a single trigger that does multiple things. This is particularly so if the order is important, since that importance will not be very obvious if you have multiple triggers. Imagine someone trying to support the database months/years down the track. Of course there are likely to be cases where you need to have multiple triggers or it really is better design, but I'd start assuming you should have one and work from there. A: Remember, if you change the trigger order, someone else could come by later and rearrange it again. And where would you document what the trigger order should be so a maintenance developer knows not to mess with the order or things will break? If two trigger tasks definitely must be performed in a specific order, the only safe route is to put them in the same trigger.
How do you prioritize multiple triggers of a table?
I have a couple of triggers on a table that I want to keep separate and would like to prioritize them. I could have just one trigger and do the logic there, but I was wondering if there is an easier/more logical way of accomplishing this by having them fire in a pre-defined order?
[ "Use sp_settriggerorder. You can specify the first and last trigger to fire depending on the operation.\nsp_settriggerorder on MSDN\nFrom the above link:\nA. Setting the firing order for a DML trigger\nThe following example specifies that trigger uSalesOrderHeader be the first trigger to fire after an UPDATE operation occurs on the Sales.SalesOrderHeader table.\n\nUSE AdventureWorks;\nGO\nsp_settriggerorder \n @triggername= 'Sales.uSalesOrderHeader', \n @order='First', \n @stmttype = 'UPDATE';\n\nB. Setting the firing order for a DDL trigger\nThe following example specifies that trigger ddlDatabaseTriggerLog be the first trigger to fire after an ALTER_TABLE event occurs in the AdventureWorks database.\n\nUSE AdventureWorks;\nGO\nsp_settriggerorder \n @triggername= 'ddlDatabaseTriggerLog', \n @order='First', \n @stmttype = 'ALTER_TABLE', \n @namespace = 'DATABASE';\n\n", "See here.\n", "You can use sp_settriggerorder to define the order of each trigger on a table.\nHowever, I would argue that you'd be much better off having a single trigger that does multiple things. This is particularly so if the order is important, since that importance will not be very obvious if you have multiple triggers. Imagine someone trying to support the database months/years down the track. Of course there are likely to be cases where you need to have multiple triggers or it really is better design, but I'd start assuming you should have one and work from there.\n", "Rememebr if you change the trigger order, someone else could come by later and rearrange it again. And where would you document what the trigger order should be so a maintenance developer knows not to mess with the order or things will break? If two trigger tasks definitely must be performed in a specific order, the only safe route is to put them in the same trigger.\n" ]
[ 7, 1, 1, 1 ]
[]
[]
[ "sql", "sql_server_2005", "triggers", "tsql" ]
stackoverflow_0000094959_sql_sql_server_2005_triggers_tsql.txt
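Once sp_settriggerorder has been called, the current setting can be checked from metadata; a sketch using the trigger-order OBJECTPROPERTY flags, with the table name borrowed from the MSDN example:

-- Which UPDATE triggers on the table are pinned to fire first or last?
SELECT t.name,
       OBJECTPROPERTY(t.object_id, 'ExecIsFirstUpdateTrigger') AS fires_first,
       OBJECTPROPERTY(t.object_id, 'ExecIsLastUpdateTrigger')  AS fires_last
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('Sales.SalesOrderHeader');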
Q: Regex to replace Boolean with bool I am working on a C++ code base that was recently moved from X/Motif to Qt. I am trying to write a Perl script that will replace all occurrences of Boolean (from X) with bool. The script just does a simple replacement. s/\bBoolean\b/bool/g There are a few conditions. 1) We have CORBA in our code and \b matches CORBA::Boolean which should not be changed. 2) It should not match if it was found as a string (i.e. "Boolean") Updated: For #1, I used lookbehind s/(?<!:)\bBoolean\b/bool/g; For #2, I used lookahead. s/(?<!:)\bBoolean\b(?!")/bool/g This will most likely work for my situation but how about the following improvements? 3) Do not match if in the middle of a string (thanks nohat). 4) Do not match if in a comment. (// or /**/) A: s/[^:]\bBoolean\b(?!")/bool/g This does not match when Boolean is at the beginning of the line, because [^:] must match a character that is not ":". A: Watch out with that quote-matching lookahead assertion. That'll only exclude a match if Boolean is the last part of a string, not in the middle of the string. You'll need to match an even number of quote marks preceding the match if you want to be sure you're not in a string (assuming no multi-line strings and no escaped embedded quote marks). A: s/[^:]\bBoolean\b[^"]/bool/g Edit: Rats, beaten again. +1 for beating me, good sir. A: #define Boolean bool Let the preprocessor take care of this. Every time you see a Boolean you can either manually fix it or hope a regex doesn't make a mistake. Depending on how many macros you use, you could dump the output of cpp. A: To fix condition 1 try: s/[^:]\bBoolean\b(?!")/bool/g The [^:] says to match any character other than ":". A: 3) Do not match if in the middle of a string (thanks nohat). You can perhaps write a regex to check ".*Boolean.*". But what if you have a quote (") inside the string? Then you have more work to account for the escaped (\") pattern. 4) Do not match if in a comment. (// or /* */) For '//', you can have a regex to exclude //.* But it could be better to first run a regex to match the whole line for // comments ((.*)(//.*)) and then apply the replacement only on $1 (the first matching pattern). For /* */, it is more complex, as this is a multiline pattern. One approach can be to first run the whole of your code through a match for multiline comments and then take out only the parts not matching ... something like ... (.*)(/*.**/)(.*). But the actual regex would be even more complex, as you would have not one but many multi-line comments. Now, what if you have /* or */ inside a // block? (I don't know why you would have it, but Murphy's law says that you can have it.) There is obviously some way out, but my idea is to emphasize how bad-looking the regex will become. My suggestion here would be to use some lexical tool for C++ and replace the token Boolean with bool. Your thoughts? A: In order to avoid writing a full C parser in Perl, you're trying to strike a balance. Depending on how much needs changing, I would be inclined to do something like a very restrictive s/// and then anything that still matches /Boolean/ gets written to an exception file for human decision making. That way you're not trying to parse the C code's mid-line strings, multi-line comments, conditionally compiled-out text, etc. that could be present. A: … … Do not match if in the middle of a string (thanks nohat). Do not match if in a comment. (// or /**/) No can do with a simple regex.
For that, you need to actually look at every single character left-to-right and decide what kind of thing it is, at least well enough to tell apart comments from multi-line comments from strings from other stuff, and then you need to see if the “other stuff” part contains things you want to change. Now, I don’t know the exact syntactical rules for comments and strings in C++ so the following is going to be imprecise and completely undebugged, but it’ll give you an idea of the complexity you’re up against. my $line_comment = qr! (?> // .* \n? ) !x; my $multiline_comment = qr! (?> /\* [^*]* (?: \* (?: [^/*] [^*]* )? )* \*/ ) !x; my $string = qr! (?> " [^"\\]* (?: \\ . [^"\\]* )* " ) !x; my $boolean_type = qr! (?<!:) \b Boolean \b !x; $code =~ s{ \G ( $line_comment | $multiline_comment | $string | ( $boolean_type ) | . ) }{ defined $2 ? 'bool' : $1 }gex; Please don’t ask me to explain this in all its intricacies, it would take me a day and another. Just buy and read Jeff Friedl’s Mastering Regular Expressions if you want to understand exactly what is going on here. A: The "'Boolean' in the middle of a string" part sounds a bit unlikely, I'd check first if there is any occurrence of it in the code with something like m/"[^"]*Boolean[^"]*"/ And if there is none or a few, just ignore that case.
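If only the two lookaround conditions and // comments matter in practice, a small hedged Perl filter is enough; it is deliberately naive (a // inside a string literal is mistaken for a comment, and /* */ blocks are not handled), and the file names in the usage line are placeholders.

#!/usr/bin/perl
use strict;
use warnings;

# Split each line at the first //, substitute only in the code part,
# then print both parts back out.
while (my $line = <>) {
    my $comment = '';
    my $i = index $line, '//';
    if ($i >= 0) {
        $comment = substr $line, $i;
        $line    = substr $line, 0, $i;
    }
    $line =~ s/(?<!:)\bBoolean\b(?!")/bool/g;
    print $line, $comment;
}

Run it as a filter, e.g. perl fixbool.pl old.cpp > new.cpp (the names are illustrative).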
Regex to replace Boolean with bool
I am working on a C++ code base that was recently moved from X/Motif to Qt. I am trying to write a Perl script that will replace all occurrences of Boolean (from X) with bool. The script just does a simple replacement. s/\bBoolean\b/bool/g There are a few conditions. 1) We have CORBA in our code and \b matches CORBA::Boolean which should not be changed. 2) It should not match if it was found as a string (i.e. "Boolean") Updated: For #1, I used lookbehind s/(?<!:)\bBoolean\b/bool/g; For #2, I used lookahead. s/(?<!:)\bBoolean\b(?!")/bool/g This will most likely work for my situation but how about the following improvements? 3) Do not match if in the middle of a string (thanks nohat). 4) Do not match if in a comment. (// or /**/)
[ "\ns/[^:]\\bBoolean\\b(?!\")/bool/g\n\nThis does not match strings where Boolean is at that the beginning of the line becuase [^:] is \"match a character that is not :\".\n", "Watch out with that quote-matching lookahead assertion. That'll only match if Boolean is the last part of a string, but not in the middle of the string. You'll need to match an even number of quote marks preceding the match if you want to be sure you're not in a string (assuming no multi-line strings and no escaped embedded quote marks).\n", "s/[^:]\\bBoolean\\b[^\"]/bool/g\n\nEdit: Rats, beaten again. +1 for beating me, good sir.\n", "#define Boolean bool\n\nLet the preprocesser take care of this. Every time you see a Boolean you can either manually fix it or hope a regex doesn't make a mistake. Depending on how many macros you use you can you could dump the out of cpp.\n", "To fix condition 1 try:\ns/[^:]\\bBoolean\\b(?!\")/bool/g\n\nThe [^:] says to match any character other than \":\". \n", "\n3) Do not match if in the middle of a string (thanks nohat).\n\nYou can perhaps write a reg ex to check \".*Boolean.*\". But what if you have quote(\") inside the string? So, you have more work to not exclude (\\\") pattern. \n\n4) Do not match if in a comment. (// or /* */)\n\nFor '//', you can have a regex to exclude //.* But, better could be to first put a regex to compare the whole line for the // comments ((.*)(//.*)) and then apply replacement only on $1 (first matching pattern).\nFor /* */, it is more complex as this is multiline pattern. One approach can be to first run whole of you code to match multiline comments and then take out only the parts not matching ... something like ... (.*)(/*.**/)(.*). But, the actual regex would be even more complex as you would have not one but more of multi-line comments.\nNow, what if you have /* or */ inside // block? (I dont know why would you have it.. but Murphy's law says that you can have it). There is obviously some way out but my idea is to emphasize how bad-looking the regex will become.\nMy suggestion here would be to use some lexical tool for C++ and replace the token Boolean with bool. Your thoughts?\n", "In order to avoid writing a full C parser in perl, you're trying to strike a balance. Depending on how much needs changing, I would be inclined to do something like a very restrictive s/// and then anything that still matches /Boolean/ gets written to an exception file for human decision making. That way you're not trying to parse the C middle strings, multi-line comment, conditional compiled out text, etc. that could be present.\n", "\n\n…\n…\nDo not match if in the middle of a string (thanks nohat).\nDo not match if in a comment. (// or /**/)\n\n\nNo can do with a simple regex. For that, you need to actually look at every single character left-to-right and decide what kind of thing it is, at least well enough to tell apart comments from multi-line comments from strings from other stuff, and then you need to see if the “other stuff” part contains things you want to change.\nNow, I don’t know the exact syntactical rules for comments and strings in C++ so the following is going to be imprecise and completely undebugged, but it’ll give you an idea of the complexity you’re up against.\nmy $line_comment = qr! (?> // .* \\n? ) !x;\nmy $multiline_comment = qr! (?> /\\* [^*]* (?: \\* (?: [^/*] [^*]* )? )* )* \\*/ ) !x;\nmy $string = qr! (?> \" [^\"\\\\]* (?: \\\\ . [^\"\\\\]* )* \" ) !x;\nmy $boolean_type = qr! 
(?<!:) \\b Boolean \\b !x;\n\n$code =~ s{ \\G (\n $line_comment\n | $multiline_comment\n | $string\n | ( $boolean_type )\n | .\n) }{\n defined $2 ? 'bool' : $1\n}gex;\n\nPlease don’t ask me to explain this in all its intricacies, it would take me a day and another. Just buy and read Jeff Friedl’s Mastering Regular Expressions if you want to understand exactly what is going on here.\n", "The \"'Boolean' in the middle of a string\" part sounds a bit unlikely, I'd check first if there is any occurrence of it in the code with something like\nm/\"[^\"]*Boolean[^\"]*\"/\n\nAnd if there is none or a few, just ignore that case.\n" ]
[ 3, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "perl", "regex" ]
stackoverflow_0000035178_perl_regex.txt
Q: Method to generate pdf from access+vb6 or just sql 2005? The setup: Multiple computers using an adp file to access a sql 2005 database. Most don't have a pdf distiller. An access form (plain form, not crystal) is created that needs to be saved as a pdf. The only way I can think of is to send a request from access to the sql server for a web page. Something like: "http://sqlserver/generatepdf.php?id=123" I'm trying to avoid the web page 'middle man'. Is there a way to generate the pdf in T-SQL? Anyone have any other ideas? I'm not looking for code, just methodology ideas. Thank you A: Save the form as a report, then use Access MVP Stephen Lebans' free A2000ReportToPDF utility to convert it to a pdf file. http://www.lebans.com/reporttopdf.htm If they have Access 2007 they can download and install the free Microsoft Office 2007 Add-in to save documents as PDF or XPS. http://www.microsoft.com/downloads/details.aspx?FamilyId=4D951911-3E7E-4AE6-B059-A2E79ED87041&displaylang=en A: Microsoft's ReportViewer client can generate pdfs natively. It works inside of web pages and windows forms/wpf apps. You can programmatically trigger the export as well. The only downside is that you'll need to basically redo your form as a report. A: I must admit that I did not get it: you want to export an Access form and its data into a PDF file? Your form is basically graphics, not text, nor a report. Do you mean that you want this form to be included as (for example) a .png file inside a PDF file or do you want it to be a full PDF file inheriting objects from the original form and allowing things such as text search and so on?
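If the Access 2007 route from the first answer is available, the save-as-PDF add-in plugs into the ordinary OutputTo interface, so exporting the form saved as a report becomes one call. A hedged VBA sketch follows; the report name and output path are placeholders, and acFormatPDF only exists once the add-in is installed.

' Export a report as PDF via the Office 2007 "Save as PDF or XPS" add-in.
' "rptMyForm" and the output path are placeholders.
Public Sub ExportReportAsPdf()
    DoCmd.OutputTo acOutputReport, "rptMyForm", acFormatPDF, _
        "C:\Reports\MyForm.pdf", False
End Sub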
Method to generate pdf from access+vb6 or just sql 2005?
The setup: Multiple computers using an adp file to access a sql 2005 database. Most don't have a pdf distiller. An access form (plain form, not crystal) is created that needs to be saved as a pdf. The only way I can think of is to send a request from access to the sql server for a web page. Something like: "http://sqlserver/generatepdf.php?id=123" I'm trying to avoid the web page 'middle man'. Is there a way to generate the pdf in T-SQL? Anyone have any other ideas? I'm not looking for code, just methodology ideas. Thank you
[ "Save the form as a report, then use Access MVP Stephen Lebans free A2000ReportToPDF utility to convert it to a pdf file.\nhttp://www.lebans.com/reporttopdf.htm\nIf they have Access 2007 they can download and install the free Microsoft Office 2007 Add-in to save documents as PDF or XPS.\nhttp://www.microsoft.com/downloads/details.aspx?FamilyId=4D951911-3E7E-4AE6-B059-A2E79ED87041&displaylang=en\n", "Microsoft's ReportViewer client can generate pdfs natively.\nIt works inside of web pages and windows forms/wpf apps. You can programmatically trigger the export as well. The only downside is that you'll need to basically redo your form as a report.\n", "I must admit that I did not get it: you want to export an Access form and its data into a PDF file? Your form is basically graphics, not text, nor report. Do you mean that you want this form to be included as (for example) a .png file inside a PDF file or do you want it to be a full PDF file inheriting objects from the original form and allowing things such as text search and so on?\n" ]
[ 2, 0, 0 ]
[]
[]
[ "adp", "ms_access", "pdf", "sql_server_2005" ]
stackoverflow_0000094707_adp_ms_access_pdf_sql_server_2005.txt
Q: How do I shrink an array in Perl? How do I make an array shorter in Perl? I read some webpages indicating that I can assign: $#ARRAY = 42; I read that the use of $# is deprecated. I need a solution that will work for an array of arrays, too. This didn't work: $#$ARRAY[$i] = 42; A: I'm not aware of assigning $#ARRAY being deprecated; perldoc perldata from 5.10.0 certainly says nothing about it. It is the fastest way to truncate an array. If you want something a little more readable, use splice: splice @ARRAY, 43; (Note 43 instead of 42 - $#ARRAY gets you the last index of the array, whereas splice takes the length of the array instead). As for working on arrays of arrays, I assume you mean being able to truncate a nested array via a reference? In that case, you want: $#{$ARRAY->[7]} = 42; or splice @{$ARRAY->[7]}, 43; A: Your options are near limitless (I've outlined five approaches here) but your strategy will be dictated by exactly what your specific needs and goals are. (all examples will convert @array to have no more than $N elements) [EDIT] As others have pointed out, the way suggested in the original question is actually not deprecated, and it provides the fastest, tersest, but not necessarily the most readable solution. It also has the side effect of expanding an array of fewer than $N elements with empty elements: $#array = $N-1; Least code: #best for trimming down large arrays into small arrays @array = @array[0..($N-1)]; Most efficient for trimming a small number off of a large array: #This is a little less expensive and clearer splice(@array, $N); Undesirable in almost all cases, unless you really love delete(): #this is the worst solution yet because it requires resizing after the delete while($N-1 < $#array) { delete $array[$#array]; } Useful if you need the remainder of the list in reverse order: #this is better than deleting because there is no resize while($N-1 < $#array) { pop @array; #or, "push @array2, pop @array;" for the reverse order remainder } Useful for saving time in the long run: #don't put more values into the array than you actually want A: The $# variable is deprecated, but the $#array feature is not. To use the $#array syntax on an arbitrary expression that yields an array reference, do $#{ EXPR }. See the invaluable: http://perlmonks.org/?node=References+quick+reference A: You essentially gave the canonical answer yourself. You shorten an array by setting the last index: $#Array = 42 The $#Foo notation for denoting the last index in the array is absolutely not deprecated. Similarly, assigning to it will not be deprecated either. Quoting the perldata documentation: The length of an array is a scalar value. You may find the length of array @days by evaluating $#days, as in csh. However, this isn’t the length of the array; it’s the subscript of the last element, which is a different value since there is ordinarily a 0th element. Assigning to $#days actually changes the length of the array. Shortening an array this way destroys intervening values. Lengthening an array that was previously shortened does not recover values that were in those elements. (It used to do so in Perl 4, but we had to break this to make sure destructors were called when expected.) A: $#array is the last index of the array. $#$array would be the last index of an array pointed at by $array. $#$array[$i] means you're trying to index a scalar--can't be done. $#{$array[3]} properly resolves the subscripting of the main array before we try to reference the last index.
Used alone, $#{$array[3]} = 9; sets the last index of the autovivified array at $array[3] to 9 (i.e. a length of 10). When in doubt, use Data::Dumper: use Data::Dumper; $#{$array[3]} = 5; $#array = 10; print Dumper( \@array, $array ), "\n"; A: $#{$ARRAY[$i]} = 42; A: You could do splice @array, $length; #or splice @{$arrays[$i]}, $length; A: There are two ways of interpreting the question. How to reduce the length of the array? How to reduce the amount of memory consumed by the array? Most of the answers so far focus on the former. In my view, the best answer to that is the splice function. For example, to remove 10 elements from the end: splice @array, -10; However, because of how Perl manages memory for arrays, the only way to ensure that an array takes less memory is to copy it to a new array (and let the memory of the old array be reclaimed). For this, I would tend to think about using a slice operation. E.g., to remove 10 elements: @new = @old[ 0 .. $#old - 10 ] Here's a comparison of different approaches for a 500 element array (using 2104 bytes): original: length 500 => size 2104 pound: length 490 => size 2208 splice: length 490 => size 2104 delete: length 490 => size 2104 slice: length 490 => size 2064 You can see that only the slice operation (copied to a new array) has a smaller size than the original. Here's the code I used for this analysis: use strict; use warnings; use 5.010; use Devel::Size qw/size/; my @original = (1 .. 500); show( 'original', \@original ); my @pound = @original; $#pound = $#pound - 10; show( 'pound', \@pound ); my @splice = @original; splice(@splice,-10); show( 'splice', \@splice); my @delete = @original; delete @delete[ -10 .. -1 ]; show( 'delete', \@delete ); my @slice = @original[0 .. $#original - 10]; show( 'slice', \@slice); sub show { my ($name, $ref) = @_; printf( "%10s: length %4d => size %d\n", $name, scalar @$ref, size($ref)); }
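Because the question's failing attempt $#$ARRAY[$i] = 42 differs from the working form only in bracing, a tiny runnable sketch may help; the data and the cut-off of three elements are invented for illustration.

#!/usr/bin/perl
use strict;
use warnings;

my @aoa = ([1 .. 10], [1 .. 10]);   # array of array references

$#{ $aoa[0] } = 2;        # keep indexes 0..2, i.e. the first 3 elements
splice @{ $aoa[1] }, 3;   # equivalent truncation of the second inner array

print scalar @{ $aoa[0] }, " ", scalar @{ $aoa[1] }, "\n";   # prints: 3 3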
How do I shrink an array in Perl?
How do I make an array shorter in Perl? I read some webpages indicating that I can assign: $#ARRAY = 42; I read that the use of $# is deprecated. I need a solution that will work for an array of arrays, too. This didn't work: $#$ARRAY[$i] = 42;
[ "I'm not aware of assigning $#ARRAY being deprecated; perldoc perldata from 5.10.0 certainly says nothing about it. It is the fastest way to truncate an array.\nIf you want something a little more readable, use splice:\nsplice @ARRAY, 43;\n\n(Note 43 instead of 42 - $#ARRAY gets you the last index of the array, whereas splice taks the length of the array instead).\nAs for working on arrays of arrays, I assume you mean being able to truncate a nested array via a reference? In that case, you want:\n$#{$ARRAY->[7]} = 42;\n\nor\nsplice @{$ARRAY->[7]}, 43;\n\n", "Your options are near limitless (I've outlined five approaches here) but your strategy will be dictated by exactly what your specific needs and goals are. (all examples will convert @array to have no more than $N elements)\n\n[EDIT]\nAs others have pointed out, the way suggested in the original question is actually not deprecated, and it provides the fastest, tersest, but not necessarily the most readable solution. It also has the side effect of expanding an array of fewer than $N elements with empty elements:\n$#array = $N-1;\n\n\nLeast code: \n#best for trimming down large arrays into small arrays\n@array = $array[0..($N-1)];\n\nMost efficient for trimming a small number off of a large array:\n#This is a little less expensive and clearer\nsplice(@array, $n, @#array);\n\nUndesirable in almost all cases, unless you really love delete():\n#this is the worst solution yet because it requires resizing after the delete\nwhile($N-1 < $#array)\n{\n delete(array[$i]);\n}\n\nUseful if you need the remainder of the list in reverse order:\n#this is better than deleting because there is no resize\nwhile($N-1 < $#array)\n{\n pop @array;\n #or, \"push $array2, pop @array;\" for the reverse order remainder\n}\n\nUseful for saving time in long run:\n#don't put more values into the array than you actually want\n\n", "The $# variable is deprecated, but the $#array feature is not.\nTo use the $#array syntax on an arbitrary expression that yields an array reference, do $#{ EXPR }.\nSee the invaluable: http://perlmonks.org/?node=References+quick+reference\n", "You essentially gave the canonical answer yourself. You shorten an array by setting the last index:\n$#Array = 42\n\nThe $#Foo notation for denoting the last index in the array is absolutely not deprecated. Similarly, assigning to it will not be deprecated either. Quoting the perldata documentation:\n\nThe length of an array is a scalar value. You may find the length of\n array @days by evaluating $#days, as in csh. However, this isn’t the\n length of the array; it’s the subscript of the last element, which is a\n different value since there is ordinarily a 0th element. Assigning to\n $#days actually changes the length of the array. Shortening an array\n this way destroys intervening values. Lengthening an array that was\n previously shortened does not recover values that were in those\n elements. (It used to do so in Perl 4, but we had to break this to\n make sure destructors were called when expected.)\n\n", "\n$#array is the last index of the array. \n$#$array would be the last index of an array pointed at by $array. \n$#$array[$i] means you're trying to index a scalar--can't be done. $#{$array[3]} properly resolves the subscripting of the main array before we try to reference the last index. 
\nUsed alone \n$#{$array[3]} = 9;\nassigns a length of 9 to the autovivified array at $array[3].\nWhen in doubt, use Data::Dumper:\nuse Data::Dumper;\n$#{$array[3]} = 5;\n$#array = 10;\nprint Dumper( \\@array, $array ), \"\\n\";\n\n\n", "$#{$ARRAY[$i]} = 42;\n", "You could do\nsplice @array, $length;\n#or\nsplice @{$arrays[$i]}, $length;\n\n", "There are two ways of interpreting the question.\n\nHow to reduce the length of the array?\nHow to reduce the amount of memory consumed by the array?\n\nMost of the answers so far focus on the former. In my view, the best answer to that is the splice function. For example, to remove 10 elements from the end:\nsplice @array, -10;\n\nHowever, because of how Perl manages memory for arrays, the only way to ensure that an array takes less memory is to copy it to a new array (and let the memory of the old array be reclaimed). For this, I would tend to think about using a slice operation. E.g., to remove 10 elements:\n@new = @old[ 0 .. $#old - 10 ]\n\nHere's a comparison of different approaches for a 500 element array (using 2104 bytes):\n original: length 500 => size 2104\n pound: length 490 => size 2208\n splice: length 490 => size 2104\n delete: length 490 => size 2104\n slice: length 490 => size 2064\n\nYou can see that only the slice operation (copied to a new array) has a smaller size than the original.\nHere's the code I used for this analysis:\nuse strict;\nuse warnings;\nuse 5.010;\nuse Devel::Size qw/size/;\n\nmy @original = (1 .. 500);\nshow( 'original', \\@original );\n\nmy @pound = @original;\n$#pound = $#pound - 10;\nshow( 'pound', \\@pound );\n\nmy @splice = @original;\nsplice(@splice,-10);\nshow( 'splice', \\@splice);\n\nmy @delete = @original;\ndelete @delete[ -10 .. -1 ];\nshow( 'delete', \\@delete );\n\nmy @slice = @original[0 .. $#original - 10];\nshow( 'slice', \\@slice);\n\nsub show {\n my ($name, $ref) = @_;\n printf( \"%10s: length %4d => size %d\\n\", $name, scalar @$ref, size($ref));\n}\n\n" ]
[ 20, 10, 7, 5, 2, 0, 0, 0 ]
[]
[]
[ "arrays", "perl" ]
stackoverflow_0000092847_arrays_perl.txt
Q: Best container for double-indexing What is the best way (in C++) to set up a container allowing for double-indexing? Specifically, I have a list of objects, each indexed by a key (possibly multiple per key). This implies a multimap. The problem with this, however, is that it means a possibly worse-than-linear lookup to find the location of an object. I'd rather avoid duplication of data, so having each object maintain its own coordinate and have to move itself in the map would be bad (not to mention that moving your own object may indirectly call your destructor whilst in a member function!). I would rather have some container that maintains an index both by object pointer and coordinate, and that the objects themselves guarantee stable references/pointers. Then each object could store an iterator to the index (including the coordinate), sufficiently abstracted, and know where it is. Boost.MultiIndex seems like the best idea, but it's very scary and I don't want my actual objects to need to be const. What would you recommend? EDIT: Boost Bimap seems nice, but does it provide stable indexing? That is, if I change the coordinate, references to other elements must remain valid. The reason I want to use pointers for indexing is because objects have otherwise no intrinsic ordering, and a pointer can remain constant while the object changes (allowing its use in a Boost MultiIndex, which, IIRC, does provide stable indexing). A: I'm making several assumptions based on your writeup: Keys are cheap to copy and compare There should be only one copy of the object in the system The same key may refer to many objects, but only one key corresponds to a given object (one-to-many) You want to be able to efficiently look up which objects correspond to a given key, and which key corresponds to a given object I'd suggest: Use a linked list or some other container to maintain a global list of all objects in the system. The objects are allocated on the linked list. Create one std::multimap<Key, Object *> that maps keys to object pointers, pointing to the single canonical location in the linked list. Do one of: Create one std::map<Object *, Key> that allows looking up the key attached to a particular object. Make sure your code updates this map when the key is changed. (This could also be a std::multimap if you need a many-to-many relationship.) Add a member variable to the Object that contains the current Key (allowing O(1) lookups). Make sure your code updates this variable when the key is changed. Since your writeup mentioned "coordinates" as the keys, you might also be interested in reading the suggestions at Fastest way to find if a 3D coordinate is already used. A: It's difficult to understand what exactly you are doing with it, but it seems like boost bimap is what you want. It's basically boost multi-index except for a specific use case, and easier to use. It allows fast lookup based on the first element or the second element. Why are you looking up the location of an object in a map by its address? Use the abstraction and let it do all the work for you. Just a note: iteration over all elements in a map is O(N) so it would be guaranteed O(N) (not worse) to look up the way you are thinking of doing it.
Something like this may get you going: #include <map> #include <utility> #include <stdexcept> #include <boost/shared_ptr.hpp> template<typename T, typename K1, typename K2> class MyBiMap { public: typedef boost::shared_ptr<T> ptr_type; void insert(const ptr_type& value, const K1& key1, const K2& key2) { _map1.insert(std::make_pair(key1, value)); _map2.insert(std::make_pair(key2, value)); } ptr_type find1(const K1& key) { typename std::map<K1, ptr_type >::const_iterator itr = _map1.find(key); if (itr == _map1.end()) throw std::runtime_error("Unable to find key"); return itr->second; } ptr_type find2(const K2& key) { typename std::map<K2, ptr_type >::const_iterator itr = _map2.find(key); if (itr == _map2.end()) throw std::runtime_error("Unable to find key"); return itr->second; } private: std::map<K1, ptr_type > _map1; std::map<K2, ptr_type > _map2; }; Edit: I just noticed the multimap requirement, this still expresses the idea so I'll leave it.
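Since the question calls Boost.MultiIndex scary, a minimal hedged sketch of its shape may help; the element type and field names are invented, only the coordinate participates in the index (so the payload stays freely mutable), and key changes go through the container so the index stays consistent. Iterators and references into the container remain stable while other elements are inserted or erased.

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <string>

namespace bmi = boost::multi_index;

struct Item {
    int         coord;    // the indexed key (hypothetical field)
    std::string payload;  // not indexed, freely mutable
};

typedef boost::multi_index_container<
    Item,
    bmi::indexed_by<
        bmi::ordered_non_unique< bmi::member<Item, int, &Item::coord> >
        // further indexes (e.g. keyed on the object's address) go here
    >
> ItemSet;

// Changing the key must go through the container so it can re-index;
// the iterator itself stays valid across replace().
void move_item(ItemSet& s, ItemSet::iterator it, int new_coord)
{
    Item updated = *it;
    updated.coord = new_coord;
    s.replace(it, updated);
}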
Best container for double-indexing
What is the best way (in C++) to set up a container allowing for double-indexing? Specifically, I have a list of objects, each indexed by a key (possibly multiple per key). This implies a multimap. The problem with this, however, is that it means a possibly worse-than-linear lookup to find the location of an object. I'd rather avoid duplication of data, so having each object maintain its own coordinate and have to move itself in the map would be bad (not to mention that moving your own object may indirectly call your destructor whilst in a member function!). I would rather have some container that maintains an index both by object pointer and coordinate, and that the objects themselves guarantee stable references/pointers. Then each object could store an iterator to the index (including the coordinate), sufficiently abstracted, and know where it is. Boost.MultiIndex seems like the best idea, but it's very scary and I don't want my actual objects to need to be const. What would you recommend? EDIT: Boost Bimap seems nice, but does it provide stable indexing? That is, if I change the coordinate, references to other elements must remain valid. The reason I want to use pointers for indexing is because objects have otherwise no intrinsic ordering, and a pointer can remain constant while the object changes (allowing its use in a Boost MultiIndex, which, IIRC, does provide stable indexing).
[ "I'm making several assumptions based on your writeup:\n\nKeys are cheap to copy and compare\nThere should be only one copy of the object in the system\nThe same key may refer to many objects, but only one object corresponds to a given key (one-to-many)\nYou want to be able to efficiently look up which objects correspond to a given key, and which key corresponds to a given object\n\nI'd suggest:\n\nUse a linked list or some other container to maintain a global list of all objects in the system. The objects are allocated on the linked list.\nCreate one std::multimap<Key, Object *> that maps keys to object pointers, pointing to the single canonical location in the linked list.\nDo one of:\n\n\nCreate one std::map<Object *, Key> that allows looking up the key attached to a particular object. Make sure your code updates this map when the key is changed. (This could also be a std::multimap if you need a many-to-many relationship.)\nAdd a member variable to the Object that contains the current Key (allowing O(1) lookups). Make sure your code updates this variable when the key is changed.\n\n\nSince your writeup mentioned \"coordinates\" as the keys, you might also be interested in reading the suggestions at Fastest way to find if a 3D coordinate is already used.\n", "Its difficult to understand what exactly you are doing with it, but it seems like boost bimap is what you want. It's basically boost multi-index except a specific use case, and easier to use. It allows fast lookup based on the first element or the second element. Why are you looking up the location of an object in a map by its address? Use the abstraction and let it do all the work for you. Just a note: iteration over all elements in a map is O(N) so it would be guaranteed O(N) (not worse) to look up the way you are thinking of doing it.\n", "One option would be to use two std::maps that referenced shared_ptrs. Something like this may get you going:\ntemplate<typename T, typename K1, typename K2>\nclass MyBiMap\n{\npublic:\n typedef boost::shared_ptr<T> ptr_type;\n\n void insert(const ptr_type& value, const K1& key1, const K2& key2)\n {\n _map1.insert(std::make_pair(key1, value));\n _map2.insert(std::make_pair(key2, value));\n }\n\n ptr_type find1(const K1& key)\n {\n std::map<K1, ptr_type >::const_iterator itr = _map1.find(key);\n if (itr == _map1.end())\n throw std::exception(\"Unable to find key\");\n return itr->second;\n }\n\n ptr_type find2(const K2& key)\n {\n std::map<K2, ptr_type >::const_iterator itr = _map2.find(key);\n if (itr == _map2.end())\n throw std::exception(\"Unable to find key\");\n return itr->second;\n }\n\nprivate:\n std::map<K1, ptr_type > _map1;\n std::map<K2, ptr_type > _map2;\n};\n\nEdit: I just noticed the multimap requirement, this still expresses the idea so I'll leave it.\n" ]
[ 5, 2, 2 ]
[]
[]
[ "c++", "containers", "stl" ]
stackoverflow_0000094755_c++_containers_stl.txt
Q: Non-iterative / Non-looping Way To Calculate Effective Date? I have a table called OffDays, where weekends and holiday dates are kept. I have a table called LeadTime where the amount of time (in days) for a product to be manufactured is stored. Finally I have a table called Order where a product and the order date are kept. Is it possible to query when a product will be finished manufacturing without using stored procedures or loops? For example: OffDays has 2008-01-10, 2008-01-11, 2008-01-14. LeadTime has 5 for product 9. Order has 2008-01-09 for product 9. The calculation I'm looking for is this: 2008-01-09 1 2008-01-10 x 2008-01-11 x 2008-01-12 2 2008-01-13 3 2008-01-14 x 2008-01-15 4 2008-01-16 5 I'm wondering if it's possible to have a query return 2008-01-16 without having to use a stored procedure, or calculate it in my application code. Edit (why no stored procs / loops): The reason I can't use stored procedures is that they are not supported by the database. I can only add extra tables / data. The application is a third party reporting tool where I can only control the SQL query. Edit (how I'm doing it now): My current method is that I have an extra column in the order table to hold the calculated date, then a scheduled task / cron job runs the calculation on all the orders every hour. This is less than ideal for several reasons. A: The best approach is to use a Calendar table. See http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html. Then your query could look something like: SELECT c.dt, l.*, o.*, c.* FROM [statistics].dbo.[calendar] c, [order] o JOIN lead l ON l.leadId = o.leadId WHERE c.isWeekday = 1 AND c.isHoliday =0 AND o.orderId = 1 AND l.leadDays = ( SELECT COUNT(*) FROM [statistics].dbo.Calendar c2 WHERE c2.dt >= o.startDate AND c2.dt <= c.dt AND c2.isWeekday=1 AND c2.isHoliday=0 ) Hope that helps, RB. A: You can generate a table of working days in advance. WDId | WDDate -----+----------- 4200 | 2008-01-08 4201 | 2008-01-09 4202 | 2008-01-12 4203 | 2008-01-13 4204 | 2008-01-16 4205 | 2008-01-17 Then do a query such as SELECT DeliveryDay.WDDate FROM WorkingDay OrderDay, WorkingDay DeliveryDay, LeadTime, Order where DeliveryDay.WDId = OrderDay.WDId + LeadTime.LTDays AND OrderDay.WDDate = '' AND LeadTime.ProductId = Order.ProductId AND Order.OrderId = 1234 You would need a stored procedure with a loop to generate the WorkingDays table, but not for regular queries. It's also fewer round trips to the server than if you use application code to count the days. A: Just calculate it in application code ... much easier and you won't have to write a really ugly query in your sql A: here's one way - using the dateadd function. I need to take this answer off the table. This isn't going to work properly for long lead times. It was simply adding the # of off days found in the lead time and pushing the date out. This will cause a problem when more off days show up in the new range.
-- Setup test create table #odays (offd datetime) create table #leadtime (pid int , ltime int) create table [#order] (pid int, odate datetime) insert into #odays select '1/10/8' insert into #odays select '1/11/8' insert into #odays select '1/14/8' insert into #Leadtime values (3,5) insert into #leadtime values (9, 5) insert into #order values( 9, '1/9/8') select dateadd(dd, (select count(*)-1 from #odays where offd between odate and (select odate+ltime from #order o left join #leadtime l on o.pid = l.pid where l.pid = 9 ) ), odate+ltime) from #order o left join #leadtime l on o.pid = l.pid where o.pid = 9 A: Why are you against using loops? //some pseudocode int leadtime = 5; date order = 2008-01-09; date finishdate = order; while (leadtime > 0) { finishdate.addDay(); if (!IsOffday(finishdate)) leadtime--; } return finishdate; this seems like too simple a function to need a non-looping approach. A: Hmm.. one solution could be to store a table of dates with an offset based on a count of non-off days from the beginning of the year. Let's say Jan. 2 is an off day. 1/1/08 would have an offset of 1 (or 0 if you like to start from 0). 1/3/08 would have an offset of 2, because the count skips 1/2/08. From there it's a simple calculation. Get the offset of the order date, add the lead time, then do a lookup on the calculated offset to get the end date. A: One way (without creating another table) is using a sort of ceiling function: for each offdate, find out how many "on dates" come before it, relative to the order date, in a subquery. Then take the highest number that's less than the lead time. Use the date corresponding to that, plus the remainder. This code may be specific to PostgreSQL, sorry if that's not what you're using. CREATE DATABASE test; CREATE TABLE offdays ( offdate date NOT NULL, CONSTRAINT offdays_pkey PRIMARY KEY (offdate) ); insert into offdays (offdate) values ('2008-01-10'); insert into offdays (offdate) values ('2008-01-11'); insert into offdays (offdate) values ('2008-01-14'); insert into offdays (offdate) values ('2008-01-18'); -- just for testing CREATE TABLE product ( id integer NOT NULL, CONSTRAINT product_pkey PRIMARY KEY (id) ); insert into product (id) values (9); CREATE TABLE leadtime ( product integer NOT NULL, leaddays integer NOT NULL, CONSTRAINT leadtime_pkey PRIMARY KEY (product), CONSTRAINT leadtime_product_fkey FOREIGN KEY (product) REFERENCES product (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); insert into leadtime (product, leaddays) values (9, 5); CREATE TABLE "order" ( product integer NOT NULL, "start" date NOT NULL, CONSTRAINT order_pkey PRIMARY KEY (product), CONSTRAINT order_product_fkey FOREIGN KEY (product) REFERENCES product (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); insert into "order" (product, "start") values (9, '2008-01-09'); -- finally, the query: select e.product, offdate + (leaddays - ondays)::integer as "end" from ( select c.product, offdate, (select (a.offdate - c."start") - count(b.offdate) from offdays b where b.offdate < a.offdate) as ondays, d.leaddays from offdays a, "order" c inner join leadtime d on d.product = c.product ) e where leaddays >= ondays order by "end" desc limit 1; A: This is PostgreSQL syntax but it should be easy to translate to other SQL dialects --Sample data create table offdays(datum date); insert into offdays(datum) select to_date('2008-01-10','yyyy-MM-dd') UNION select to_date('2008-01-11','yyyy-MM-dd') UNION select to_date('2008-01-14','yyyy-MM-dd') UNION select
to_date('2008-01-20','yyyy-MM-dd') UNION select to_date('2008-01-21','yyyy-MM-dd') UNION select to_date('2008-01-26','yyyy-MM-dd'); create table leadtime (product_id integer , lead_time integer); insert into leadtime(product_id,lead_time) values (9,5); create table myorder (order_id integer,product_id integer, datum date); insert into myorder(order_id,product_id,datum) values (1,9,to_date('2008-01-09','yyyy-MM-dd')); insert into myorder(order_id,product_id,datum) values (2,9,to_date('2008-01-16','yyyy-MM-dd')); insert into myorder(order_id,product_id,datum) values (3,9,to_date('2008-01-23','yyyy-MM-dd')); --Query select order_id,min(finished_date) FROM (select mo.order_id,(mo.datum+lead_time+count(od2.*)::integer-1) as finished_date from myorder mo join leadtime lt on (mo.product_id=lt.product_id) join offdays od1 on (mo.datum<od1.datum) left outer join offdays od2 on (mo.datum<od2.datum and od2.datum<od1.datum) group by mo.order_id,mo.datum,lt.lead_time,od1.datum having (mo.datum+lead_time+count(od2.*)::integer-1) < od1.datum) tmp group by 1; --Results : 1 2008.01.16 2 2008.01.22 This will not return a result for orders that would be finished after the last date in the offdays table (order number 3), so you must take care to insert offdays ahead of time. It is assumed that orders do not start on offdays.
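Distilling the calendar-table answers into a single statement: assuming a prebuilt Calendar(dt, IsWorkDay) table whose IsWorkDay flag already folds in the OffDays rows (the table and column names here are placeholders), the finish date falls out of one correlated count.

-- Finish date = first workday whose running count of workdays,
-- starting at the order date (which counts as day 1), equals the lead time.
SELECT MIN(c.dt) AS FinishDate
FROM Calendar c
JOIN [Order]  o ON o.ProductId = 9            -- sample product from the question
JOIN LeadTime l ON l.ProductId = o.ProductId
WHERE c.IsWorkDay = 1
  AND c.dt >= o.OrderDate
  AND l.LeadDays = (SELECT COUNT(*)
                    FROM Calendar c2
                    WHERE c2.IsWorkDay = 1
                      AND c2.dt BETWEEN o.OrderDate AND c.dt);
-- With the question's sample data this returns 2008-01-16.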
Non-iterative / Non-looping Way To Calculate Effective Date?
I have a table called OffDays, where weekends and holiday dates are kept. I have a table called LeadTime where the amount of time (in days) for a product to be manufactured is stored. Finally I have a table called Order where a product and the order date are kept. Is it possible to query when a product will be finished manufacturing without using stored procedures or loops? For example: OffDays has 2008-01-10, 2008-01-11, 2008-01-14. LeadTime has 5 for product 9. Order has 2008-01-09 for product 9. The calculation I'm looking for is this: 2008-01-09 1 2008-01-10 x 2008-01-11 x 2008-01-12 2 2008-01-13 3 2008-01-14 x 2008-01-15 4 2008-01-16 5 I'm wondering if it's possible to have a query return 2008-01-16 without having to use a stored procedure, or calculate it in my application code. Edit (why no stored procs / loops): The reason I can't use stored procedures is that they are not supported by the database. I can only add extra tables / data. The application is a third party reporting tool where I can only control the SQL query. Edit (how I'm doing it now): My current method is that I have an extra column in the order table to hold the calculated date, then a scheduled task / cron job runs the calculation on all the orders every hour. This is less than ideal for several reasons.
[ "The best approach is to use a Calendar table. \nSee http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html.\nThen your query could look something like:\nSELECT c.dt, l.*, o.*, c.*\n FROM [statistics].dbo.[calendar] c, \n [order] o JOIN\n lead l ON l.leadId = o.leadId\n WHERE c.isWeekday = 1 \n AND c.isHoliday =0 \n AND o.orderId = 1\n AND l.leadDays = ( \n SELECT COUNT(*) \n FROM [statistics].dbo.Calendar c2 \n WHERE c2.dt >= o.startDate\n AND c2.dt <= c.dt \n AND c2.isWeekday=1 \n AND c2.isHoliday=0 \n )\n\nHope that helps,\nRB.\n", "You can generate a table of working days in advance.\nWDId | WDDate\n-----+-----------\n4200 | 2008-01-08\n4201 | 2008-01-09\n4202 | 2008-01-12\n4203 | 2008-01-13\n4204 | 2008-01-16\n4205 | 2008-01-17\n\nThen do a query such as\nSELECT DeliveryDay.WDDate FROM WorkingDay OrderDay, WorkingDay DeliveryDay, LeadTime, Order where DeliveryDay.WDId = OrderDay.WDId + LeadTime.LTDays AND OrderDay.WDDate = '' AND LeadTime.ProductId = Order.ProductId AND Order.OrderId = 1234\n\nYou would need a stored procedure with a loop to generate the WorkingDays table, but not for regular queries. It's also fewer round trips to the server than if you use application code to count the days.\n", "Just calculate it in application code ... much easier and you won't have to write a really ugly query in your sql\n", "here's one way - using the dateadd function. \nI need to take this answer off the table. This isn't going to work properly for long lead times. It was simply adding the # of off days found in the lead time and pushing the date out. This will cause a problem when more off days show up in the new range. \n-- Setup test\ncreate table #odays (offd datetime)\ncreate table #leadtime (pid int , ltime int)\ncreate table [#order] (pid int, odate datetime)\n\n\ninsert into #odays \nselect '1/10/8'\ninsert into #odays \nselect '1/11/8'\ninsert into #odays \nselect '1/14/8'\n\n\ninsert into #Leadtime\nvalues (3,5)\ninsert into #leadtime\nvalues (9, 5)\n\ninsert into #order \nvalues( 9, '1/9/8')\n\nselect dateadd(dd, \n(select count(*)-1 \n from #odays \n where offd between odate and \n (select odate+ltime \n from #order o \n left join #leadtime l \n on o.pid = l.pid \n where l.pid = 9\n )\n ),\n odate+ltime) \n from #order o \n left join #leadtime l \n on o.pid = l.pid \n where o.pid = 9\n\n", "Why are you against using loops?\n//some pseudocode\nint leadtime = 5;\ndate order = 2008-01-09;\ndate finishdate = order;\nwhile (leadtime > 0) {\nfinishdate.addDay();\nif (!IsOffday(finishdate)) leadtime--;\n}\nreturn finishdate;\nthis seems like a too simple function to try to find a non-looping way.\n", "Hmm.. one solution could be to store a table of dates with an offset based on a count of non-off days from the beginning of the year. Lets say jan. 2 is an off day. 1/1/08 would have an offset of 1 (or 0 if you like to start from 0). 1/3/08 would have an offset of 2, because the count skips 1/2/08. From there its a simple calculation. Get the offset of the order date, add the lead time, then do a lookup on the calculated offset to get the end date.\n", "One way (without creating another table) is using a sort of ceiling function: for each offdate, find out how many \"on dates\" come before it, relative to the order date, in a subquery. Then take the highest number that's less than the lead time. 
Use the date corresponding to that, plus the remainder.\nThis code may be specific to PostgreSQL, sorry if that's not what you're using.\nCREATE DATABASE test;\nCREATE TABLE offdays\n(\n offdate date NOT NULL,\n CONSTRAINT offdays_pkey PRIMARY KEY (offdate)\n);\ninsert into offdays (offdate) values ('2008-01-10');\ninsert into offdays (offdate) values ('2008-01-11');\ninsert into offdays (offdate) values ('2008-01-14');\ninsert into offdays (offdate) values ('2008-01-18'); -- just for testing\nCREATE TABLE product\n(\n id integer NOT NULL,\n CONSTRAINT product_pkey PRIMARY KEY (id)\n);\ninsert into product (id) values (9);\nCREATE TABLE leadtime\n(\n product integer NOT NULL,\n leaddays integer NOT NULL,\n CONSTRAINT leadtime_pkey PRIMARY KEY (product),\n CONSTRAINT leadtime_product_fkey FOREIGN KEY (product)\n REFERENCES product (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n);\ninsert into leadtime (product, leaddays) values (9, 5);\nCREATE TABLE \"order\"\n(\n product integer NOT NULL,\n \"start\" date NOT NULL,\n CONSTRAINT order_pkey PRIMARY KEY (product),\n CONSTRAINT order_product_fkey FOREIGN KEY (product)\n REFERENCES product (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n);\ninsert into \"order\" (product, \"start\") values (9, '2008-01-09');\n\n-- finally, the query:\n\nselect e.product, offdate + (leaddays - ondays)::integer as \"end\"\nfrom\n(\n select c.product, offdate, (select (a.offdate - c.\"start\") - count(b.offdate) from offdays b where b.offdate < a.offdate) as ondays, d.leaddays\n from offdays a, \"order\" c\n inner join leadtime d on d.product = c.product\n) e\nwhere leaddays >= ondays\norder by \"end\" desc\nlimit 1;\n\n", "This is PostgreSQL syntax but it should be easy to translate to other SQL dialect\n--Sample data\ncreate table offdays(datum date);\n\ninsert into offdays(datum)\nselect to_date('2008-01-10','yyyy-MM-dd') UNION \nselect to_date('2008-01-11','yyyy-MM-dd') UNION \nselect to_date('2008-01-14','yyyy-MM-dd') UNION \nselect to_date('2008-01-20','yyyy-MM-dd') UNION\nselect to_date('2008-01-21','yyyy-MM-dd') UNION\nselect to_date('2008-01-26','yyyy-MM-dd');\n\ncreate table leadtime (product_id integer , lead_time integer);\ninsert into leadtime(product_id,lead_time) values (9,5);\n\ncreate table myorder (order_id integer,product_id integer, datum date);\ninsert into myorder(order_id,product_id,datum) \nvalues (1,9,to_date('2008-01-09','yyyy-MM-dd'));\ninsert into myorder(order_id,product_id,datum) \nvalues (2,9,to_date('2008-01-16','yyyy-MM-dd'));\ninsert into myorder(order_id,product_id,datum) \nvalues (3,9,to_date('2008-01-23','yyyy-MM-dd'));\n\n--Query\nselect order_id,min(finished_date)\nFROM \n (select mo.order_id,(mo.datum+lead_time+count(od2.*)::integer-1) as finished_date\n from \n myorder mo\n join leadtime lt on (mo.product_id=lt.product_id)\n join offdays od1 on (mo.datum<od1.datum)\n left outer join offdays od2 on (mo.datum<od2.datum and od2.datum<od1.datum)\n group by mo.order_id,mo.datum,lt.lead_time,od1.datum\n having (mo.datum+lead_time+count(od2.*)::integer-1) < od1.datum) tmp\ngroup by 1; \n\n--Results :\n1 2008.01.16\n2 2008.01.22\n\nThis will not return result for orders that would be finished after last date in offdays table (order number 3), so you must take care to insert offdays on time.It is assumed that orders do not start on offdays.\n" ]
[ 2, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "date", "sql" ]
stackoverflow_0000092699_date_sql.txt
Q: DNS- Route DNS for subfolder to different server? Let's say I want to have a subfolder called http://www.foo.com/news/ but I actually want that news folder on a different server. I realize it can be done easily with subdomains, but I was really hoping for the subfolder thing. Is it possible? How? A: The only real way to do it is with a reverse proxy (or a webserver acting as a reverse proxy) between you and the outside world that knows which server each folder is on. It's not possible to just fake it client-side, for example to have google.com appear at http://foobar.com/google/, because the browser won't route to the right IP address (it lacks the information). You can fake that effect with a full-page IFrame or other frameset system, but that's rather dodgy. If you are using Apache, you can set this up with mod_proxy. More details can be found here: Mod_Proxy(1.3) Manual Mod_Proxy(2.0) Manual Apache Tutor.org guide A: For Apache the following entries in httpd.conf are needed: LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_http_module modules/mod_proxy_http.so ProxyPass /news http://newsserver.domain.com/news ProxyPassReverse / http://newsserver.domain.com/ A: Yes, there is a setting in IIS which lets you point a subfolder to a different site. So make the sub folder a virtual directory on your site, and then in the properties of the virtual directory choose the option for 'A redirection to a URL'... in it specify your other site. Of course, this is assuming you are using IIS. There should be something similar available to use in whatever web server you are using. A: It can't be done with DNS because the domain name is only the *.example.com of the address. It can be done by configuring a proxy on your www machine to pass all requests for /news to another server. It's very easy to do with Apache but I don't remember all the details at this moment. A: DNS resolution happens at the domain level. DNS doesn't have any knowledge of URLs or folders so your name will always point to the same server. You can make that server actually retrieve the information from another one or redirect to another one but that's not very satisfactory I'd say.
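Tying the Apache answers together, here is a hedged httpd.conf sketch of the reverse-proxy setup; the backend host name is a placeholder, and the backend is assumed to serve the content under the same /news/ path so ProxyPassReverse can rewrite redirects cleanly.

# Reverse-proxy /news/ on www.foo.com to a second server via mod_proxy.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ServerName www.foo.com
    ProxyRequests Off    # reverse proxy only; never expose a forward proxy
    ProxyPass        /news/ http://news-backend.internal/news/
    ProxyPassReverse /news/ http://news-backend.internal/news/
</VirtualHost>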
DNS- Route DNS for subfolder to different server?
Let's say I want to have a subfolder called http://www.foo.com/news/ but I actually want that news folder on a different server. I realize it can be done easily with subdomains, but I was really hoping for the subfolder thing. Is it possible? How?
[ "The only real way to it is with a reverse proxy ( Or a webserver acting as a reverse proxy ) between you and the outside world that knows what IP address each folder is in. \nIts not possible to just make something, for example have google.com appear at http://foobar.com/google/ because the browser won't route to the IP address ( lack of information ). \nYou can fake that effect with a fullpage IFrame or Other frameset system, but that's rather dodgy.\nIf you are using apache, you can set this up with mod_proxy. More details can be found here: \n\nMod_Proxy(1.3) Manual\nMod_Proxy(2.0) Manual\nApache Tutor.org guide\n\n", "For Apache the following entries in httpd.conf are needed:\nLoadModule proxy_module modules/mod_proxy.so\nLoadModule proxy_http_module modules/mod_proxy_http.so\nProxyPass /news http://newsserver.domain.com/news\nProxyPassreverse / http://newsserver.domain.com/\n", "Yes, there is a setting in IIS which lets you point a subfolder to a different site. So make the sub folder a virtual directory on your site, and then in the properties of the virtual directory choose the option for 'A redirection to a URL'... in it specify your other site.\nOf course, this is assuming your are using IIS. There should be something similar available to use in whatever web server you are using.\n", "It can't be done with DNS because the domain name is only the *.example.com of the address.\nIt can be done by configuring a proxy on your www machine to pass all requests for /news to another server. It's very easy to do with apache but I don't remember all the details at this moment.\n", "DNS resolution happens at the domain level. DNS doesn't have any knowledge of URLs or folders so your name will always point to the same server. You can make that server actually retrieve the information from another one or redirect to another one but that's not very satisfactory I'd say.\n" ]
[ 7, 3, 1, 1, 0 ]
[]
[]
[ "dns", "reverse_proxy", "subdirectory" ]
stackoverflow_0000094820_dns_reverse_proxy_subdirectory.txt
Q: How do I format the message used to perform an HTTP post from VBScript / ASP to a WCF service and get a response? DOING THE POST IS NOT THE PROBLEM! Formatting the message so that I get a response is the problem. Ideally I'd be able to construct a message and use WinHTTP to perform a post to a WCF service (hosted in IIS) and get a response, but so far I've been unable to construct something that works properly. Does anyone have an example of doing this that is straightforward? In the 2.0 Web Service world this was as easy as putting a setting in the web.config to get the service to respond to a post and then calling the appropriate web method with the right parameters. There seems to be no analogue for this in the WCF world. As of now there is no option for me to convert the consumer (the vbscript end) into .NET. Assume at this point that, at the endpoint, I can convert to any binding available in .NET 3.5. If this can be done using WsHttpBinding or BasicHttpBinding, then the proper answer would describe how to format the message for either of those bindings in the context of VBScript; if there is no way to do that, then just say it can't be done. If this can be done using WebHttpBinding, then I have not found a way to make it happen, as I've already investigated the WebInvoke attribute and been unable to create a test from VBScript to WCF that worked properly. Assume that the posted data type is a string and the response is also a string. Also, this question is not WinHTTP related. I already know how to perform the post using WinHTTP; it's the construction of the message that the WCF service will respond to that is the problem. While I could use something other than WinHTTP, such as XMLHTTP, to perform the post from ASP over to the WCF service, I still have the problem of constructing an XML message that the WCF service will respond to. I've tried variations on this and still am unable to fathom what sort of format I need to use to make this happen. I know theoretically that all the WCF service needs is a properly formatted message. I'm just unable to construct the message properly, and while everyone has some suggestion on how to send the message, I have yet to see an actual example of the proper message format in this situation, since everyone is so used to .NET constructing the message for them. A: You don't specify one thing: What binding are you exposing your service as? If you're using WsHttpBinding or BasicHttpBinding, then there's no simple "http post" you can do, because you need to include at the very least the entire SOAP envelope (with the right SOAP version and potentially security headers and so forth). If you're using (or can migrate to) .NET 3.5, then there's a new binding explicitly created to support scenarios like this, where you want your service to be exposed not as a SOAP service but as a fully REST-like service, or simply as XML/JSON over HTTP. It's called the WebHttpBinding. There are many options you can tweak, but it's very likely you might just be able to add a new endpoint with webHttpBinding and have that working almost right away.
This might give you a head-start on the new programming model: http://msdn.microsoft.com/en-us/library/bb412169.aspx A: This is the simplest code I've got for doing a background HTTP post in ASP: Set objXML = CreateObject("MSXML2.ServerXMLHTTP.6.0") objXML.open "POST", url, false objXML.setRequestHeader "Content-Type", "application/x-www-form-urlencoded" objXML.send("key="& Server.URLEncode(xmlvalue)) Set responseXML = objXML.responseXML Set objXML = nothing This just requires that you have the MSXML objects installed on your server. I use this for all kinds of things including an XML-RPC server/client in ASP. edit: Re-read your question and if you are set on that specific way then this won't help, but if you are really just looking for a way to access your webservice this would work as long as you construct your XML to post correctly. A: I wrote some code a while ago for an Excel macro which reads an XML file, posts the contents to a URL then saves the result. Sub ExportToHTTPPOST() Dim sURL, sExtraParams Const ForReading = 1, ForWriting = 2, ForAppending = 3 Set rs = CreateObject("Scripting.FileSystemObject") Set r = rs.OpenTextFile("y:\test.xml", ForReading) Set Ws = CreateObject("Scripting.FileSystemObject") Set w = Ws.OpenTextFile("Y:\test2.xml", ForWriting, True) Do Until r.AtEndOfStream sData = sData & r.readline Loop sURL = "http://MyServer/MyWebApp.asp" sData = "payload=" & sData Set objHTTP = New WinHttp.WinHttpRequest objHTTP.Open "POST", sURL, False objHTTP.setRequestHeader "Content-Type", "application/x-www-form-urlencoded" objHTTP.send sData w.writeline objHTTP.ResponseText Set objHTTP = Nothing w.Close r.Close End Sub
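Since the first answer explains why a bare form-encoded POST fails against the SOAP bindings, here is a hedged VBScript sketch of what formatting the message means for a basicHttpBinding (SOAP 1.1) endpoint with no security; the service URL, the http://tempuri.org/ namespace, the contract IEchoService, the operation Echo, and its value parameter are all placeholders that must be read from the service's WSDL. A wsHttpBinding endpoint would additionally need SOAP 1.2 framing and WS-Addressing headers, which is why the answer above steers toward basicHttpBinding or webHttpBinding.

' Hand-built SOAP 1.1 envelope for a basicHttpBinding endpoint.
' Every name below (URL, namespace, IEchoService, Echo, value) is a
' placeholder; take the real ones from the service's WSDL.
Dim sEnvelope, oHttp
sEnvelope = _
    "<s:Envelope xmlns:s=""http://schemas.xmlsoap.org/soap/envelope/"">" & _
      "<s:Body>" & _
        "<Echo xmlns=""http://tempuri.org/"">" & _
          "<value>hello</value>" & _
        "</Echo>" & _
      "</s:Body>" & _
    "</s:Envelope>"

Set oHttp = CreateObject("MSXML2.ServerXMLHTTP.6.0")
oHttp.open "POST", "http://server/EchoService.svc", False
oHttp.setRequestHeader "Content-Type", "text/xml; charset=utf-8"
' Default WCF action pattern: namespace + contract + operation.
oHttp.setRequestHeader "SOAPAction", """http://tempuri.org/IEchoService/Echo"""
oHttp.send sEnvelope
WScript.Echo oHttp.status & " " & oHttp.responseText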
How do I format the message used to perform an HTTP post from VBScript / ASP to a WCF service and get a response?
DOING THE POST IS NOT THE PROBLEM! Formatting the message so that I get a response is the problem. Ideally I'd be able to construct a message and use WinHTTP to perform a post to a WCF service (hosted in IIS) and get a response, but so far I've been unable to construct something that works properly. Does anyone have an example of doing this that is straightforward? In the 2.0 Web Service world this was as easy as putting a setting in the web.config to get the service to respond to a post and then calling the appropriate web method with the right parameters. There seems to be no analogue for this in the WCF world. As of now there is no option for me to convert the consumer (the vbscript end) into .NET. Assume at this point that at the endpoint I can convert to using whatever bindings are available right up to whatever is supported in .NET 3.5, but at the same time if this can be done using WsHttpBinding or BasicHttpBinding then the proper answer to this would be to describe how to format the message for either of those bindings in the context of VBScript or if there is no way to do that then just say, you can't do it. If this can be done using WebHTTPBinding then I have not found a way to make it happen as I've already investigated the WebInvoke attribute and been unable to create a test from VBScript to WCF that worked properly. Assume that the posted data type is a string and the response is also a string. Also this question is not WinHTTP related. I already know how to perform the post using WinHTTP it's the construction of the message that the WCF service will respond to that is the problem. While I could use something other than WinHTTP to perform the post from ASP over to the WCF service such as XMLHTTP I still have the problem of constructing an XML message that the WCF service will respond to. I've tried variations on this and still am unable to fathom what sort of format I need to use to make this happen. I know theoretically that all the WCF service needs is a properly formatted message. I'm just unable to construct the message properly and usually while everyone has some suggestion on how to send the message I have yet to see someone give an actual example of what the proper message format would be in this situation since everyone is so used to using .NET to send the message and it's all done for you in that context.
[ "You don't specify one thing: What binding are you exposing your service as? If you're using WsHttpBinding or BasicHttpBinding, then there's no simple \"http post\" you can do, because you need to include at the very least the entire SOAP envelope (with the right SOAP version and potentially security headers and so forth).\nIf you're using (or can migrate to) .NET 3.5, then there's a new binding explicitly created to support scenarios like this, where you want your service to be exposed not as a SOAP service but as fully REST-like service, or simply as XML/JSON over HTTP. It's called the WebHttpBinding.\nThere are many options you can tweak, but it's very likely you might just be able to add a new endpoint with webHttpBinding and have that working almost right away.\nThis might give you a head-start on the new programming model: http://msdn.microsoft.com/en-us/library/bb412169.aspx\n", "This is the simplest code I've got for doing a background HTTP post in ASP\nSet objXML = CreateObject(\"MSXML2.ServerXMLHTTP.6.0\")\nobjXML.open \"POST\", url, false\nobjXML.setRequestHeader \"Content-Type\", \"application/x-www-form-urlencoded\"\nobjXML.send(\"key=\"& Server.URLEncode(xmlvalue))\nSet responseXML = objXML.responseXML\nSet objXML = nothing\n\nThis just requires that you have the MSXML objects installed on your server. I use this for all kinds of things including an XML-RPC server/client in ASP.\nedit: Re-read your question and if you are set on that specific way then this won't help, but if you are really just looking for a way to access your webservice this would work as long as you construct your XML to post correctly.\n", "I wote some code a while ago for an excel macro which reads a XML file, posts the contents to a URL then saves the result.\nSub ExportToHTTPPOST()\nDim sURL, sExtraParams\nConst ForReading = 1, ForWriting = 2, ForAppending = 3\n\nSet rs = CreateObject(\"Scripting.FileSystemObject\")\nSet r = rs.OpenTextFile(\"y:\\test.xml\", ForReading)\n\nSet Ws = CreateObject(\"Scripting.FileSystemObject\")\nSet w = Ws.OpenTextFile(\"Y:\\test2.xml\", ForWriting, True)\n\nDo Until r.AtEndOfStream\n\n sData = sData & r.readline\n\nLoop\n\nsURL = \"http://MyServer/MyWebApp.asp\"\n\nsData = \"payload=\" & sData\n\n Set objHTTP = New WinHttp.WinHttpRequest\n\n objHTTP.Open \"POST\", sURL, False\n objHTTP.setRequestHeader \"Content-Type\", \"application/x-www-form-urlencoded\"\n objHTTP.send sData\n w.writeline objHTTP.ResponseText\n Set objHTTP = Nothing\n w.Close\n r.Close\nEnd Sub\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "asp_classic", "vbscript", "wcf" ]
stackoverflow_0000076925_asp_classic_vbscript_wcf.txt
Q: Database integration tests
When you are doing integration tests with either just your data access layer or the majority of the application stack, what is the best way to prevent multiple tests from clashing with each other if they are run on the same database?
A: Transactions.
What the Ruby on Rails unit test framework does is this:
Load all fixture data.

For each test:

  BEGIN TRANSACTION

  # Yield control to user code

  ROLLBACK TRANSACTION

End for each

This means that
Any changes your test makes to the database won't affect other threads while it's in progress
The next test's data isn't polluted by prior tests
This is about a zillion times faster than manually reloading data for each test.
I for one think this is pretty cool
A: For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.
However it only works if you're using simple generic SQL functionality or can easily hide the slight differences between SQLite and your production database system behind a class, but I've always found that to be fairly easy in the SQL applications I've developed.
A: Just to add to Free Wildebeest's answer, I have also used HSQLDB to do a similar type of testing where each test gets a clean instance of the DB.
A: I wanted to accept both Free Wildebeest's and Orion Edwards' answers but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these were the two main ways to do it, but which one to choose depends on the individual case (mostly the size of the database).
A: Also run the tests at different times, so that they do not impact the performance or validity of each other.
A: While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. The level of tediousness with this solution depends on the number of test cases you have and how dependent they are on one another. The tediousness will hold true if you have one database per test or group of dependent tests.
When running the test suite, you load the data at the start, run the test suite, unload/compare results making sure the actual result meets the expected result. If not, do the cycle again. Load, run suite, unload/compare.
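For a .NET flavour of the same transaction-per-test idea, a minimal sketch (class and test names are placeholders; assumes NUnit and System.Transactions):

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerRepositoryTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void SetUp()
        {
            // every test runs inside its own ambient transaction
            _scope = new TransactionScope();
        }

        [TearDown]
        public void TearDown()
        {
            // disposing without calling Complete() rolls everything back,
            // so the database is untouched for the next test
            _scope.Dispose();
        }

        [Test]
        public void Insert_ThenCount_SeesItsOwnRow()
        {
            // exercise the data access layer here; any connection opened
            // inside this test enlists in the ambient transaction automatically
        }
    }

Note that connections only enlist automatically if they are opened inside the scope, and depending on the provider a long-running scope may escalate to the Distributed Transaction Coordinator.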
Database integration tests
When you are doing integration tests with either just your data access layer or the majority of the application stack, what is the best way to prevent multiple tests from clashing with each other if they are run on the same database?
[ "Transactions.\nWhat the ruby on rails unit test framework does is this:\nLoad all fixture data.\n\nFor each test:\n\n BEGIN TRANSACTION\n\n # Yield control to user code\n\n ROLLBACK TRANSACTION\n\nEnd for each\n\nThis means that\n\nAny changes your test makes to the database won't affect other threads while it's in-progress\nThe next test's data isn't polluted by prior tests\nThis is about a zillion times faster than manually reloading data for each test.\n\nI for one think this is pretty cool\n", "For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.\nHowever it does only work if you're using simple generic SQL functionality or can easily hide the slight differences between SQLite and your production database system behind a class, but I've always found that to be fairly easy in the SQL applications I've developed.\n", "Just to add to Free Wildebeest's answer I have also used HSQLDB to do a similar type testing where each test gets a clean instance of the DB. \n", "I wanted to accept both Free Wildebeest's and Orion Edwards' answers but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these were the two main ways to do it, but which one to chose depends on the individual case (mostly the size of the database).\n", "Also run the tests at different times, so that they do not impact the performance or validity of each other.\n", "While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. The level of tediousness with this solution depends on the number of test cases you have and how dependant they are on one another. The tediousness will hold true if you have one database per test or group of dependant tests.\nWhen running the test suite, you load the data at the start, run the test suite, unload/compare results making sure the actual result meets the expected result. If not, do the cycle again. Load, run suite, unload/compare.\n" ]
[ 10, 9, 1, 0, 0, 0 ]
[]
[]
[ "database_testing", "integration_testing", "tdd" ]
stackoverflow_0000061718_database_testing_integration_testing_tdd.txt
Q: NAnt best practices
I have a 300-line NAnt file here and it is quite messy. I am wondering if there is any style guide for writing NAnt scripts, and what are the best practices for doing so? Any tips?
A: I'm not aware of any published style guide, but I can certainly share my experience. You can use many of the same techniques used in other programming environments, such as making the code modular and splitting it across multiple files. In the environment that I have set up, each project is laid out like so:
"[ProjectName]\Common" contains a common build file that is linked to nearly all of my projects. I also have a set of common subversion targets stored in a file there. The "Common" subdirectory is actually an svn:external, so it's automatically kept in sync across multiple projects. In the Common.build file, there are lots of environmental properties, plus some reusable filesets, some reusable targets, and a "StartUp" target that is used by each project's "StartUp" target.
"[ProjectName]\Project.build" contains all of that project's specific properties and filesets, some of which override the settings from Common.build. This file also contains a "StartUp" target which sets up some runtime settings like assembly version information and any dependent paths. It also executes the "Startup" target from Common.build. This file includes the Common.build file.
"[ProjectName]\[AssemblyName].build" contains all of the settings and targets specific to an individual assembly. This file includes the Project.build, which in turn includes the Common.build.
This hierarchy works well in our situation, which has us building a trunk version and several branch versions of a product on a continuous integration server. As it stands now, the only differences between the scripts for building the trunk version and any one of the branches are a handful of lines.
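As a concrete illustration of the layered layout described above, a hypothetical Project.build might look like this (file, property and target names are invented for the example):

    <?xml version="1.0"?>
    <project name="MyProject" default="startup">
        <!-- pull in the shared properties, filesets and targets -->
        <include buildfile="Common/Common.build" />
        <!-- override a property that Common.build defined -->
        <property name="build.dir" value="build/myproject" overwrite="true" />
        <target name="startup">
            <!-- run the shared startup logic first, then project-specific setup -->
            <call target="common.startup" />
            <echo message="Building ${project::get-name()} into ${build.dir}" />
        </target>
    </project>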
NAnt best practices
I have a 300-line NAnt file here and it is quite messy. I am wondering if there is any style guide for writing NAnt scripts, and what are the best practices for doing so? Any tips?
[ "I'm not aware of any published style guide, but I can certainly share my experience. You can use many of the same techniques used in other programming environments, such as making the code modular and splitting it across multiple files. In the environment that I have set up, each project is laid out like so:\n\"[ProjectName]\\Common\" contains a common build file that is linked to nearly all of my projects. I also have a set of common subversion targets stored in a file there. The \"Common\" subdirectory is actually an svn:external, so it's automatically kept in sync across multiple projects. In the Common.build file, there are lots of environmental properties, plus some reusable filesets, some reusable targets, and a \"StartUp\" target that is used by each projects \"StartUp\" target.\n\"[ProjectName]\\Project.build\" contains all of that projects specific properties and filesets, some of which override the settings from Common.build. This file also contains a \"StartUp\" target which sets up some runtime settings like assembly version information and any dependent paths. It also executes the \"Startup\" target from Common.build. This file includes the Common.build file.\n\"[ProjectName][AssemblyName].build\" contains all of the settings and targets specific to an individual assembly. This file includes the Project.build, which in turn includes the Common.build.\nThis hierarchy works well in our situation, which has us building a trunk version and several branch versions of a product on a continuous integration server. As it stands now, the only differences between the scripts for building the trunk version and any one of the branches are only a handful of lines.\n" ]
[ 5 ]
[]
[]
[ ".net", "coding_style", "nant" ]
stackoverflow_0000094173_.net_coding_style_nant.txt
Q: Export variable from C++ static library
I have a static library written in C++ and I have a structure describing data format, i.e.
struct Format{
    long fmtId;
    long dataChunkSize;
    long headerSize;
    Format(long, long, long);
    bool operator==(Format const & other) const;
};
Some data formats are widely used, like {fmtId=0, dataChunkSize=128, headerSize=0} and {fmtId=0, dataChunkSize=256, headerSize=0}
Some data structure classes receive the format in their constructor. I'd like to have some sort of shortcuts for those widely used formats, like a couple of global Format members gFmt128, gFmt256 that I can pass by reference. I instantiate them in a .cpp file like
Format gFmt128(0, 128, 0);
and in the .h there is
extern Format gFmt128;
Also, I declare
Format const & Format::Fmt128(){return gFmt128;}
and try to use it in the main module. But if I try and do it in the main module that uses the lib, the linker complains about unresolved external gFmt128. How can I make my library 'export' those global vars, so I can use them from other modules?
A: Don't use the static keyword on global declarations. Here is an article explaining the visibility of variables with/without static. The static keyword gives globals internal linkage, that is, only visible in the translation unit they are declared in.
A: Are they defined in the .cpp file as well? Roughly, it should look like:
struct Format
{
    [...]
    static Format gFmt128;
};
// Format.cpp
Format Format::gFmt128 = { 0, 128, 0 };
A: You need to declare your Format objects as extern, not static.
A: Morhveus, I tried this out, too. My linker rather says it has the gFmt128 symbol already defined. This is indeed the behaviour I would expect: the compiler adds the function body to both the library and the client object since it's defined in the include file. The only way I get unresolved externals is by
not adding the static library to the objects-to-be-linked
not defining the symbol gFmt128 in the static library's source file
I'm puzzled... How come we see something different? Can you explain what happens?
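To make the fix concrete, a minimal sketch of the working arrangement (the file names are hypothetical):

    // formats.h -- declaration only; note there is no 'static' here,
    // since static would give the variable internal linkage
    struct Format {
        long fmtId;
        long dataChunkSize;
        long headerSize;
        Format(long, long, long);
    };
    extern Format gFmt128;

    // formats.cpp -- compiled into the static library; the one definition
    #include "formats.h"
    Format gFmt128(0, 128, 0);

    // main.cpp -- client code linking against the library
    #include "formats.h"
    void use() { Format const & fmt = gFmt128; }

If the declaration/definition pair looks like this and the linker still reports an unresolved external, check that the object file containing the definition actually made it into the library and that the library is on the link line.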
Export variable from C++ static library
I have a static library written in C++ and I have a structure describing data format, i.e.
struct Format{
    long fmtId;
    long dataChunkSize;
    long headerSize;
    Format(long, long, long);
    bool operator==(Format const & other) const;
};
Some data formats are widely used, like {fmtId=0, dataChunkSize=128, headerSize=0} and {fmtId=0, dataChunkSize=256, headerSize=0}
Some data structure classes receive the format in their constructor. I'd like to have some sort of shortcuts for those widely used formats, like a couple of global Format members gFmt128, gFmt256 that I can pass by reference. I instantiate them in a .cpp file like
Format gFmt128(0, 128, 0);
and in the .h there is
extern Format gFmt128;
Also, I declare
Format const & Format::Fmt128(){return gFmt128;}
and try to use it in the main module. But if I try and do it in the main module that uses the lib, the linker complains about unresolved external gFmt128. How can I make my library 'export' those global vars, so I can use them from other modules?
[ "Don't use the static keyword on global declarations. Here is an article explain the visibility of variables with/without static. The static gives globals internal linkage, that is, only visible in the translation unit they are declared in.\n", "Are they defined in .cpp file as well? Roughly, it should look like:\nstruct Format\n{\n [...]\n static Format gFmt128;\n};\n// Format.cpp\nFormat Format::gFmt128 = { 0, 128, 0 }\n\n", "You need to declare your Format objects as extern not static\n", "Morhveus, I tried this out, too. My linker rather says it has the gFmt128 symbol already defined. This is indeed the behaviour I would expect: the compiler adds the function body to both the library and the client object since it's defined in the include file.\nThe only way I get unresolved externals is by \n\nnot adding the static library to the objects-to-be-linked\nnot defining the symbol gFmt128 in the static library's source file\n\nI'm puzzled... How come we see something different? Can you explain what happens?\n" ]
[ 7, 2, 2, 1 ]
[]
[]
[ "c++", "export" ]
stackoverflow_0000091420_c++_export.txt
Q: ColdFusion Template Request count optimization
In ColdFusion, under Request Tuning in the administrator, how do I determine what is an optimal number (or at least a good guess) for the Maximum Number of Simultaneous Template Requests?
Environment: CF8 Standard, IIS 6, Win2k3, SQL2k5 on a separate box
A: The way of finding the right number of requests is load testing. That is, measuring changes in throughput under load when you vary the request number. Any significant change would require retesting. But I suspect most folks are going to baulk at that amount of work.
I think a good rule of thumb is about 8 threads per CPU (core).
In terms of efficiency, the lower the thread count (up to a point) the less swapping will be going on as the CPU processes your requests. If your pages execute very quickly then a lower number of requests is optimal.
If you have longer running requests, and especially if you have requests that are waiting on third parties (like a database), then increasing the number of working threads will improve your throughput. That is, if your CPU is not tied up processing stuff you can afford to have more simultaneous requests working on the tasks at hand.
Although it's a little bit dated, many of the principles on request tuning in Grant Straker's book on CF Performance & Troubleshooting would be worthwhile.
A: I would say at least 8 per core, not per CPU. And I think 8 is a little low given modern CPU cores, I would say at least 12.
ColdFusion Template Request count optimization
In ColdFusion, under Request Tuning in the administrator, how do I determine what is an optimal number (or at least a good guess) for the Maximum Number of Simultaneous Template Requests? Environment: CF8 Standard IIS 6 Win2k3 SQL2k5 on a separate box
[ "The way of finding the right number of requests is load testing. That is, measuring changes in throughput under load when you vary the request number. Any significant change would require retesting. But I suspect most folks are going to baulk at that amount of work.\nI think a good rule of thumb is about 8 threads per CPU (core).\nIn terms of efficiency, the lower the thread count (up to a point) the less swapping will be going on as the CPU processes your requests. If your pages execute very quickly then a lower number of requests is optimal. \nIf you have longer running requests, and especially if you have requests that are waiting on third-parties (like a database) then increasing the number of working threads will improve your throughput. That is, if your CPU is not tied up processing stuff you can afford to have more simultaneous requests working on the tasks at hand.\nAlthough it's a little bit dated, many of the principles on request tuning in Grant Straker's book on CF Performance & Troubleshooting would be worthwhile.\n", "I would say at least 8 per core, not per CPU. And I think 8 is a little low given modern CPU cores, I would say at least 12.\n" ]
[ 6, 1 ]
[]
[]
[ "administration", "coldfusion" ]
stackoverflow_0000087904_administration_coldfusion.txt
Q: Is there any built-in way to convert an integer to a string (of any base) in C#?
Convert.ToString() only allows base values of 2, 8, 10, and 16 for some odd reason; is there some obscure way of providing any base between 2 and 16?
A: Probably to eliminate someone typing a 7 instead of an 8, since the uses for arbitrary bases are few (but not non-existent).
Here is an example method that can do arbitrary base conversions. You can use it if you like, no restrictions.
string ConvertToBase(int value, int toBase)
{
    if (toBase < 2 || toBase > 36) throw new ArgumentException("toBase");
    if (value < 0) throw new ArgumentException("value");

    if (value == 0) return "0"; //0 would skip while loop

    string AlphaCodes = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    string retVal = "";

    while (value > 0)
    {
        retVal = AlphaCodes[value % toBase] + retVal;
        value /= toBase;
    }

    return retVal;
}
Untested, but you should be able to figure it out from here.
A: //public domain
// if you do a lot of conversions, using StringBuilder will be
// much, much more efficient with memory and time than using string
// alone.

string toStringWithBase(int number, int toBase)
{
    if (0 == number) //handle corner case
        return "0";
    if (toBase < 2)
        return "ERROR: Base less than 2";

    StringBuilder buffer = new StringBuilder();

    bool negative = (number < 0);
    if (negative)
    {
        number = -number;
        buffer.Append('-');
    }

    // find the largest power of the base that fits in the number
    int factor = 1;
    int remaining = number;
    while (remaining >= toBase)
    {
        remaining /= toBase;
        factor *= toBase;
    }

    // emit digits from the most significant position downwards
    while (factor >= 1)
    {
        int digit = (number / factor) % toBase;
        char c = (char)(digit < 10 ? '0' + digit : 'A' + digit - 10);
        buffer.Append(c);
        factor /= toBase;
    }

    return buffer.ToString();
}
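As a quick sanity check of the first helper above (hypothetical usage):

    // 255 in base 16 is "FF"; 100 in base 12 is "84" (8*12 + 4)
    Console.WriteLine(ConvertToBase(255, 16));
    Console.WriteLine(ConvertToBase(100, 12));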
Is there any built-in way to convert an integer to a string (of any base) in C#?
Convert.ToString() only allows base values of 2, 8, 10, and 16 for some odd reason; is there some obscure way of providing any base between 2 and 16?
[ "Probably to eliminate someone typing a 7 instead of an 8, since the uses for arbitrary bases are few (but not non-existent).\nHere is an example method that can do arbitrary base conversions. You can use it if you like, no restrictions.\nstring ConvertToBase(int value, int toBase)\n{\n    if (toBase < 2 || toBase > 36) throw new ArgumentException(\"toBase\");\n    if (value < 0) throw new ArgumentException(\"value\");\n\n    if (value == 0) return \"0\"; //0 would skip while loop\n\n    string AlphaCodes = \"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ\";\n\n    string retVal = \"\";\n\n    while (value > 0)\n    {\n        retVal = AlphaCodes[value % toBase] + retVal;\n        value /= toBase;\n    }\n\n    return retVal;\n}\n\nUntested, but you should be able to figure it out from here.\n", "//public domain\n// if you do a lot of conversions, using StringBuilder will be\n// much, much more efficient with memory and time than using string\n// alone.\n\nstring toStringWithBase(int number, int toBase)\n{\n    if (0 == number) //handle corner case\n        return \"0\";\n    if (toBase < 2)\n        return \"ERROR: Base less than 2\";\n\n    StringBuilder buffer = new StringBuilder();\n\n    bool negative = (number < 0);\n    if (negative)\n    {\n        number = -number;\n        buffer.Append('-');\n    }\n\n    // find the largest power of the base that fits in the number\n    int factor = 1;\n    int remaining = number;\n    while (remaining >= toBase)\n    {\n        remaining /= toBase;\n        factor *= toBase;\n    }\n\n    // emit digits from the most significant position downwards\n    while (factor >= 1)\n    {\n        int digit = (number / factor) % toBase;\n        char c = (char)(digit < 10 ? '0' + digit : 'A' + digit - 10);\n        buffer.Append(c);\n        factor /= toBase;\n    }\n\n    return buffer.ToString();\n}\n" ]
[ 6, 0 ]
[ "string foo = Convert.ToString(myint,base);\n\nhttp://msdn.microsoft.com/en-us/library/14kwkz77.aspx\nEDIT: My bad, this will throw an argument exception unless you pass in the specified bases (2, 8, 10, and 16)\nYour probably SOL if you want to use a different base (but why???).\n" ]
[ -3 ]
[ "base", "c#" ]
stackoverflow_0000095105_base_c#.txt
Q: What is the value of href attribute in openid.server link tag if Technorati OpenID is hosted at my site?
I want to log in to Stack Overflow with Technorati OpenID hosted at my site. https://stackoverflow.com/users/login has some basic information. I understood that I should change
<link rel="openid.delegate" href="http://yourname.x.com" />
to
<link rel="openid.delegate" href="http://technorati.com/people/technorati/USERNAME/" />
but if I change
<link rel="openid.server" href="http://x.com/server" />
to
<link rel="openid.server" href="http://technorati.com/server" />
or
<link rel="openid.server" href="http://technorati.com/" />
it does not work.
A: From http://blog.blogupp.com/2008/06/get-openid-fied-and-discover-new-web.html
<link rel="openid.server" href="http://technorati.com/openid/" />
A: A general way to find out the answer to this question is to load the page you want to delegate to (http://technorati.com/people/technorati/USERNAME in this case), look at the source, and find the server tag used there.
If there are openid2 tags, you should copy those as well.
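Putting the two answers together, a complete delegation block could look like the following sketch; USERNAME is a placeholder, and the openid2.* values should be copied from whatever the provider's own page actually advertises:

    <link rel="openid.server"    href="http://technorati.com/openid/" />
    <link rel="openid.delegate"  href="http://technorati.com/people/technorati/USERNAME/" />
    <!-- only if the provider also exposes OpenID 2.0 endpoints -->
    <link rel="openid2.provider" href="http://technorati.com/openid/" />
    <link rel="openid2.local_id" href="http://technorati.com/people/technorati/USERNAME/" />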
What is the value of href attribute in openid.server link tag if Technorati OpenID is hosted at my site?
I want to log in to Stack Overflow with Technorati OpenID hosted at my site. https://stackoverflow.com/users/login has some basic information. I understood that I should change
<link rel="openid.delegate" href="http://yourname.x.com" />
to
<link rel="openid.delegate" href="http://technorati.com/people/technorati/USERNAME/" />
but if I change
<link rel="openid.server" href="http://x.com/server" />
to
<link rel="openid.server" href="http://technorati.com/server" />
or
<link rel="openid.server" href="http://technorati.com/" />
it does not work.
[ "From http://blog.blogupp.com/2008/06/get-openid-fied-and-discover-new-web.html\n<link rel=\"openid.server\" href=\"http://technorati.com/openid/\" />\n\n", "A general way to find out the answer to this question is to load the page you want to delegate to (http://technorati.com/people/technorati/USERNAME in this case), look at the source, and find the server tag used there.\nIf there are openid2 tags, you should copy those as well.\n" ]
[ 1, 1 ]
[]
[]
[ "openid" ]
stackoverflow_0000091357_openid.txt
Q: Setting DataGridViewRow.Height slow
I have noticed that setting row height in the DataGridView control is slow. Is there a way to make it faster?
A: What's caused similar layout delays for me was related to the AutoSizeRowsMode and AutoSizeColumnsMode.
DataGridView1.AutoSizeRowsMode = None
will likely fix it.
Also try ColumnHeadersHeightSizeMode to None and AllowUserToResizeRows to False.
A: If you can, try setting the height before you bind the control.
If you can't do that, try making the control hidden before setting the height.
A: This works in most cases but I'm not sure if this is what you are looking for...
Try setting up the RowTemplate and use that to set the row height.
// my test to specify a size for a datagridview row
dataGridView1.Columns.Add(new DataGridViewTextBoxColumn { Name = "ColumnNameGoesHere" });
dataGridView1.RowTemplate.Height = 50;
for (var x = 0; x <= 10000; x++)
{
    dataGridView1.Rows.Add(x.ToString());
}
Here is also a nice page on Windows Forms Programming, Best Practices for Scaling the Windows Forms DataGridView Control, which you may find to be handy: http://msdn.microsoft.com/en-us/library/ha5xt0d9.aspx
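Combining the suggestions above into one sketch (untested; note the column-header enum has no literal None value, so DisableResizing is used as the closest equivalent):

    dataGridView1.SuspendLayout();
    dataGridView1.AutoSizeRowsMode = DataGridViewAutoSizeRowsMode.None;
    dataGridView1.ColumnHeadersHeightSizeMode = DataGridViewColumnHeadersHeightSizeMode.DisableResizing;
    dataGridView1.AllowUserToResizeRows = false;
    dataGridView1.RowTemplate.Height = 50;   // set before rows are added or bound
    // ... bind the data source or add rows here ...
    dataGridView1.ResumeLayout();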
Setting DataGridViewRow.Height slow
I have noticed that setting row height in DataGridView control is slow. Is there a way to make it faster?
[ "What's caused similar layout delays for myself was related to\nthe AutoSizeRowsMode and AutoSizeColumnsMode\nDataGridView1.AutoSizeRowsMode = None\n\nwill likely fix it.\nAlso try ColumnHeadersHeightSizeMode to None and AllowUserToResizeRows to False.\n", "If you can, try setting the height before you bind the control.\nIf you can't do that, try making the control hidden before setting the height.\n", "This works in most cases but I'm not sure if this is what you are looking for...\nTry setting up the RowTemplate and use that to set the rows height. \n // my test to specify a size for a datagridview row\n dataGridView1.Columns.Add(new DataGridViewTextBoxColumn { Name = \"ColumnNameGoesHere\" });\n dataGridView1.RowTemplate.Height = 50;\n for (var x = 0; x <= 10000; x++)\n {\n dataGridView1.Rows.Add(x.ToString());\n }\n\nHere is also a nice page on Windows Forms Programming\nBest Practices for Scaling the Windows Forms DataGridView Control which you may find to be handy: http://msdn.microsoft.com/en-us/library/ha5xt0d9.aspx\n" ]
[ 3, 1, 0 ]
[]
[]
[ ".net", "c#", "datagridview" ]
stackoverflow_0000095074_.net_c#_datagridview.txt
Q: What tools and techniques do you use to fix browser memory leaks?
I am trying to fix memory leaks in IE 7. I am using Drip for investigation, but it is not helping much when most dynamically generated DOM elements do not have unique ids. Tips?
A: You should try the Javascript Memory Leak detector developed internally at Microsoft.
A: Well, your best bet is to understand what causes them, so you can look critically at your code, identify patterns that may cause a leak, and then avoid or refactor around them.
Here's a couple of links to get you started, both very informative:
http://www-128.ibm.com/developerworks/web/library/wa-memleak/
http://msdn.microsoft.com/en-us/library/bb250448.aspx
A: Just remember that memory leaks are really about you not cleaning up after yourself. All you need is a little organization.
In the past, I have created my own proxy object for attaching events to DOM elements. It uses my javascript library's api to actually set and remove events. The proxy itself just keeps track of all of the references so that I can call a method on it to have it clean up all of my potential memory leaks.
For my purposes, I was able to just call a single deconstructor on the page that would clean up the leaks for the entire page when the user was leaving the page.
You may have to be more granular but the technique is the same.
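A minimal sketch of the proxy idea from the last answer, written against the IE7-era attachEvent API (no particular library assumed):

    var EventProxy = (function () {
        var registrations = [];
        return {
            add: function (el, type, fn) {
                el.attachEvent("on" + type, fn);
                registrations.push({ el: el, type: type, fn: fn });
            },
            teardown: function () {
                for (var i = 0; i < registrations.length; i++) {
                    var r = registrations[i];
                    r.el.detachEvent("on" + r.type, r.fn);
                    r.el = r.fn = null;   // break the DOM <-> closure cycle IE leaks on
                }
                registrations.length = 0;
            }
        };
    }());
    // attach everything through EventProxy.add(...) and call
    // EventProxy.teardown() from window.onunload before leaving the page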
What tools and techniques do you use to fix browser memory leaks?
I am trying to fix memory leaks in IE 7. I am using Drip for investigation, but it is not helping much when most dynamically generated DOM elements do not have unique ids. Tips?
[ "You should try the Javascript Memory Leak detector developed internally at Microsoft.\n", "Well, Your best bet is to understand what causes them, so you can look critically at your code, identify patterns that may cause a leak, and then avoid or refactor around them.\nHere's a couple of links to get you started, both very informative:\n\nhttp://www-128.ibm.com/developerworks/web/library/wa-memleak/\nhttp://msdn.microsoft.com/en-us/library/bb250448.aspx\n\n", "Just remember that memory leaks are really about you not cleaning up after yourself. All you need is a little organization.\nIn the past, I have created my own proxy object for attaching events to DOM elements. It uses my javascript library's api to actually set and remove events. The proxy itself just keeps track of all of the references so that I can call a method on it to have it clean up all of my potential memory leaks.\nFor my purposes, I was able to just call a single deconstructor on the page that would clean up the leaks for the entire page when the user was leaving the page.\nYou may have to be more granular but the technique is the same.\n" ]
[ 6, 3, 1 ]
[]
[]
[ "internet_explorer", "javascript", "memory" ]
stackoverflow_0000095326_internet_explorer_javascript_memory.txt
Q: Legality of Encryption in Standard Libraries
Some programming languages such as Java and C# include encryption packages in their standard libraries. Others such as Python and Ruby make you download third-party modules to do strong encryption. I assume that this is for legal reasons; perhaps Sun Microsystems has enough lawyers that they aren't afraid of getting sued, while Guido van Rossum feels more vulnerable. But what does the law actually say about this? At this point, would open source authors have anything to fear if they included strong encryption in their programming languages' standard libraries? If so, then why don't they? If not, then how do Sun and Microsoft get away with it?
A: There are two issues: importation of encryption software, and exportation of encryption software.
Some countries (China, Russia, Iran, Iraq, Myanmar, etc.) restrict the use of cryptography by their citizens. It is illegal to import encryption software to those countries.
To enable unlimited encryption strength in the JDK, you have to download a new policy file. The software license there doesn't allow you to use the software if you're in a country that doesn't allow importation of encryption. This is called the "Unlimited Strength Jurisdiction Policy," and below I include part of its README.txt.
Other countries, like the US, don't want to export encryption software to the Axis of Evil. So, it can be illegal to export encryption software to those countries.
The US export restrictions have eased up considerably, probably in recognition of the futility of keeping encryption out of the hands of enemies, or possibly to encourage use of encryption that has been compromised by the NSA. But, they aren't gone altogether. I don't think the software can be licensed by terrorists.
    JCE for JDK 5.0 has been through the U.S. export review process. The JCE framework, along with the SunJCE provider that comes standard with it, is exportable.
    The JCE architecture allows flexible cryptographic strength to be configured via jurisdiction policy files. Due to the import restrictions of some countries, the jurisdiction policy files distributed with the JDK 5.0 software have built-in restrictions on available cryptographic strength. The jurisdiction policy files in this download bundle (the bundle including this README file) contain no restrictions on cryptographic strengths. This is appropriate for most countries. Framework vendors can create download bundles that include jurisdiction policy files that specify cryptographic restrictions appropriate for countries whose governments mandate restrictions. Users in those countries can download an appropriate bundle, and the JCE framework will enforce the specified restrictions.
    You are advised to consult your export/import control counsel or attorney to determine the exact requirements.
A: In the US the important law is ITAR.
A: A quick Google search turned up a Wikipedia article.
http://en.wikipedia.org/wiki/Export_of_cryptography
But as of now it seems like "no need to reinvent the wheel" is correct.
A: IANAL, but...
Java and C# are closed-source, and thus have terms in the EULA that say more-or-less "It's not our fault if you use this somewhere you're not supposed to". They also have teams of lawyers to protect themselves and enforce that clause.
Most open-source licenses do not have similar language, and even the ones that do, they don't have teams of lawyers on their side, as the OP said.
Also, Python and PERL are older than Java and C#, from the days when it was illegal to export cryptographic software from the US. Not adding cryptography since the law was changed is perhaps simply a "consistency-is-good" decision.
Legality of Encryption in Standard Libraries
Some programming languages such as Java and C# include encryption packages in their standard libraries. Others such as Python and Ruby make you download third-party modules to do strong encryption. I assume that this is for legal reasons; perhaps Sun Microsystems has enough lawyers that they aren't afraid of getting sued, while Guido van Rossum feels more vulnerable. But what does the law actually say about this? At this point, would open source authors have anything to fear if they included strong encryption in their programming languages' standard libraries? If so, then why don't they? If not, then how do Sun and Microsoft get away with it?
[ "There are two issues: importation of encryption software, and exportation of encryption software.\nSome countries (China, Russia, Iran, Iraq, Myanmar, etc.) restrict the use of cryptography by their citizens. It is illegal to import encryption software to those countries.\nTo enable unlimited encryption strength in the JDK, you have to download a new policy file. The software license there doesn't allow you to use the software if you're in a country that doesn't allow importation of encryption. This is called the \"Unlimited Strength Jurisdiction Policy,\" and below I include part of its README.txt.\nOther countries, like the US, don't want to export encryption software to the Axis of Evil. So, it can be illegal to export encryption software to those countries.\nThe US export restrictions have eased up considerably, probably in recognition of the futility of keeping encryption out of the hands of enemies, or possibly to encourage use of encryption that has been compromised by the NSA. But, they aren't gone altogether. I don't think the software can be licensed by terrorists.\n\nJCE for JDK 5.0 has been through the U.S. export review process.\n The JCE framework, along with the SunJCE provider that comes\n standard with it, is exportable.\nThe JCE architecture allows flexible cryptographic strength\n to be configured via jurisdiction policy files. Due to the\n import restrictions of some countries, the jurisdiction policy\n files distributed with the JDK 5.0 software have built-in\n restrictions on available cryptographic strength. The jurisdiction\n policy files in this download bundle (the bundle including this\n README file) contain no restrictions on cryptographic strengths.\n This is appropriate for most countries. Framework vendors can\n create download bundles that include jurisdiction policy files\n that specify cryptographic restrictions appropriate for countries\n whose governments mandate restrictions. Users in those countries\n can download an appropriate bundle, and the JCE framework will\n enforce the specified restrictions.\nYou are advised to consult your export/import control counsel or\n attorney to determine the exact requirements.\n\n", "In the US the important law is ITAR. \n", "Quick google turned up a Wikipedia article.\nhttp://en.wikipedia.org/wiki/Export_of_cryptography\nBut as of now it seems like the \"No need to reinvent the wheel\" is correct.\n", "IANAL, But...\nJava and C# are closed-source, and thus have terms in the EULA that say more-or-less \"It's not our fault if you use this somewhere you're not supposed to\". They also have teams of lawyers to protect themselves and enforce that clause.\nMost open-source licenses do not have similar langauge, and even the ones that do, they don't have teams of lawyers on their side, as the OP said.\nAlso, Python and PERL are older than Java and C#, from the days when it was illegal to export cryptographic software from the US. Not adding cryptography since the law was changed is perhaps simply a \"consistency-is-good\" decision.\n" ]
[ 7, 2, 2, 0 ]
[]
[]
[ "encryption", "programming_languages" ]
stackoverflow_0000095297_encryption_programming_languages.txt
Q: Restarting ColdFusion mail queue
We are currently experiencing intermittent mail queue stoppages. I'm seeking diagnostic help in another area. In the meantime, is there a way to restart the CF mail queue without restarting the service as a whole? CF8 standard, Win2k3
Solution: We are now checking the age of the oldest file in the mail queue. When it exceeds a set age (currently 30 min) the mail queue is restarted.
A: Yes there is.
<cfset sFactory = CreateObject("java","coldfusion.server.ServiceFactory")>
<cfset MailSpoolService = sFactory.mailSpoolService>
<cfset MailSpoolService.stop()>
<cfset MailSpoolService.start()>
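Tying the "Solution" note back to that snippet, a sketch of the age check might look like this; the spool path and the 30-minute threshold are assumptions to adapt to your install:

    <cfdirectory action="list" directory="C:\ColdFusion8\Mail\Spool" name="qSpool" sort="datelastmodified ASC">
    <cfif qSpool.recordCount GT 0
          AND dateDiff("n", qSpool.datelastmodified[1], now()) GT 30>
        <!--- oldest spool file is over 30 minutes old: bounce the queue --->
        <cfset sFactory = CreateObject("java", "coldfusion.server.ServiceFactory")>
        <cfset sFactory.mailSpoolService.stop()>
        <cfset sFactory.mailSpoolService.start()>
    </cfif>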
Restarting ColdFusion mail queue
We are currently experiencing intermittent mail queue stoppages. I'm seeking diagnostic help in another area. In the meantime, is there a way to restart the CF mail queue without restarting the service as a whole? CF8 standard Win2k3 Solution: We are now checking the age of the oldest file in the mail queue. When it exceeds a set age (currently 30 min) the mail queue is restarted.
[ "Yes there is.\n<cfset sFactory = CreateObject(\"java\",\"coldfusion.server.ServiceFactory\")>\n<cfset MailSpoolService = sFactory.mailSpoolService>\n<cfset MailSpoolService.stop()>\n<cfset MailSpoolService.start()>\n\n" ]
[ 18 ]
[]
[]
[ "administration", "coldfusion" ]
stackoverflow_0000094948_administration_coldfusion.txt
Q: Unit Testing in .NET: How to Mock Entity Data Provider Does anyone know whether there's a way to mock Entity Data Provider so Unit Tests don't hit the live data? I found this blog but it seems the project hasn't been released: http://www.chrisdoesdev.com/index.php/archives/62 Thanks A: Mattwar has a great article on his blog about mocking up LinqtoSql with reflection -- perhaps you can use that as a starting point? A: I would be interested to know this myself. I don't think that it's possible, because one of the things that got the Agile/Alt.Net community in a tizzy about the Entity Framework was this very problem of the lack of persistence ignorance.
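Pending proper persistence ignorance, a common workaround is to hide the generated context behind an interface the tests can stub. A rough sketch (MyEntities and Customer are placeholder names for your generated context and entity):

    public interface ICustomerRepository
    {
        IQueryable<Customer> Customers { get; }
    }

    // production implementation backed by the Entity Framework context
    public class EfCustomerRepository : ICustomerRepository
    {
        private readonly MyEntities _context = new MyEntities();
        public IQueryable<Customer> Customers
        {
            get { return _context.Customers; }
        }
    }

    // test double: LINQ to Objects over an in-memory list
    public class FakeCustomerRepository : ICustomerRepository
    {
        private readonly List<Customer> _data = new List<Customer>();
        public IQueryable<Customer> Customers
        {
            get { return _data.AsQueryable(); }
        }
    }

The caveat is that LINQ to Objects and LINQ to Entities don't translate queries identically, so unit tests passing against the fake are no substitute for at least some integration tests against a real database.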
Unit Testing in .NET: How to Mock Entity Data Provider
Does anyone know whether there's a way to mock Entity Data Provider so Unit Tests don't hit the live data? I found this blog but it seems the project hasn't been released: http://www.chrisdoesdev.com/index.php/archives/62 Thanks
[ "Mattwar has a great article on his blog about mocking up LinqtoSql with reflection -- perhaps you can use that as a starting point?\n", "I would be interested to know this myself. I don't think that it's possible, because one of the things that got the Agile/Alt.Net community in a tizzy about the Entity Framework was this very problem of the lack of persistence ignorance.\n" ]
[ 3, 0 ]
[]
[]
[ "entity_framework", "mocking", "unit_testing" ]
stackoverflow_0000095426_entity_framework_mocking_unit_testing.txt
Q: PHP equivalent of Perl's 'use strict' (to require variables to be initialized before use)
Python's convention is that variables are created by first assignment, and trying to read their value before one has been assigned raises an exception. PHP by contrast implicitly creates a variable when it is read, with a null value. This means it is easy to do this in PHP:
function mymodule_important_calculation() {
    $result = /* ... long and complex calculation ... */;
    return $resukt;
}
This function always returns null, and if null is a valid value for the function then the bug might go undetected for some time. The Python equivalent would complain that the variable resukt is being used before it is assigned. So... is there a way to configure PHP to be stricter with variable assignments?
A: PHP doesn't do much forward checking of things at parse time.
The best you can do is crank up the warning level to report your mistakes, but by the time you get an E_NOTICE, it's too late, and it's not possible to force E_NOTICEs to occur in advance yet.
A lot of people are toting the "error_reporting E_STRICT" flag, but it's still a retroactive warning, and won't protect you from bad code mistakes like you posted.
This gem turned up on the php-dev mailing-list this week and I think it's just the tool you want. It's more a lint-checker, but it adds scope to the current lint checking PHP does.
PHP-Initialized Google Project
There's the hope that with a bit of attention we can get this behaviour implemented in PHP itself. So put your 2-cents on the PHP mailing list / bug system / feature requests and see if we can encourage its integration.
A: There is no way to make it fail as far as I know, but with E_NOTICE in error_reporting settings you can make it throw a warning (well, a notice :-) But still a string you can search for).
A: Check out error reporting, http://php.net/manual/en/function.error-reporting.php
What you want is probably E_STRICT. Just bear in mind that PHP has no namespaces, and error reporting becomes global. Kind of sucks to be you if you use a 3rd party library from developers that did not have error reporting switched on.
A: I'm pretty sure that it generates an error if the variable wasn't previously declared. If your installation isn't showing such errors, check the error_reporting() level in your php.ini file.
A: You can try to play with the error reporting level as indicated here: http://us3.php.net/error_reporting but I'm not sure it mentions the usage of non-initialized variables, even with E_STRICT.
A: There is something similar: in PHP you can change the error reporting level. It's a best practice to set it to maximum in a dev environment. To do so:
Add in your PHP.ini:
error_reporting = E_ALL
Or you can just add this at the top of the file you are working on:
error_reporting(E_ALL);
This won't prevent your code from running but the lack of variable assignments will display a very clear error message in your browser.
A: If you use the "Analyze Code" on files, or your project in Zend Studio, it will warn you about any uninitialized variables (this actually helped find a ton of misspelled variables lurking in seldom used portions of the code just waiting to cause very difficult to detect errors). Perhaps someone could add that functionality in the PHP lint function (php -l), which currently only checks for syntax errors.
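To see the notice the answers are talking about, run the buggy function from the question with notices enabled; a small self-contained demo:

    <?php
    error_reporting(E_ALL);   // E_ALL includes E_NOTICE in this era of PHP

    function mymodule_important_calculation() {
        $result = 42;      // stand-in for the long and complex calculation
        return $resukt;    // Notice: Undefined variable: resukt
    }

    var_dump(mymodule_important_calculation());   // NULL, plus the notice above
    ?>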
PHP equivalent of Perl's 'use strict' (to require variables to be initialized before use)
Python's convention is that variables are created by first assignment, and trying to read their value before one has been assigned raises an exception. PHP by contrast implicitly creates a variable when it is read, with a null value. This means it is easy to do this in PHP:
function mymodule_important_calculation() {
    $result = /* ... long and complex calculation ... */;
    return $resukt;
}
This function always returns null, and if null is a valid value for the function then the bug might go undetected for some time. The Python equivalent would complain that the variable resukt is being used before it is assigned. So... is there a way to configure PHP to be stricter with variable assignments?
[ "PHP doesn't do much forward checking of things at parse time. \nThe best you can do is crank up the warning level to report your mistakes, but by the time you get an E_NOTICE, its too late, and its not possible to force E_NOTICES to occur in advance yet.\nA lot of people are toting the \"error_reporting E_STRICT\" flag, but its still retroactive warning, and won't protect you from bad code mistakes like you posted.\nThis gem turned up on the php-dev mailing-list this week and I think its just the tool you want. Its more a lint-checker, but it adds scope to the current lint checking PHP does. \nPHP-Initialized Google Project\nThere's the hope that with a bit of attention we can get this behaviour implemented in PHP itself. So put your 2-cents on the PHP mailing list / bug system / feature requests and see if we can encourage its integration. \n", "There is no way to make it fail as far as I know, but with E_NOTICE in error_reporting settings you can make it throw a warning (well, a notice :-) But still a string you can search for ).\n", "Check out error reporting, http://php.net/manual/en/function.error-reporting.php\nWhat you want is probably E_STRICT. Just bare in mind that PHP has no namespaces, and error reporting becomes global. Kind of sucks to be you if you use a 3rd party library from developers that did not have error reporting switched on.\n", "I'm pretty sure that it generates an error if the variable wasn't previously declared. If your installation isn't showing such errors, check the error_reporting() level in your php.ini file.\n", "You can try to play with the error reporting level as indicated here: http://us3.php.net/error_reporting but I'm not sure it mention the usage of non initiated variable, even with E_STRICT.\n", "There is something similar : in PHP you can change the error reporting level. It's a best practice to set it to maximum in a dev environnement. To do so :\nAdd in your PHP.ini: \nerror_reporting = E_ALL\n\nOr you can just add this at the top of the file your are working on :\nerror_reporting(E_ALL);\n\nThis won't prevent your code from running but the lack of variable assignments will display a very clear error message in your browser.\n", "If you use the \"Analyze Code\" on files, or your project in Zend Studio it will warn you about any uninitialized variables (this actually helped find a ton of misspelled variables lurking in seldom used portions of the code just waiting to cause very difficult to detect errors). Perhaps someone could add that functionality in the PHP lint function (php -l), which currently only checks for syntax errors. \n" ]
[ 10, 4, 1, 0, 0, 0, 0 ]
[]
[]
[ "php" ]
stackoverflow_0000091699_php.txt
Q: How do I attach an .mdf to sql2005? Running sp_attach_single_file_db gives this error: The log scan number (10913:125:2) passed to log scan in database 'myDB' is not valid Isn't it supposed to re-create the log file? How else would I be able to attach/repair that .mdf file? A: It depends what mode your database was in when it was detached, it's possible there are uncommitted/unwritten transactions in that log file that are needed to attach the database, otherwise there would be data loss. Do you know what recovery mode you were in when it was detached? A: It worked, when I installed another one (with .ldf log file) then the one in question, then detached the first one again. Don't ask me why.
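For reference, the SQL Server 2005 statement for attaching a lone .mdf and rebuilding its log is roughly the following (the file path is an assumption); it only succeeds if the database was cleanly shut down, which matches the symptom described above:

    -- attach the data file and rebuild a fresh log file
    CREATE DATABASE myDB
        ON (FILENAME = 'C:\Data\myDB.mdf')
        FOR ATTACH_REBUILD_LOG;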
How do I attach an .mdf to sql2005?
Running sp_attach_single_file_db gives this error: The log scan number (10913:125:2) passed to log scan in database 'myDB' is not valid Isn't it supposed to re-create the log file? How else would I be able to attach/repair that .mdf file?
[ "It depends what mode your database was in when it was detached, it's possible there are uncommitted/unwritten transactions in that log file that are needed to attach the database, otherwise there would be data loss. Do you know what recovery mode you were in when it was detached?\n", "It worked, when I installed another one (with .ldf log file) then the one in question, then detached the first one again. Don't ask me why.\n" ]
[ 1, 1 ]
[]
[]
[ "sql_server_2005" ]
stackoverflow_0000094866_sql_server_2005.txt
Q: MySQL Trigger & Stored Procedure Replication
OK, I'm running a setup with a single master and a number of slaves. All writes go through the master and are replicated down to the slaves, which are used strictly for reads.
Now I have a stored procedure (not function) which is called by a trigger on an insert. According to the MySQL docs, for replication, triggers log the call to the trigger, while stored procedures actually log the result of the stored procedure.
So my question is, when my trigger gets fired, will it replicate both the trigger and the results of the procedure that the trigger calls (resulting in the procedure effectively being run twice)? Or will it simply replicate the trigger and have the slaves re-run the stored procedure on their own?
Thanks
A: In MySQL 5.0 (and MySQL 5.1 with statement based binary logging), only the calling query is logged, so in your case, the INSERT would be logged.
On the slave, the INSERT will be executed and then the trigger will be re-run on the slave. So the trigger needs to exist on the slave, and assuming it does, then it will be executed in exactly the same way as the master.
In MySQL 5.1, there is row-based binary logging, which will log only the rows being changed, so the trigger would not be re-fired on the slave, but all rows that changed would still be propagated.
A: In addition to Harrison's excellent answer:
Assuming the databases are in sync (schema, data, same version) to start with, it should just work
If it doesn't, then it may be that you're using something non deterministic in your queries or trigger. Fix that.
Regardless of how you use replication, you need to have monitoring to check that the slaves are always in sync. Without any monitoring, they will become out of sync (subtly) and you won't notice. MySQL has no automatic built-in feature for checking this or fixing it.
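If you are on MySQL 5.1 and want the row-based behaviour described above, the switch is roughly this sketch (exact variable availability depends on the 5.1 build):

    -- see how statements are currently being logged
    SHOW VARIABLES LIKE 'binlog_format';
    -- log changed rows instead of statements, so triggers are not re-fired on slaves
    SET GLOBAL binlog_format = 'ROW';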
MySQL Trigger & Stored Procedure Replication
OK, I'm running a setup with a single master and a number of slaves. All writes go through the master and are replicated down to the slaves, which are used strictly for reads. Now I have a stored procedure (not function) which is called by a trigger on an insert. According to the MySQL docs, for replication, triggers log the call to the trigger, while stored procedures actually log the result of the stored procedure. So my question is, when my trigger gets fired, will it replicate both the trigger and the results of the procedure that the trigger calls (resulting in the procedure effectively being run twice)? Or will it simply replicate the trigger and have the slaves re-run the stored procedure on their own? Thanks
[ "In MySQL 5.0 (and MySQL 5.1 with statement based binary logging), only the calling query is logged, so in your case, the INSERT would be logged. \nOn the slave, the INSERT will be executed and then the trigger will be re-run on the slave. So the trigger needs to exist on the slave, and assuming it does, then it will be executed in exactly the same way as the master.\nIn MySQL 5.1, there is row-based binary logging, which will log only the rows being changed, so the trigger would not be re-fired on the slave, but all rows that changed would still be propagated.\n", "In addition to Harrison's excellent answer:\n\nAssuming the databases are in sync (schema, data, same version) to start with, it should just work\nIf it doesn't, then it may be that you're using something non deterministic in your queries or trigger. Fix that.\nRegardless of how you use replication, you need to have monitoring to check that the slaves are always in sync. Without any monitoring, they will become out of sync (subtly) and you won't notice. MySQL has no automatic built-in feature for checking this or fixing it.\n\n" ]
[ 6, 0 ]
[]
[]
[ "mysql", "stored_procedures", "triggers" ]
stackoverflow_0000093790_mysql_stored_procedures_triggers.txt
Q: When should I consider changing thread priority
I once was asked to increase thread priority to fix a problem. I refused, saying that changing it was dangerous and was not the root cause of the problem.
My question is, under what circumstances should I consider changing the priority of threads?
A: When you've made a list of the threads you're using and defined a priority order for them which makes sense in terms of the work they do.
If you nudge threads up here and there in order to bodge your way out of a problem, eventually they'll all be high priority and you're back where you started. Don't assume you can fix a race condition with prioritisation when really it needs locking, because chances are you've only fixed it in friendly conditions. There may still be cases where it can fail, such as when the lower-priority thread has undergone priority inheritance because another high-priority thread is waiting on another lock it's holding.
If you classify threads along the lines of "these threads fill the audio buffer", "these threads make my app responsive to system events", "these threads make my app responsive to the user", "these threads are getting on with some business and will report when they're good and ready", then the threads ought to be prioritised accordingly.
Finally, it depends on the OS. If thread priority is completely secondary to process priority, then it shouldn't be "dangerous" to prioritise threads: the only thing you can starve of CPU is yourself. But if your high-priority threads run in preference to the normal-priority threads of other, unrelated applications, then you have a broader responsibility. You should only be raising priorities of threads which do small amounts of urgent work. The definition of "small" depends what kind of device you're on - with a 3GHz multi-core processor you get away with a lot, but a mobile device might have pseudo real-time expectations that user-level apps can break.
Keeping the audio buffer serviced is the canonical example of when to be high priority, though, since small under-runs usually cause nasty crackling. Long downloads (or other slow I/O) are the canonical example of when to be low priority, since there's no urgency processing this chunk of data if the next one won't be along for ages anyway. If you're ever writing a device driver you'll need to make more complex decisions how to play nicely with others.
A: Not many. The only time I've ever had to change thread priorities in a positive direction was with a user interface thread. UIs must be extremely snappy in order for the app to feel right, so a lot of times it is best to prioritize painting threads higher than others. For example, the Swing Event Dispatch Thread runs at priority 6 by default (1 higher than the default).
I do push threads down in priority quite a bit. Again, this is usually to keep the UI responsive while some long-running background process does its thing. However, this also will sometimes apply to polling daemons and the like which I know that I don't want to be interfering with anything, regardless of how minimal the interference.
A: Our app uses a background thread to download data and we didn't want that interfering with the UI thread on single-core machines, so we deliberately prioritized that lower.
A: I think it depends on the direction you're looking at changing the priority.
Normally you shouldn't ever increase thread priority unless you have a very good reason.
Increasing thread priority can cause your app's thread to start taking away time from other applications, which probably isn't what the user wants. If your thread is using up a significant amount of CPU it can make the machine hard to use, as some standard UI threads may start to starve. I'd say the only times you should increase priority above normal is if the user explicitly told your app to do so, but even then you want to prevent "clueless" users from doing so. Maybe if your app doesn't use much CPU normally, but might have brief bursts of really really important activity then it could be OK to have an increased priority, as it wouldn't normally detract from the user's general experience. Decreasing priority is another matter. If your app is doing something that takes a LOT of CPU and runs for a long time, yet isn't critical, then lowering the priority can be good. By lowering the priority you allow the CPU to be used for other things when it's needed, which helps keep the system responding quickly. As long as the system is mostly idling other than your app you'll still get most of the CPU time, but won't take away from tasks that need it more than you. An example of this would be a thread that indexes the hard drive (think Google Desktop). A: I would say when your original design assumptions about the threads are no longer valid. Thread priority is mostly a design decision about what work is most important. So for some examples of when to reconsider: If you add a new feature that might require its own thread that becomes more important, then reconsider thread priorities. If some requirements change that force you to reconsider the priorities of the work you are doing, then reconsider. Or, if you do performance testing and realize that your "high priority work" as specified in your design does not get the required performance, then tweak priorities. Otherwise, it's often a hack.
When should I consider changing thread priority
I once was asked to increase thread priority to fix a problem. I refused, saying that changing it was dangerous and was not the root cause of the problem. My question is, under what circumstances should I consider changing priority of threads?
[ "When you've made a list of the threads you're using and defined a priority order for them which makes sense in terms of the work they do.\nIf you nudge threads up here and there in order to bodge your way out of a problem, eventually they'll all be high priority and you're back where you started. Don't assume you can fix a race condition with prioritisation when really it needs locking, because chances are you've only fixed it in friendly conditions. There may still be cases where it can fail, such as when the lower-priority thread has undergone priority inheritance because another high-priority thread is waiting on another lock it's holding.\nIf you classify threads along the lines of \"these threads fill the audio buffer\", \"these threads make my app responsive to system events\", \"these threads make my app responsive to the user\", \"these threads are getting on with some business and will report when they're good and ready\", then the threads ought to be prioritised accordingly.\nFinally, it depends on the OS. If thread priority is completely secondary to process priority, then it shouldn't be \"dangerous\" to prioritise threads: the only thing you can starve of CPU is yourself. But if your high-priority threads run in preference to the normal-priority threads of other, unrelated applications, then you have a broader responsibility. You should only be raising priorities of threads which do small amounts of urgent work. The definition of \"small\" depends what kind of device you're on - with a 3GHz multi-core processor you get away with a lot, but a mobile device might have pseudo real-time expectations that user-level apps can break.\nKeeping the audio buffer serviced is the canonical example of when to be high priority, though, since small under-runs usually cause nasty crackling. Long downloads (or other slow I/O) are the canonical example of when to be low priority, since there's no urgency processing this chunk of data if the next one won't be along for ages anyway. If you're ever writing a device driver you'll need to make more complex decisions how to play nicely with others.\n", "Not many. The only time I've ever had to change thread priorities in a positive direction was with a user interface thread. UIs must be extremely snappy in order for the app to feel right, so a lot of times it is best to prioritize painting threads higher than others. For example, the Swing Event Dispatch Thread runs at priority 6 by default (1 higher than the default).\nI do push threads down in priority quite a bit. Again, this is usually to keep the UI responsive while some long-running background process does its thing. However, this also will sometimes apply to polling daemons and the like which I know that I don't want to be interfering with anything, regardless of how minimal the interference.\n", "Our app uses a background thread to download data and we didn't want that interfering with the UI thread on single-core machines, so we deliberately prioritized that lower.\n", "I think it depends on the direction you're looking at changing the priority.\nNormally you shouldn't ever increase thread priority unless you have a very good reason. Increasing thread priority can cause your app's thread to start taking away time from other applications, which probably isn't what the user wants. 
If your thread is using up a significant amount of CPU it can make the machine hard to use, as some standard UI threads may start to starve.\nI'd say the only times you should increase priority above normal is if the user explicitly told your app to do so, but even then you want to prevent \"clueless\" users from doing so. Maybe if your app doesn't use much CPU normally, but might have brief bursts of really really important activity then it could be OK to have an increased priority, as it wouldn't normally detract from the user's general experience.\nDecreasing priority is another matter. If your app is doing something that takes a LOT of CPU and runs for a long time, yet isn't critical, then lowering the priority can be good. By lowering the priority you allow the CPU to be used for other things when it's needed, which helps keep the system responding quickly. As long as the system is mostly idling other than your app you'll still get most of the CPU time, but won't take away from tasks that need it more than you. An example of this would be a thread that indexes the hard drive (think Google Desktop).\n", "I would say when your original design assumptions about the threads are no longer valid. \nThread priority is mostly a design decision about what work is most important. So for some examples of when to reconsider: If you add a new feature that might require its own thread that becomes more important, then reconsider thread priorities. If some requirements change that force you to reconsider the priorities of the work you are doing, then reconsider. Or, if you do performance testing and realize that your \"high priority work\" as specified in your design does not get the required performance, then tweak priorities.\nOtherwise, it's often a hack.\n" ]
[ 6, 2, 1, 1, 0 ]
[]
[]
[ "language_agnostic", "multithreading" ]
stackoverflow_0000095649_language_agnostic_multithreading.txt
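A minimal Java sketch of the consensus above: drop the priority of long-running, non-critical work rather than raising anything else. Java is chosen arbitrarily (the question is language-agnostic) and the indexer is illustrative.

    // Background indexer that should yield the CPU to interactive threads.
    Thread indexer = new Thread(new Runnable() {
        public void run() {
            // long-running, non-critical work goes here
        }
    });
    indexer.setPriority(Thread.MIN_PRIORITY + 1); // well below NORM_PRIORITY (5)
    indexer.setDaemon(true);                      // don't keep the JVM alive for it
    indexer.start();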
Q: How to build a 64-bit .NET DLL, with 64-bit COM interop? I need to build a managed DLL, targeted for x64, and expose it via x64 COM. I need a walk through, good article, etc... Interop is fairly straightforward, but when you talk about x64 on both sides, I can't find anything. A: Take a look at this discussion. And this.
How to build a 64-bit .NET DLL, with 64-bit COM interop?
I need to build a managed DLL, targeted for x64, and expose it via x64 COM. I need a walk through, good article, etc... Interop is fairly straightforward, but when you talk about x64 on both sides, I can't find anything.
[ "Take a look at this discussion.\nAnd this.\n" ]
[ 2 ]
[]
[]
[ ".net", "64_bit", "activex", "interop" ]
stackoverflow_0000095628_.net_64_bit_activex_interop.txt
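Since the accepted answer is link-only, here is a hedged sketch of the usual recipe: mark the managed types COM-visible, compile with Platform Target set to x64, and register with the 64-bit regasm so the CLSIDs land in the 64-bit registry hive. The class, interface, and GUIDs are placeholders, the framework path assumes .NET 2.0, and the assembly should be strong-named for /codebase to register cleanly.

    using System;
    using System.Runtime.InteropServices;

    [ComVisible(true)]
    [Guid("22222222-2222-2222-2222-222222222222")] // placeholder GUID
    public interface ICalculator
    {
        int Add(int a, int b);
    }

    [ComVisible(true)]
    [Guid("11111111-1111-1111-1111-111111111111")] // placeholder GUID
    [ClassInterface(ClassInterfaceType.None)]      // expose only the interface
    public class Calculator : ICalculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    rem Register with the 64-bit tool (path is an assumption):
    %WINDIR%\Microsoft.NET\Framework64\v2.0.50727\regasm.exe MyLib.dll /codebase /tlb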
Q: What causes Firefox to make a GET request after submitting a form via the POST method? What causes Firefox to follow a POST request with a GET request when submitting a form via the POST method? The GET method is sent to the same url as the POST method but without the request parameters. If you change the form method to GET, it will result in two identical GET requests. A: This is a bug in Firefox 3. This happens when the response to the POST contains an image tag with an empty source attribute. eg <img src=""/> A: The URL POSTed to might be returning a Redirect -- that would cause a GET. This is commonly done so that the page can be refreshed without reposting. A: Probably there is some javascript involved. The form is submitted as a result of an onclick event in an anchor with: href="..." onclick="..form.submit()"
What causes Firefox to make a GET request after submitting a form via the POST method?
What causes Firefox to follow a POST request with a GET request when submitting a form via the POST method? The GET method is sent to the same url as the POST method but without the request parameters. If you change the form method to GET, it will result in two identical GET requests.
[ "This is a bug in Firefox 3. This happens when the response to the POST contains an image tag with an empty source attribute. eg <img src=\"\"/>\n", "The URL POSTed to might be returning a Redirect -- that would cause a GET. This is commonly done so that the page can be refreshed without reposting.\n", "Probably there is some javascript involved. The form is submitted as a result of an onclick event in an anchor with: href=\"...\" onclick=\"..form.submit()\"\n" ]
[ 3, 2, 0 ]
[]
[]
[ "firefox" ]
stackoverflow_0000095715_firefox.txt
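A minimal page that should reproduce the bug described in the first answer; the browser resolves the empty src against the page's own URL and issues an extra, parameterless GET for it:

    <!-- POSTing this form to itself shows a second GET in the server log. -->
    <form method="post" action="">
      <input type="text" name="q"/>
      <input type="submit"/>
    </form>
    <img src=""/> <!-- empty src resolves to the current URL -->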
Q: How to Implement a Redirect on All Requests (on certain conditions)? I want to set something up so that if an Account within my app is disabled, I want all requests to be redirected to a "disabled" message. I've set this up in my ApplicationController: class ApplicationController < ActionController::Base before_filter :check_account def check_account redirect_to :controller => "main", :action => "disabled" and return if !$account.active? end end Of course, this doesn't quite work as it goes into an infinite loop if the Account is not active. I was hoping to use something like: redirect_to :controller => "main", :action => "disabled" and return if !$account.active? && @controller.controller_name != "main" && @controller.action_name != "disabled" but I noticed that in Rails v2.1 (what I'm using), @controller is now controller and this doesn't seem to work in ApplicationController. What would be the best way to implement something like this? A: You have several options. If your action method "disabled" is uniquely named in the scope of the application, you can add an exception to the before_filter call, like this: before_filter :check_account, :except => :disabled If you want to check specifically for the controller and action in the filter, you should note that this code is already part of the controller object. You can refer to it as "self," like so: def check_account return if self.controller_name == "main" && self.action_name == "disabled" redirect_to :controller => "main", :action => "disabled" and return if !$account.active? end Finally, if you like, you can overwrite the filter method from within MainController.rb: def check_account return if action_name == "disabled" super end A: You could also use a skip_before_filter for the one controller/method you don't want to have the filter apply to. A: How about first getting rid of that global variable $account. You are basically setting yourself up for some serious bugs by using a global. Just use an instance variable instead @ or better yet create a method on ApplicationController called current_account which access the @current_account instance variable. A: If theres not too many overrides then just put the if in the redirect filter if action != disabled redirect() end
How to Implement a Redirect on All Requests (on certain conditions)?
I want to set something up so that if an Account within my app is disabled, I want all requests to be redirected to a "disabled" message. I've set this up in my ApplicationController: class ApplicationController < ActionController::Base before_filter :check_account def check_account redirect_to :controller => "main", :action => "disabled" and return if !$account.active? end end Of course, this doesn't quite work as it goes into an infinite loop if the Account is not active. I was hoping to use something like: redirect_to :controller => "main", :action => "disabled" and return if !$account.active? && @controller.controller_name != "main" && @controller.action_name != "disabled" but I noticed that in Rails v2.1 (what I'm using), @controller is now controller and this doesn't seem to work in ApplicationController. What would be the best way to implement something like this?
[ "You have several options.\nIf your action method \"disabled\" is uniquely named in the scope of the application, you can add an exception to the before_filter call, like this:\nbefore_filter :check_account, :except => :disabled\n\nIf you want to check specifically for the controller and action in the filter, you should note that this code is already part of the controller object. You can refer to it as \"self,\" like so:\n def check_account\n return if self.controller_name == \"main\" && self.action_name == \"disabled\"\n\n redirect_to :controller => \"main\", :action => \"disabled\" and return if !$account.active?\n end\n\nFinally, if you like, you can overwrite the filter method from within MainController.rb:\n def check_account\n return if action_name == \"disabled\"\n super\n end\n\n", "You could also use a skip_before_filter for the one controller/method you don't want to have the filter apply to.\n", "How about first getting rid of that global variable $account. You are basically setting yourself up for some serious bugs by using a global. Just use an instance variable instead @ or better yet create a method on ApplicationController called current_account which access the @current_account instance variable. \n", "If theres not too many overrides then just put the if in the redirect filter\nif action != disabled\n redirect()\nend\n" ]
[ 6, 3, 1, 0 ]
[]
[]
[ "redirect", "ruby", "ruby_on_rails" ]
stackoverflow_0000089543_redirect_ruby_ruby_on_rails.txt
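A sketch of the current_account suggestion from the third answer folded into the original filter; looking the account up via the session is an assumption:

    class ApplicationController < ActionController::Base
      before_filter :check_account

      private

      # Replaces the $account global; the session lookup is illustrative.
      def current_account
        @current_account ||= Account.find(session[:account_id])
      end

      def check_account
        return if controller_name == "main" && action_name == "disabled"
        redirect_to :controller => "main", :action => "disabled" unless current_account.active?
      end
    end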
Q: Append Subject Header in Outlook (VBA) Basically, we have a rule set up to run a script when a code word is detected in the body of an incoming message. The script will append the current subject header with a word in front. For example, Before: "Test Message", After: "Dept - Test Message". Any ideas? A: Or if you need an entire script: Do the Run a script with the MailItem as the parameter. Sub RewriteSubject(MyMail As MailItem) Dim mailId As String Dim outlookNS As Outlook.NameSpace Dim myMailItem As Outlook.MailItem mailId = MyMail.EntryID Set outlookNS = Application.GetNamespace("MAPI") Set myMailItem = outlookNS.GetItemFromID(mailId) ' Do any detection here With myMailItem .Subject = "Dept - " & .Subject .Save End With Set myMailItem = Nothing Set outlookNS = Nothing End Sub A: Not tested: mailItem.Subject = "Dept - " & mailItem.Subject mailItem.Save A: Sub AppendSubject(MyMail As MailItem) Dim strID As String Dim mailNS As Outlook.NameSpace Dim mailItem As Outlook.MailItem strID = MyMail.EntryID Set mailNS = Application.GetNamespace("MAPI") Set mailItem = mailNS.GetItemFromID(strID) mailItem.Subject = "Dept - " & mailItem.Subject mailItem.Save Set mailItem = Nothing Set mailNS = Nothing End Sub Are we missing anything? EDIT: Doh! You already answered our question with a full script... Thanks!
Append Subject Header in Outlook (VBA)
Basically, we have a rule set up to run a script when a code word is detected in the body of an incoming message. The script will append the current subject header with a word in front. For example, Before: "Test Message", After: "Dept - Test Message". Any ideas?
[ "Or if you need an entire script:\nDo the Run a script with the MailItem as the parameter.\nSub RewriteSubject(MyMail As MailItem)\n\n Dim mailId As String\n Dim outlookNS As Outlook.NameSpace\n Dim myMailItem As Outlook.MailItem\n\n mailId = MyMail.EntryID\n Set outlookNS = Application.GetNamespace(\"MAPI\")\n Set myMailItem = outlookNS.GetItemFromID(mailId)\n\n ' Do any detection here\n\n With myMailItem \n .Subject = \"Dept - \" & mailItem.Subject\n .Save\n End With\n\n Set myMailItem = Nothing\n Set outlookNS = Nothing\n\nEnd Sub\n\n", "Not tested:\nmailItem.Subject = \"Dept - \" & mailItem.Subject\nmailItem.Save \n\n", "Sub AppendSubject(MyMail As MailItem)\n Dim strID As String\n Dim mailNS As Outlook.NameSpace\n Dim mailItem As Outlook.MailItem\n\n strID = MyMail.EntryID\n Set mailNS = Application.GetNamespace(\"MAPI\")\n Set mailItem = mailNS.GetItemFromID(strID)\n mailItem.Subject = \"Dept - \" & mailItem.Subject\n mailItem.Save\n\n Set mailItem = Nothing\n Set mailNS = Nothing\nEnd Sub\n\nAre we missing anything? EDIT: Doh! You already answered our question with a full script... Thanks!\n" ]
[ 4, 0, 0 ]
[]
[]
[ "append", "outlook", "vba" ]
stackoverflow_0000095625_append_outlook_vba.txt
Q: How are you using the Machine.config, or are you? For ASP.Net application deployment what type of information (if any) are you storing in the machine.config? If you're not using it, how are you managing environment specific configuration settings that may change for each environment? I'm looking for some "best practices" and the benefits/pitfalls of each. We're about to deploy a brand new application to production in two months and I've got some latitude in these types of decisions. I want to make sure that I'm approaching things in the best way possible and attempting to avoid shooting myself in the foot at a later date. FYI We're using it (machine.config) currently for just the DB connection information and storing all other variables that might change in a config table in the database. A: We are considering using machine.config to add one key for the environment, and then have one section in the web.config which is exactly the same for all environments. This way we can do a "real" XCopy deployment. E.g. in the machine.config for every computer (local dev workstations, stage servers, build servers, production servers), we'll add the following: <appSettings> <add key="Environment" value="Staging"/> </appSettings> Then, any configuration element that is environment-specific gets the environment appended, like so: <connectionStrings> <add name="Customers.Staging" provider="..." connectionString="..."/> </connectionStrings> <appSettings> <add key="NTDomain.Staging" value="test.mydomain.com"/> </appSettings> One problem that we don't have a solution for is how to enable say tracing in web.config for debugging environment and not for live environment. Another problem is that the live connectionstring incl. username and password is now in your Source Control system. This is however not a problem for us. A: If you load balance your servers, you ABSOLUTELY have to make sure the machine key is the same on all the servers. Viewstate is supposed to be server agnostic, but it is not, so you'll get viewstate corruption errors if the machine key is not the same across servers. <machineKey validationKey='A130E240DF1C49E2764EF8A86CEDCBB11274E5298A130CA08B90EED016C0 14CEAE1D86344C29E67E99DF83347E43820050A2B9C9FC89E0574BF3394B6D0401A9' decryptionKey='2CC37FFA8D14925B9CBCC0E3B1506F35066FEF33FEB4ADC8' validation='SHA1'/> From: http://www.c-sharpcorner.com/UploadFile/gopenath/Page107182007032219AM/Page1.aspx PS sure you can enableViewStateMAC="false", but don't. A: We use machine.config on our production server to set/remove specific configurations that are important for production and we never want to forget to set them. These are the 2 most important: <system.web> <deployment retail="true" /> <healthMonitoring enabled="true" /> </system.web> A: I use machine.config for not just ASP.NET, but for overall config as well. I implemented a hash algorithm (Tiger) in C# and wanted it to be available via machine request. 
So, registered my assembly in the GAC and added the following to machine.config: <?xml version="1.0" encoding="UTF-8"?> <configuration> <mscorlib> <cryptographySettings> <cryptoNameMapping> <cryptoClasses> <cryptoClass Tiger192="Jcs.Tiger.Tiger192, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4"/> <cryptoClass Tiger160="Jcs.Tiger.Tiger160, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4"/> <cryptoClass Tiger128="Jcs.Tiger.Tiger128, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4"/> </cryptoClasses> <nameEntry name="Tiger" class="Tiger192"/> <nameEntry name="TigerFull" class="Tiger192"/> <nameEntry name="Tiger192" class="Tiger192"/> <nameEntry name="Tiger160" class="Tiger160"/> <nameEntry name="Tiger128" class="Tiger128"/> <nameEntry name="System.Security.Cryptography.HashAlgorithm" class="Tiger192"/> </cryptoNameMapping> <oidMap> <oidEntry OID="1.3.6.1.4.1.11591.12.2" name="Jcs.Tiger.Tiger192"/> </oidMap> </cryptographySettings> </mscorlib> </configuration> This allows my code to look like so: using (var h1 = HashAlgorithm.Create("Tiger192")) { ... } and there's no dependency on the Jcs.Tiger.dll assembly in my code at all, hard or soft.
How are you using the Machine.config, or are you?
For ASP.Net application deployment what type of information (if any) are you storing in the machine.config? If you're not using it, how are you managing environment specific configuration settings that may change for each environment? I'm looking for some "best practices" and the benefits/pitfalls of each. We're about to deploy a brand new application to production in two months and I've got some latitude in these types of decisions. I want to make sure that I'm approaching things in the best way possible and attempting to avoid shooting myself in the foot at a later date. FYI We're using it (machine.config) currently for just the DB connection information and storing all other variables that might change in a config table in the database.
[ "We are considering using machine.config to add one key for the environment, and then have one section in the web.config which is excactly the same for all environments. This way we can do a \"real\" XCopy deployment.\nE.g. in the machine.config for every computer (local dev workstations, stage servers, build servers, production servers), we'll add the following:\n<appSettings>\n <add key=\"Environment\" value=\"Staging\"/>\n</appSettings>\n\nThen, any configuration element that is environment-specific gets the environment appended, like so:\n<connectionStrings>\n <add name=\"Customers.Staging\" provider=\"...\" connectionString=\"...\"/>\n</connectionStrings>\n<appSettings>\n <add key=\"NTDomain.Staging\" value=\"test.mydomain.com\"/>\n</appSettings>\n\nOne problem that we don't have a solution for is how to enable say tracing in web.config for debugging environment and not for live environment.\nAnother problem is that the live connectionstring incl. username and password is now in your Source Control system. This is however not a problem for us.\n", "If you load balance your servers, you ABSOLUTELY have to make sure the machine key is the same on all the servers. Viewstate is supposed to be server agnostic, but it is not, so you'll get viewstate corruption errors if the machine key is not the same across servers.\n<machineKey validationKey='A130E240DF1C49E2764EF8A86CEDCBB11274E5298A130CA08B90EED016C0\n14CEAE1D86344C29E67E99DF83347E43820050A2B9C9FC89E0574BF3394B6D0401A9'\ndecryptionKey='2CC37FFA8D14925B9CBCC0E3B1506F35066FEF33FEB4ADC8' validation='SHA1'/>\n\nFrom: http://www.c-sharpcorner.com/UploadFile/gopenath/Page107182007032219AM/Page1.aspx\nPS sure you can enableViewStateMAC=\"false\", but don't.\n", "We use machine.config on our production server to set/remove specific configuration that are important for production and we never want to forget to set them.\nThese are the 2 most important:\n<system.web>\n <deployment retail=\"true\" />\n <healthMonitoring enabled=\"true\" />\n</system.web> \n\n", "I use machine.config for not just ASP.NET, but for overall config as well. I implemented a hash algorithm (Tiger) in C# and wanted it to be available via machine request. So, registered my assembly in the GAC and added the following to machine.config:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<configuration>\n <mscorlib>\n <cryptographySettings>\n <cryptoNameMapping>\n <cryptoClasses>\n <cryptoClass Tiger192=\"Jcs.Tiger.Tiger192, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4\"/>\n <cryptoClass Tiger160=\"Jcs.Tiger.Tiger160, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4\"/>\n <cryptoClass Tiger128=\"Jcs.Tiger.Tiger128, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4\"/>\n </cryptoClasses>\n <nameEntry name=\"Tiger\" class=\"Tiger192\"/>\n <nameEntry name=\"TigerFull\" class=\"Tiger192\"/>\n <nameEntry name=\"Tiger192\" class=\"Tiger192\"/>\n <nameEntry name=\"Tiger160\" class=\"Tiger160\"/>\n <nameEntry name=\"Tiger128\" class=\"Tiger128\"/>\n <nameEntry name=\"System.Security.Cryptography.HashAlgorithm\" class=\"Tiger192\"/>\n </cryptoNameMapping>\n <oidMap>\n <oidEntry OID=\"1.3.6.1.4.1.11591.12.2\" name=\"Jcs.Tiger.Tiger192\"/>\n </oidMap>\n </cryptographySettings>\n </mscorlib>\n</configuration>\n\nThis allows my code to look like so:\nusing (var h1 = HashAlgorithm.Create(\"Tiger192\"))\n{\n ...\n}\n\nand there's no dependency on the Jcs.Tiger.dll assembly in my code at all, hard or soft.\n" ]
[ 8, 8, 5, 2 ]
[]
[]
[ ".net", "asp.net", "configuration", "deployment" ]
stackoverflow_0000094053_.net_asp.net_configuration_deployment.txt
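A hedged C# sketch of consuming the environment-suffix convention from the first answer; the EnvConfig helper is invented for illustration:

    using System.Configuration;

    public static class EnvConfig
    {
        // "Environment" is the key the machine.config snippet above defines.
        private static readonly string Suffix =
            ConfigurationManager.AppSettings["Environment"]; // e.g. "Staging"

        public static string Setting(string key)
        {
            return ConfigurationManager.AppSettings[key + "." + Suffix];
        }

        public static string ConnectionString(string name)
        {
            return ConfigurationManager
                .ConnectionStrings[name + "." + Suffix].ConnectionString;
        }
    }

    // Usage: EnvConfig.ConnectionString("Customers") resolves "Customers.Staging".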
Q: Overriding a MIME type in Rails I want to override the JSON MIME type ("application/json") in Rails to ("text/x-json"). I tried to register the MIME type again in mime_types.rb but that didn't work. Any suggestions? Thanks. A: This should work (in an initializer, plugin, or some similar place): Mime.send(:remove_const, :JSON) Mime::Type.register "text/x-json", :json A: Try: render :json => var_containing_my_json, :content_type => 'text/x-json'
Overriding a MIME type in Rails
I want to override the JSON MIME type ("application/json") in Rails to ("text/x-json"). I tried to register the MIME type again in mime_types.rb but that didn't work. Any suggestions? Thanks.
[ "This should work (in an initializer, plugin, or some similar place):\nMime.send(:remove_const, :JSON)\nMime::Type.register \"text/x-json\", :json\n\n", "Try:\nrender :json => var_containing_my_json, :content_type => 'text/x-json'\n\n" ]
[ 11, 3 ]
[]
[]
[ "json", "mime", "mime_types", "ruby", "ruby_on_rails" ]
stackoverflow_0000095554_json_mime_mime_types_ruby_ruby_on_rails.txt
Q: How does jstl's sql tag work? I'm using the following code to query a database from my jsp, but I'd like to know more about what's happening behind the scenes. These are my two primary questions. Does the tag access the ResultSet directly, or is the query result being stored in a datastructure in memory? When is the connection closed? <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %> <sql:query var="query" dataSource="${ds}" sql="${listQuery}"></sql:query> <c:forEach var="row" items="${query.rows}" begin="0"> ${row.data } ${row.more_data } </c:forEach> Note: I've always been against running queries in the jsp, but my result set is too large to store in memory between my action and my jsp. Using this tag library looks like the easiest solution. A: Observations based on the source for org.apache.taglibs.standard.tag.common.sql.QueryTagSupport The taglib traverses through the ResultSet and puts all of the data in arrays, Maps, and Lists. So, everything is loaded into memory before you even start looping. The connection is opened when the query start tag is encountered (doStartTag method). The results are retrieved when the query end tag is encountered (doEndTag method). The connection is closed in the doFinally method. In a nutshell, it is absolutely awful. A: The key thing here is this: javax.servlet.jsp.jstl.sql.Result That's what JSTL uses as the result of a SQL Query. If you look at the interface, it has this method: public java.util.SortedMap[] getRows() c:forEach "knows" about javax.servlet.jsp.jstl.sql.Result, since Result isn't anything else that forEach knows about (Collections, arrays, iterators, etc). So, all of that implies that the SQL query will suck the entire result set in to RAM. If you moved your query in to the JSP because you didn't want to load the entire result set in to a collection, then it doesn't look like the SQL tag will solve that problem for you. In truth you should look up Value List Pattern. But a "simple" solution to your problem would be to create a custom Iterator that "knows" about your ResultSet. This one wraps a result set and closes everything if it encounters an exception or if the result runs its course (like it would in a forEach). Kind of a special purpose thing. 
public class ResultSetIterator implements Iterator { Connection con; Statement s; ResultSet rs; Object curObject; boolean closed; public ResultSetIterator(Connection con, Statement s, ResultSet rs) { this.con = con; this.s = s; this.rs = rs; closed = false; } public boolean hasNext() { advance(); return curObject != null; } public Object next() { advance(); if (curObject == null) { throw new NoSuchElementException(); } else { Object result = curObject; curObject = null; return result; } } public void remove() { throw new UnsupportedOperationException("Not supported yet."); } private void advance() { if (closed) { curObject = null; return; } if (curObject == null) { try { if (rs.next()) { curObject = bindObject(rs); } } catch (SQLException ex) { shutDown(); throw new RuntimeException(ex); } } if (curObject == null) { // Still no object, must be at the end of the result set shutDown(); } } protected Object bindObject(ResultSet rs) throws SQLException { // Bind result set row to an object, replace or override this method String name = rs.getString(1); return name; } public void shutDown() { closed = true; try { rs.close(); } catch (SQLException ex) { // Ignored } try { s.close(); } catch (SQLException ex) { // Ignored } try { con.close(); } catch (SQLException ex) { // Ignored } } } This is, naturally, untested. But since JSTL's forEach can work with an Iterator, it's the simplest object you could really pass to it. This will prevent you from loading the entire result set in to memory. (As an interesting aside, it's notable how almost, but not quite, completely unlike Iterator a ResultSet's behavior is.)
How does jstl's sql tag work?
I'm using the following code to query a database from my jsp, but I'd like to know more about what's happening behind the scenes. These are my two primary questions. Does the tag access the ResultSet directly, or is the query result being stored in a datastructure in memory? When is the connection closed? <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %> <sql:query var="query" dataSource="${ds}" sql="${listQuery}"></sql:query> <c:forEach var="row" items="${query.rows}" begin="0"> ${row.data } ${row.more_data } </c:forEach> Note: I've always been against running queries in the jsp, but my result set is too large to store in memory between my action and my jsp. Using this tag library looks like the easiest solution.
[ "Observations based on the source for org.apache.taglibs.standard.tag.common.sql.QueryTagSupport\nThe taglib traverses through the ResultSet and puts all of the data in arrays, Maps, and Lists. So, everything is loaded into memory before you even start looping.\nThe connection is opened when the query start tag is encountered (doStartTag method). The results are retrieved when the query end tag is encountered (doEndTag method). The connection is closed in the doFinally method.\nIt a nutshell, it is absolutely awful.\n", "The key thing here is this: javax.servlet.jsp.jstl.sql.Result\nThat's what JSTL uses as the result of a SQL Query. If you look at the interface, it has this method: \npublic java.util.SortedMap[] getRows()\nc:forEach \"knows\" about javax.servlet.jsp.jstl.sql.Result, since Result isn't anything else that forEach knows about (Collections, arrays, iterators, etc).\nSo, all of that implies that the SQL query will suck the entire result set in to RAM.\nIf you moved your query in to the JSP because you didn't want to load the entire result set in to a collection, then it doesn't look like the SQL tag will solve that problem for you.\nIn truth you should look up Value List Pattern.\nBut a \"simple\" solution to your problem would be to create a custom Iterator that \"knows\" about your ResultSet. This one wraps a result set and closes everything if it encounters an exception or if the result runs its course (like it would in a forEach). Kind of a special purpose thing.\n\npublic class ResultSetIterator implements Iterator {\nConnection con;\nStatement s;\nResultSet rs;\nObject curObject;\nboolean closed;\n\npublic ResultSetIterator(Connection con, Statement s, ResultSet rs) {\n this.con = con;\n this.s = s;\n this.rs = rs;\n closed = false;\n}\n\npublic boolean hasNext() {\n advance();\n return curObject != null;\n}\n\npublic Object next() {\n advance();\n if (curObject == null) {\n throw new NoSuchElementException();\n } else {\n Object result = curObject;\n curObject = null;\n return result;\n }\n}\n\npublic void remove() {\n throw new UnsupportedOperationException(\"Not supported yet.\");\n}\n\nprivate void advance() {\n if (closed) {\n curObject = null;\n return;\n }\n if (curObject == null) {\n try {\n if (rs.next()) {\n curObject = bindObject(rs);\n }\n } catch (SQLException ex) {\n shutDown();\n throw new RuntimeException(ex);\n }\n }\n if (curObject == null) {\n // Still no object, must be at the end of the result set\n shutDown();\n }\n}\n\nprotected Object bindObject(ResultSet rs) throws SQLException {\n // Bind result set row to an object, replace or override this method\n String name = rs.getString(1);\n return name;\n}\n\npublic void shutDown() {\n closed = true;\n try {\n rs.close();\n } catch (SQLException ex) {\n // Ignored\n }\n try {\n s.close();\n } catch (SQLException ex) {\n // Ignored\n }\n try {\n con.close();\n } catch (SQLException ex) {\n // Ignored\n }\n}\n\n}\n\nThis is, naturally, untested. But since JSTLs forEach can work with an Iterator, it's the simplest object you could really pass to it. This will prevent you from loading the entire result set in to memory. (As an interesting aside, it's notable how almost, but not quite, completely unlike Iterator a ResultSets behavior is.)\n" ]
[ 8, 1 ]
[]
[]
[ "java", "jsp", "jstl", "web_applications" ]
stackoverflow_0000095134_java_jsp_jstl_web_applications.txt
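A hedged sketch of wiring the ResultSetIterator from the second answer into a page: a servlet exposes it as a request attribute, and c:forEach, which accepts a plain java.util.Iterator, pulls rows one at a time. All names are illustrative.

    // In a servlet, before forwarding to the JSP:
    Connection con = dataSource.getConnection();
    Statement s = con.createStatement();
    ResultSet rs = s.executeQuery(listQuery);
    request.setAttribute("rows", new ResultSetIterator(con, s, rs));
    request.getRequestDispatcher("/list.jsp").forward(request, response);

    <%-- list.jsp: rows are fetched lazily, not preloaded into memory --%>
    <c:forEach var="row" items="${rows}">
      ${row}<br/>
    </c:forEach>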
Q: Do access modifiers affect reflection also? I always believed they did, but seeing some answers here makes me doubt... Can I access private fields/properties/methods from outside a class through reflection? A: Yes you can access private fields via reflection. This is how a lot of ORMs go about populating an object without going through your properties (which will invoke business logic you might not have intended to be run on an object load). Access modifiers are not a form of security! A: You do, however, need extra permissions for accessing private/protected/internal fields/properties/methods from outside a class through reflection. A: Yes you can, you just specify the access modifier in the BindingFlags when you access them. A: Yes you can: but you really should question yourself why you're going to :) There is actually only one case where it can make sense and this is a UnitTest.
Do access modifiers affect reflection also?
I always believed they did, but seeing some answers here makes me doubt... Can I access private fields/properties/methods from outside a class through reflection?
[ "Yes you can access private fields via reflection. This is how a lot of ORMs go about populating an object without going through your properties (which will invoke business logic you might not have intended to be run on an object load).\nAccess modifiers are not a form of security!\n", "You do, however, need extra permissions for accessing private/protected/internal fields/properties/methods from outside a class through reflection.\n", "Yes you can, you just specify the access modifier in the BindingFlags when you access them.\n", "Yes you can: but you really should questions yourself why you're going to :)\nThere is actually only one case, where it can make sense and this is a UnitTest.\n" ]
[ 5, 3, 2, 0 ]
[]
[]
[ ".net", "access_modifiers", "reflection" ]
stackoverflow_0000095974_.net_access_modifiers_reflection.txt
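A short C# 2.0 sketch of the BindingFlags usage the answers allude to; the Safe class is made up, and under partial trust this needs ReflectionPermission, per the second answer:

    using System;
    using System.Reflection;

    public class Safe
    {
        private string combination = "12-34-56";
    }

    public class Demo
    {
        public static void Main()
        {
            Safe safe = new Safe();
            // NonPublic | Instance is what lets reflection see past "private".
            FieldInfo field = typeof(Safe).GetField(
                "combination", BindingFlags.NonPublic | BindingFlags.Instance);
            Console.WriteLine(field.GetValue(safe)); // prints 12-34-56
        }
    }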
Q: How do you do paged lists in JavaServer Faces? I have a JSF application that I am converting over to use webservices instead of straight up database queries. There are some extremely long lists that before could be returned easily with a simple SQL query. I'd like to figure out how to implement the paging using JSF/web services. Is there a good design pattern for doing paged web services? If it matters, I'm currently using the Apache MyFaces reference implementation of JSF with the Tomahawk extensions (a set of JSF components created by the MyFaces development team prior to its donation to Apache). A: It depends on whether you want to do client-side or server-side paging. If server side, your web services will have to include a couple of additional parameters (e.g. "startFrom" and "pageSize") which will let you specify which 'page' of the data to retrieve. Your service will probably also need to return the total result size so you can generate a paging control. If you decide that's too much effort you can do client-side paging in your backing bean (or get a component to do it for you), however it's not recommended if you're talking about thousands of objects! A: I like Seam's Query objects: http://docs.jboss.com/seam/2.1.0.BETA1/reference/en-US/html_single/#d0e7527 They basically abstract all the SQL/JPA in a Seam component that JSF can easily use. If you don't want to use Seam and/or JPA you could implement a similar pattern. A: Trinidad has a table component that supports paging, which may help. It is not ideal, but works well enough with Seam, as described in Pete Muir's Backing Trinidad's dataTable with Seam blog post. If you don't find a JSF component you like, you'll need to write your own logic to set parameters for limit and offset in your EJB-QL (JPA) queries. A: We used the RichFaces library Datatable: http://livedemo.exadel.com/richfaces-demo/richfaces/dataTable.jsf?tab=usage It's quite simple, and if you're not using RichFaces already, it's really easy to integrate with MyFaces. A: If you are getting all the results back from the webservice at once and can't include pagination into the actual web service call, you can try setting the list of items to a property on a managed bean. Then you can hook that up to the "value" attribute on a Tomahawk dataTable: http://myfaces.apache.org/tomahawk-project/tomahawk/tagdoc/t_dataTable.html and then you can use a Tomahawk dataScroller to paginate over the list of items stored in that property. Here is the reference for that component, it works well with the dataTable component: http://myfaces.apache.org/tomahawk-project/tomahawk/tagdoc/t_dataScroller.html You can include this inside the header/footer facets of the dataTable or as a separate component (you will need to specify the id of the dataTable in the 'for' attribute of the dataScroller. There are other neat things you can do with the dataTable like sorting and toggling details for each row, but that can be implemented once you get the basic pagination working. Hope that helps!
How do you do paged lists in JavaServer Faces?
I have a JSF application that I am converting over to use webservices instead of straight up database queries. There are some extremely long lists that before could be returned easily with a simple SQL query. I'd like to figure out how to implement the paging using JSF/web services. Is there a good design pattern for doing paged web services? If it matters, I'm currently using the Apache MyFaces reference implementation of JSF with the Tomahawk extensions (a set of JSF components created by the MyFaces development team prior to its donation to Apache).
[ "It depends on whether you want to do client-side or server-side paging. If server side, your web services will have to include a couple of additional parameters (e.g. \"startFrom\" and \"pageSize\") which will let you specify which 'page' of the data to retrieve. Your service will probably also need to return the total result size so you can generate a paging control.\nIf you decide that's too much effort you can do client-side paging in your backing bean (or get a component to do it for you), however it's not recommended if you're talking about thousands of objects!\n", "I like Seam's Query objects: http://docs.jboss.com/seam/2.1.0.BETA1/reference/en-US/html_single/#d0e7527\nThey basically abstract all the SQL/JPA in a Seam component that JSF can easily use.\nIf you don't want to use Seam and/or JPA you could implement a similar pattern.\n", "Trinidad has a table component that supports paging, which may help. It is not ideal, but works well enough with Seam, as described in Pete Muir's Backing Trinidad's dataTable with Seam blog post.\nIf you don't find a JSF component you like, you'll need to write your own logic to set parameters for limit and offset in your EJB-QL (JPA) queries.\n", "We used the RichFaces library Datatable: http://livedemo.exadel.com/richfaces-demo/richfaces/dataTable.jsf?tab=usage\nIt's quite simple, and if you're not using RichFaces already, it's really easy to integrate with MyFaces.\n", "If you are getting all the results back from the webservice at once and can't include pagination into the actual web service call, you can try setting the list of items to a property on a managed bean. Then you can hook that up to the \"value\" attribute on a Tomahawk dataTable: \nhttp://myfaces.apache.org/tomahawk-project/tomahawk/tagdoc/t_dataTable.html\nand then you can use a Tomahawk dataScroller to paginate over the list of items stored in that property. Here is the reference for that component, it works well with the dataTable component:\nhttp://myfaces.apache.org/tomahawk-project/tomahawk/tagdoc/t_dataScroller.html\nYou can include this inside the header/footer facets of the dataTable or as a separate compoment (you will need to specify the id of the dataTable in the 'for' attribute of the dataScroller. \nThere's other neat things you can do with the dataTable like sorting and toggling details for each row, but that can be implemented once you get the basic pagination working. \nHope that helps!\n" ]
[ 3, 2, 2, 2, 1 ]
[]
[]
[ "java", "jsf", "paging", "web_services" ]
stackoverflow_0000069917_java_jsf_paging_web_services.txt
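A hedged sketch of the server-side contract the first answer describes, a slice plus a total count so the pager control can be drawn; all names are invented:

    import java.util.List;

    // What a paged web-service call could return.
    public class Page {
        private final List items;       // the requested slice
        private final int totalResults; // total matches, for the pager control

        public Page(List items, int totalResults) {
            this.items = items;
            this.totalResults = totalResults;
        }
        public List getItems() { return items; }
        public int getTotalResults() { return totalResults; }
    }

    // Service method shape (name and entity are invented):
    // Page findIncidents(String filter, int startFrom, int pageSize)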
Q: What is a variable's linkage and storage specifier? When someone talks about a variable's storage class specifier, what are they talking about? They also often talk about variable linkage in the same context, what is that? A: The storage class specifier controls the storage and the linkage of your variables. These are two concepts that are different. C specifies the following specifiers for variables: auto, extern, register, static. Storage The storage duration determines how long your variable will live in RAM. There are three types of storage duration: static, automatic and dynamic. static If your variable is declared at file scope, or with an extern or static specifier, it will have static storage. The variable will exist for as long as the program is executing. No execution time is spent to create these variables. automatic If the variable is declared in a function, but without the extern or static specifier, it has automatic storage. The variable will exist only while you are executing the function. Once you return, the variable no longer exists. Automatic storage is typically done on the stack. It is a very fast operation to create these variables (simply increment the stack pointer by the size). dynamic If you use malloc (or new in C++) you are using dynamic storage. This storage will exist until you call free (or delete). This is the most expensive way to create storage, as the system must manage allocation and deallocation dynamically. Linkage Linkage specifies who can see and reference the variable. There are three types of linkage: internal linkage, external linkage and no linkage. no linkage This variable is only visible where it was declared. Typically applies to variables declared in a function. internal linkage This variable will be visible to all the functions within the file (called a translation unit), but other files will not know it exists. external linkage The variable will be visible to other translation units. These are often thought of as "global variables". Here is a table describing the storage and linkage characteristics based on the specifiers Storage Class Function File Specifier Scope Scope ----------------------------------------------------- none automatic static no linkage external linkage extern static static external linkage external linkage static static static no linkage internal linkage auto automatic invalid no linkage register automatic invalid no linkage A: Variable storage classes or type specifiers (like volatile, auto and static) define how/where variables are saved during program execution. For example, variables defined in functions are usually saved on the stack, which means that they will be lost after the function returns. Using the "static" keyword, you can force the compiler to put the variable in the data segment in memory, making the variable's content persistent between calls to that function. The "register" keyword will cause the compiler to try as hard as possible to put the variable in a CPU register, useful for counters in loops etc. However, it's not guaranteed that it's actually in a register after all. Read more about type specifiers here.
What is a variable's linkage and storage specifier?
When someone talks about a variable's storage class specifier, what are they talking about? They also often talk about variable linkage in the same context, what is that?
[ "The storage class specifier controls the storage and the linkage of your variables. These are two concepts that are different.\nC specifies the following specifiers for variables: auto, extern, register, static.\nStorage\nThe storage duration determines how long your variable will live in ram.\nThere are three types of storage duration: static, automatic and dynamic.\nstatic\nIf your variable is declared at file scope, or with an extern or static specifier, it will have static storage. The variable will exist for as long as the program is executing. No execution time is spent to create these variables.\nautomatic\nIf the variable is declared in a function, but without the extern or static specifier, it has automatic storage. The variable will exist only while you are executing the function. Once you return, the variable no longer exist. Automatic storage is typically done on the stack. It is a very fast operation to create these variables (simply increment the stack pointer by the size).\ndynamic\nIf you use malloc (or new in C++) you are using dynamic storage. This storage will exist until you call free (or delete). This is the most expensive way to create storage, as the system must manage allocation and deallocation dynamically.\nLinkage\nLinkage specifies who can see and reference the variable. There are three types of linkage: internal linkage, external linkage and no linkage.\nno linkage\nThis variable is only visible where it was declared. Typically applies to variables declared in a function.\ninternal linkage\nThis variable will be visible to all the functions within the file (called a translation unit), but other files will not know it exists.\nexternal linkage\nThe variable will be visible to other translation units. These are often thought of as \"global variables\".\nHere is a table describing the storage and linkage characteristics based on the specifiers\n\n Storage Class Function File \n Specifier Scope Scope \n-----------------------------------------------------\n none automatic static \n no linkage external linkage\n\n extern static static\n external linkage external linkage\n\n static static static\n no linkage internal linkage\n\n auto automatic invalid\n no linkage\n\nregister automatic invalid\n no linkage\n\n", "Variable storage classes or type specifiers (like volatile, auto and static) define how/where variables are saved during program execution. For example, variables defined in functions are usually saved on the stack, which means that it will be lost after the function returns. Using the \"static\" keyword, you can force the compiler to put the variable in the data segment in memory, making the variables content persistent between calls to that function. The \"register\" keyword will cause the compiler to try as hard as possible to put the variable in a CPU register, useful for counters in loops etc. However, it's not guaranteed that it's actually in a register after all.\nRead more about type specifiers here.\n" ]
[ 32, 0 ]
[]
[]
[ "c", "c++" ]
stackoverflow_0000095890_c_c++.txt
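A small C file illustrating the rows of the table above:

    /* widget.c -- each declaration annotated with its storage and linkage */

    int visible_everywhere;       /* static storage, external linkage   */
    static int file_private;      /* static storage, internal linkage   */
    extern int defined_elsewhere; /* declaration only; external linkage */

    void count_calls(void)
    {
        static int calls = 0;     /* static storage, no linkage: persists  */
        int scratch = 0;          /* automatic storage, no linkage         */
        register int fast = 0;    /* automatic storage; register is a hint */
        calls++;
        scratch += fast;          /* scratch and fast vanish on return     */
    }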
Q: Why is my program asking for permission to run on Vista? I've just built a VS C++ 6.0 program using VS 2008. When I attempt to run or debug the application, Vista asks for permission. What is it about how the program is built that causes this? The program is being built and run from a subfolder of C:\Dev This response made no sense to me as a solution to the problem. A: Possibility 1: Your program is marked as needing admin rights in its manifest Possibility 2: Your program is called setup.exe or install.exe - such program names always cause administrator rights to be required For detailed explanation of those and other possibilities why you see this check Getting to Know User Account Control Technet article A: The MVP was talking about having your code and project run from your user folder for example c:\users\yourname\appdata or something under that path. Do not disable UAC to fix this problem otherwise your application will not run on another machine unless it too has UAC turned off. It is a very bad practice. Your application, in a perfect world, should request elevated permissions from the user. A: Thank you Suma. Your response is the best yet and helped me arrive at a solution. I have determined that the cause is explained by your first suggestion. Renaming the file to something not containing the word 'setup' did not help. Turned out I was mistaken. I have both VS 2005 and VS 2008 installed and when I tried opening the old .dsw file, it was 2005 that was launched and offered to upgrade the project. 2005 apparently created a manifest with only one line with the tag "assembly". Once I upgraded the project using VS 2008 a more extensive manifest file was created. I confirmed that the manifest is being embedded in my program by checking the Manifest Tool...Input and Output...Embed Manifest setting. This new manifest includes the following data: <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> A: If you're not an admin, then you probably don't have permission to execute programs in C:\Dev.
Why is my program asking for permission to run on Vista?
I've just built a VS C++ 6.0 program using VS 2008. When I attempt to run or debug the application, Vista asks for permission. What is it about how the program is built that causes this? The program is being built and run from a subfolder of C:\Dev This response made no sense to me as a solution to the problem.
[ "Possibility 1:\nYour program is marked as needing admin rights in its manifest\nPossibility 2:\nYour program is called setup.exe or install.exe - such program names always cause administrator rights to be required\nFor detailed explanation of those and other possibilities why you see this check Getting to Know User Account Control Technet article\n", "The MVP was talking about having your code and project run from your user folder for example c:\\users\\yourname\\appdata or something under that path.\nDo not disable UAC to fix this problem otherwise your application will not run on another machine unless it to has UAC turned off. It is a very bad practice. Your application, in a perfect world, should request elevated permissions from the user.\n", "Thank you Suma. You're response is the best yet and helped me arrive at a solution.\nI have determined that cause is explained by your first suggestion. Renaming the file to something not containing the word 'setup\" did not help. \nTurned out I was mistaken. I have both VS 2005 and VS 2008 installed and when I tried opening the old .dsw file, it was 2005 that was launched and offered to upgrade the project. 2005 apparently created a manifest with only one line with the tag \"assembly\". Once I upgraded the project using VS 2008 a more extensive manifest file was created. I confirmed that the manifest is being embedded in my program by checking the Manifest Tool...Input and Output...Embed Manifest setting. This new manifest includes the following data:\n<trustInfo xmlns=\"urn:schemas-microsoft-com:asm.v3\">\n <security>\n <requestedPrivileges>\n <requestedExecutionLevel level=\"asInvoker\" uiAccess=\"false\"></requestedExecutionLevel>\n </requestedPrivileges>\n </security> \n\n\n", "If you're not an admin, then you probably don't have permission to execute programs in C:\\Dev.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "visual_studio_2008", "windows_vista" ]
stackoverflow_0000095525_visual_studio_2008_windows_vista.txt
Q: How to evaluate Eclipse RCP for an in-house project? I have only a basic understanding of Eclipse RCP. I am about to start an in-house application for our technical support team, that will likely grow over time. The team is distributed across continents so I would like to be able to auto-update the application when new versions are made available. The application aims to capture knowledge from technical support incidents while making it easy to replay data fixes across clients. The things that make eclipse RCP look interesting are Eclipse Communication Framework (ECF) and Data Tools Platform (DTP). My constraints are: Small Team (basically just me for now :) Have to manage it as a side project until its usefulness is proven I am basically looking for insights from other developers who have worked with Eclipse RCP or who know a better alternative. A: The best way to evaluate RCP is to create a small project ... I started with the tutorial here: http://www.ibm.com/developerworks/edu/os-dw-os-ecl-rcpapp.html and gradually created a less trivial application. Probably the best single resource I've found is the "Eclipse Rich Client Platform" book (which I initially borrowed from the local university library. The book's web site is here: http://eclipsercp.org/book/. The only downside to RCP is the size of the distributed program, but the automatic software update feature makes this much less painful and, if you modularize the application using plugins, the user doesn't have to download the entire application to receive updates to one plugin.
How to evaluate Eclipse RCP for an in-house project?
I have only a basic understanding of Eclipse RCP. I am about to start an in-house application for our technical support team, that will likely grow over time. The team is distributed across continents so I would like to be able to auto-update the application when new versions are made available. The application aims to capture knowledge from technical support incidents while making it easy to replay data fixes across clients. The things that make eclipse RCP look interesting are Eclipse Communication Framework (ECF) and Data Tools Platform (DTP). My constraints are: Small Team (basically just me for now :) Have to manage it as a side project until its usefulness is proven I am basically looking for insights from other developers who have worked with Eclipse RCP or who know a better alternative.
[ "The best way to evaluate RCP is to create a small project ... I started with the tutorial here: http://www.ibm.com/developerworks/edu/os-dw-os-ecl-rcpapp.html and gradually created a less trivial application.\nProbably the best single resource I've found is the \"Eclipse Rich Client Platform\" book (which I initially borrowed from the local university library. The book's web site is here: http://eclipsercp.org/book/.\nThe only downside to RCP is the size of the distributed program, but the automatic software update feature makes this much less painful and, if you modularize the application using plugins, the user doesn't have to download the entire application to receive updates to one plugin.\n" ]
[ 1 ]
[]
[]
[ "eclipse_ecf", "eclipse_rcp" ]
stackoverflow_0000095221_eclipse_ecf_eclipse_rcp.txt
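For a first experiment, the entry point the RCP application wizard generates is only a few lines — a sketch, assuming a wizard-generated ApplicationWorkbenchAdvisor class (the class names here are the wizard defaults, not anything mandated by the platform):

import org.eclipse.equinox.app.IApplication;
import org.eclipse.equinox.app.IApplicationContext;
import org.eclipse.swt.widgets.Display;
import org.eclipse.ui.PlatformUI;

public class Application implements IApplication {
    public Object start(IApplicationContext context) throws Exception {
        Display display = PlatformUI.createDisplay();
        try {
            // The advisor wires in the perspective, menus, window size, etc.
            int code = PlatformUI.createAndRunWorkbench(display, new ApplicationWorkbenchAdvisor());
            return (code == PlatformUI.RETURN_RESTART) ? IApplication.EXIT_RESTART : IApplication.EXIT_OK;
        } finally {
            display.dispose();
        }
    }

    public void stop() {
        // Called when the workbench is asked to shut down from outside,
        // e.g. by the update/restart machinery.
    }
}

The EXIT_RESTART return value is what the platform's update mechanism uses to restart after installing new plugin versions — relevant to the auto-update requirement in the question.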
Q: What is the easiest way to know if a type param implements an interface in C# 2.0? For example, given a type param method I'm looking for something like the part in bold void MyMethod< T >() { if ( typeof(T).Implements( IMyInterface ) ) { //Do something else //Do something else } Answers using C# 3.0 are also welcome, but first drop the .NET 2.0 ones please ;) A: Type.IsAssignableFrom if(typeof(IMyInterface).IsAssignableFrom(typeof(T))) { // something } else { // something else } A: I think if (typeof(IMyInterface).IsAssignableFrom(typeof(T))) should also work: but I don't see an advantage... A: I've just tried using if( typeof(T).Equals(typeof(IMyInterface)) ) ... and it also works, but your answer seems more robust and was what I was looking for. Thanks!
What is the easiest way to know if a type param implements an interface in C# 2.0?
For example, given a type param method I'm looking for something like the part in bold void MyMethod< T >() { if ( typeof(T).Implements( IMyInterface ) ) { //Do something else //Do something else } Answers using C# 3.0 are also welcome, but first drop the .NET 2.0 ones please ;)
[ "Type.IsAssignableFrom\nif(typeof(IMyInterface).IsAssignableFrom(typeof(T)))\n{\n // something\n}\nelse\n{\n // something else\n}\n\n", "I think \nif (typeof (IMyInterFace).IsAssignableFrom(typeof(T))\n\nshould also work: but i don't see an advantage...\n", "Ï've just tried using \nif( typeof(T).Equals(typeof(IMyInterface) ) \n ...\n\nAnd also works, but your answer seems more robust and was what I was looking for. Thanks!\n" ]
[ 6, 1, 0 ]
[]
[]
[ "c#", "reflection", "types" ]
stackoverflow_0000096027_c#_reflection_types.txt
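One addition worth making: when the method only ever makes sense for types implementing the interface, the check can move to compile time with a generic constraint — a sketch, with IMyInterface standing in for the real interface:

// Compile-time version: callers simply cannot supply a T that
// does not implement IMyInterface.
void MyMethod<T>() where T : IMyInterface
{
    // ...
}

// Runtime version (works in C# 2.0), equivalent to the accepted answer:
void MyOtherMethod<T>()
{
    if (typeof(IMyInterface).IsAssignableFrom(typeof(T)))
    {
        // T implements the interface
    }
    else
    {
        // T does not
    }
}

The constraint only fits when every caller satisfies it; IsAssignableFrom remains the right tool when the method must branch on the answer, as in the question.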
Q: How can we share individual rules between .drl files in JBoss Rules? We are using JBoss Rules (a.k.a. Drools) and have several .drl files that each contain several rules. Is there a way to avoid duplication between files, so that we can define common rules that are available to more than one .drl file? Unfortunately, there does not seem to be any kind of include or module facility. A: There is no way of including rules from another .drl file from within a .drl file. You can however add two .drl files to the same ruleBase and they will work as if they were in the same file. PackageBuilder builder = new PackageBuilder(); builder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( "common.drl" ) ) ); builder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( "rules1.drl" ) ) ); RuleBase ruleBase = RuleBaseFactory.newRuleBase(); ruleBase.addPackage( builder.getPackage() );
How can we share individual rules between .drl files in JBoss Rules?
We are using JBoss Rules (a.k.a. Drools) and have several .drl files that each contain several rules. Is there a way to avoid duplication between files, so that we can define common rules that are available to more than one .drl file? Unfortunately, there does not seem to be any kind of include or module facility.
[ "There is no way of including rules from another .drl file from within a .drl file.\nYou can however add two .drl files to the same ruleBase and they will work as if they were in the same file.\nPackageBuilder builder = new PackageBuilder();\nbuilder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( \"common.drl\" ) ) );\nbuilder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( \"rules1.drl\" ) ) );\nRuleBase ruleBase = RuleBaseFactory.newRuleBase();\nruleBase.addPackage( builder.getPackage() );\n\n" ]
[ 2 ]
[]
[]
[ "drools", "jboss_rules", "modularization" ]
stackoverflow_0000091917_drools_jboss_rules_modularization.txt
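To round the answer out, a sketch of how the combined rule base might then be exercised — this assumes the Drools 4.x stateful-session API that matches the PackageBuilder code above, and MyFact is a placeholder fact class:

StatefulSession session = ruleBase.newStatefulSession();
try {
    // Rules from common.drl and rules1.drl now live in one rule base,
    // so rules from either file can match this fact.
    session.insert( new MyFact() );
    session.fireAllRules();
} finally {
    session.dispose();
}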
Q: Editing Autogenerated DBML file for WCF Service In our project we have a standard auto-generated designer.cs file, linked to a DBML file, that contains all our object classes that map onto our database tables. We want to pass these objects directly through a WCF Service and so they need decorating with the [DataContract] and [DataMember] attributes where appropriate. What is the best approach to doing this so the changes won't get wiped out when the designer.cs file is re-generated upon a change to the database schema or some other change? Partial classes are an option, but if the property I want to decorate with the DataMember attribute is already defined in the autogenerated designer.cs file then I can't add the same property definition to the partial class, as this means the property will have been defined twice. A: Setting the DBML serialization mode to unidirectional will decorate the classes and a number of the members with the required attributes; however, it will ignore some of the associations to avoid circular references that were a problem prior to SP1. If you want those too, check out my LINQ to SQL T4 template that provides full SP1 compatible DataContract attributes (uncomment the line data.SerializationMode = DataContractSP1 in the DataClasses.tt file) as well as letting you customize any other parts of the DBML to C#/VB.NET code generation process. A: The dbml files give partial classes, so you can create a new .cs file, define the partial class that you want to extend, and then decorate that with the attributes that you want. For instance if you have a generated data context that looks like public partial class MyDataContext : System.Data.Linq.DataContext { ... } you can define the following in a separate .cs file: [DataContract] public partial class MyDataContext { ... } This way you can extend the generated classes without worrying about them being overwritten when your dbml file is re-generated.
Editing Autogenerated DBML file for WCF Service
In our project we have a standard auto-generated designer.cs file, linked to a DBML file, that contains all our object classes that map onto our database tables. We want to pass these objects directly through a WCF Service and so they need decorating with the [DataContract] and [DataMember] attributes where appropriate. What is the best approach to doing this so the changes won't get wiped out when the designer.cs file is re-generated upon a change to the database scheme or some other change. Partial classes are an option, but if the property I want to decorate with the DataMember attribute is already defined in the autogenerated designer.cs file then I can't add the same property definition to the partial class as this means the property will have been defined twice.
[ "Setting the DBML serialization mode to unidirectional will decorate the classes and a number of the members with the required attributes however it will ignore some of the associations to avoid circular references that were a problem prior to SP1.\nIf you want those too check out my LINQ to SQL T4 template that provides full SP1 compatible DataContract attributes (uncomment the line data.SerializationMode = DataContractSP1 in the DataClasses.tt file) as well as letting you customize any other parts of the DBML to C#/VB.NET code generation process.\n", "The dbml files give partial classes, so you can create a new .cs file, define the partial class that you want to extend, and then decorate that with the attributes that you want. For instance if you have a generated data context that looks like\npublic partial class MyDataContext : System.Data.Linq.DataContext\n{\n...\n}\n\nyou can define the following in a separate .cs file:\n[DataContract]\npublic partial class MyDataContext\n{\n...\n}\n\nThis way you can extend the generated classes without worrying about them being overwritten when your dbml file is re-generated.\n" ]
[ 3, 0 ]
[]
[]
[ ".net", "dbml", "linq_to_sql", "wcf" ]
stackoverflow_0000093159_.net_dbml_linq_to_sql_wcf.txt
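When a generated member itself needs [DataMember] and the serialization mode can't be changed, one pattern that stays out of the generated file is a wrapper property in the partial class — a sketch with hypothetical Customer/Name names, not from the answers above:

[DataContract]
public partial class Customer
{
    // The generated Name property cannot be redeclared here, but a
    // wrapper can expose it to the serializer under the contract name.
    [DataMember(Name = "Name")]
    public string ContractName
    {
        get { return this.Name; }
        set { this.Name = value; }
    }
}

Because an opted-in [DataContract] type serializes only its [DataMember] members, the undecorated generated property is simply ignored, and the wrapper survives any re-generation of the designer.cs file.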
Q: FindNextFile fails on 64-bit Windows? using C++Builder 2007, the FindFirstFile and FindNextFile functions doesn't seem to be able to find some files on 64-bit versions of Vista and XP. My test application is 32-bit. If I use them to iterate through the folder C:\Windows\System32\Drivers they only find a handful of files although there are 185 when I issue a dir command in a command prompt. Using the same example code lists all files fine on a 32-bit version of XP. Here is a small example program: int main(int argc, char* argv[]) { HANDLE hFind; WIN32_FIND_DATA FindData; int ErrorCode; bool cont = true; cout << "FindFirst/Next demo." << endl << endl; hFind = FindFirstFile("*.*", &FindData); if(hFind == INVALID_HANDLE_VALUE) { ErrorCode = GetLastError(); if (ErrorCode == ERROR_FILE_NOT_FOUND) { cout << "There are no files matching that path/mask\n" << endl; } else { cout << "FindFirstFile() returned error code " << ErrorCode << endl; } cont = false; } else { cout << FindData.cFileName << endl; } if (cont) { while (FindNextFile(hFind, &FindData)) { cout << FindData.cFileName << endl; } ErrorCode = GetLastError(); if (ErrorCode == ERROR_NO_MORE_FILES) { cout << endl << "All files logged." << endl; } else { cout << "FindNextFile() returned error code " << ErrorCode << endl; } if (!FindClose(hFind)) { ErrorCode = GetLastError(); cout << "FindClose() returned error code " << ErrorCode << endl; } } return 0; } Running it in the C:\Windows\System32\Drivers folder on 64-bit XP returns this: C:\WINDOWS\system32\drivers>t:\Project1.exe FindFirst/Next demo. . .. AsIO.sys ASUSHWIO.SYS hfile.txt raspti.zip stcp2v30.sys truecrypt.sys All files logged. A dir command on the same system returns this: C:\WINDOWS\system32\drivers>dir/p Volume in drive C has no label. Volume Serial Number is E8E1-0F1E Directory of C:\WINDOWS\system32\drivers 16-09-2008 23:12 <DIR> . 16-09-2008 23:12 <DIR> .. 17-02-2007 00:02 80.384 1394bus.sys 16-09-2008 23:12 9.453 a.txt 17-02-2007 00:02 322.560 acpi.sys 29-03-2006 14:00 18.432 acpiec.sys 24-03-2005 17:11 188.928 aec.sys 21-06-2008 15:07 291.840 afd.sys 29-03-2006 14:00 51.712 amdk8.sys 17-02-2007 00:03 111.104 arp1394.sys 08-05-2006 20:19 8.192 ASACPI.sys 29-03-2006 14:00 25.088 asyncmac.sys 17-02-2007 00:03 150.016 atapi.sys 17-02-2007 00:03 106.496 atmarpc.sys 29-03-2006 14:00 57.344 atmepvc.sys 17-02-2007 00:03 91.648 atmlane.sys 17-02-2007 00:03 569.856 atmuni.sys 24-03-2005 19:12 5.632 audstub.sys 29-03-2006 14:00 6.144 beep.sys Press any key to continue . . . etc. I'm puzzled. What is the reason for this? Brian A: Is there redirection going on? See the remarks on Wow64DisableWow64FsRedirection http://msdn.microsoft.com/en-gb/library/aa365743.aspx A: I found this on MSDN: If you are writing a 32-bit application to list all the files in a directory and the application may be run on a 64-bit computer, you should call the Wow64DisableWow64FsRedirectionfunction before calling FindFirstFile and call Wow64RevertWow64FsRedirection after the last call to FindNextFile. For more information, see File System Redirector. 
Here's the link I'll have to update my code because of this :-) A: Got it: http://msdn.microsoft.com/en-gb/library/aa384187(VS.85).aspx When a 32-bit application reads from one of these folders on a 64-bit OS: %windir%\system32\catroot %windir%\system32\catroot2 %windir%\system32\drivers\etc %windir%\system32\logfiles %windir%\system32\spool Windows actually lists the content of: %windir%\SysWOW64\catroot %windir%\SysWOW64\catroot2 %windir%\SysWOW64\drivers\etc %windir%\SysWOW64\logfiles %windir%\SysWOW64\spool Thanks for your input Kris, that helped me find out what is going on. EDIT: Thank you Ludvig too :-) A: Are you sure it is looking in the same directory as the dir command? They don't seem to have any files in common. Also, this isn't the issue, but the correct wild card for "all files" is * *.* means "all files with at least one . in the name" A: Are there any warnings when you compile? Have you turned ALL warnings on for this particular test (since it is not working)? Make sure first to solve the warnings. A: There are no problems with the example code. I have another application that fails too, written in Delphi. I think I found the answer based on Kris' answer about redirection: http://msdn.microsoft.com/en-gb/library/aa364418(VS.85).aspx
FindNextFile fails on 64-bit Windows?
using C++Builder 2007, the FindFirstFile and FindNextFile functions doesn't seem to be able to find some files on 64-bit versions of Vista and XP. My test application is 32-bit. If I use them to iterate through the folder C:\Windows\System32\Drivers they only find a handful of files although there are 185 when I issue a dir command in a command prompt. Using the same example code lists all files fine on a 32-bit version of XP. Here is a small example program: int main(int argc, char* argv[]) { HANDLE hFind; WIN32_FIND_DATA FindData; int ErrorCode; bool cont = true; cout << "FindFirst/Next demo." << endl << endl; hFind = FindFirstFile("*.*", &FindData); if(hFind == INVALID_HANDLE_VALUE) { ErrorCode = GetLastError(); if (ErrorCode == ERROR_FILE_NOT_FOUND) { cout << "There are no files matching that path/mask\n" << endl; } else { cout << "FindFirstFile() returned error code " << ErrorCode << endl; } cont = false; } else { cout << FindData.cFileName << endl; } if (cont) { while (FindNextFile(hFind, &FindData)) { cout << FindData.cFileName << endl; } ErrorCode = GetLastError(); if (ErrorCode == ERROR_NO_MORE_FILES) { cout << endl << "All files logged." << endl; } else { cout << "FindNextFile() returned error code " << ErrorCode << endl; } if (!FindClose(hFind)) { ErrorCode = GetLastError(); cout << "FindClose() returned error code " << ErrorCode << endl; } } return 0; } Running it in the C:\Windows\System32\Drivers folder on 64-bit XP returns this: C:\WINDOWS\system32\drivers>t:\Project1.exe FindFirst/Next demo. . .. AsIO.sys ASUSHWIO.SYS hfile.txt raspti.zip stcp2v30.sys truecrypt.sys All files logged. A dir command on the same system returns this: C:\WINDOWS\system32\drivers>dir/p Volume in drive C has no label. Volume Serial Number is E8E1-0F1E Directory of C:\WINDOWS\system32\drivers 16-09-2008 23:12 <DIR> . 16-09-2008 23:12 <DIR> .. 17-02-2007 00:02 80.384 1394bus.sys 16-09-2008 23:12 9.453 a.txt 17-02-2007 00:02 322.560 acpi.sys 29-03-2006 14:00 18.432 acpiec.sys 24-03-2005 17:11 188.928 aec.sys 21-06-2008 15:07 291.840 afd.sys 29-03-2006 14:00 51.712 amdk8.sys 17-02-2007 00:03 111.104 arp1394.sys 08-05-2006 20:19 8.192 ASACPI.sys 29-03-2006 14:00 25.088 asyncmac.sys 17-02-2007 00:03 150.016 atapi.sys 17-02-2007 00:03 106.496 atmarpc.sys 29-03-2006 14:00 57.344 atmepvc.sys 17-02-2007 00:03 91.648 atmlane.sys 17-02-2007 00:03 569.856 atmuni.sys 24-03-2005 19:12 5.632 audstub.sys 29-03-2006 14:00 6.144 beep.sys Press any key to continue . . . etc. I'm puzzled. What is the reason for this? Brian
[ "Is there redirection going on? See the remarks on Wow64DisableWow64FsRedirection http://msdn.microsoft.com/en-gb/library/aa365743.aspx\n", "I found this on MSDN:\nIf you are writing a 32-bit application to list all the files in a directory and the application may be run on a 64-bit computer, you should call the Wow64DisableWow64FsRedirectionfunction before calling FindFirstFile and call Wow64RevertWow64FsRedirection after the last call to FindNextFile. For more information, see File System Redirector.\nHere's the link\nI'll have to update my code because of this :-)\n", "Got it:\nhttp://msdn.microsoft.com/en-gb/library/aa384187(VS.85).aspx\nWhen a 32-bit application reads from one of these folders on a 64-bit OS:\n%windir%\\system32\\catroot\n%windir%\\system32\\catroot2\n%windir%\\system32\\drivers\\etc\n%windir%\\system32\\logfiles\n%windir%\\system32\\spool \n\nWindows actually lists the content of:\n%windir%\\SysWOW64\\catroot\n%windir%\\SysWOW64\\catroot2\n%windir%\\SysWOW64\\drivers\\etc\n%windir%\\SysWOW64\\logfiles\n%windir%\\SysWOW64\\spool \n\nThanks for your input Kris, that helped me find out what is going on.\nEDIT: Thank you Ludvig too :-)\n", "Are you sure it is looking in the same directory as the dir command? They don't seem to have any files in common.\nAlso, this isn't the issue, but the correct wild card for \"all files\" is *\n*.* means \"all files with at least one . in the name\"\n", "Are there any warnings when you compile? \nHave you turned ALL warnings on for this particular test (since it is not working)?\nMake sure first to solve the warnings.\n", "There are no problems with the example code. I have another application that fails too, written in Delphi. I think I found the answer based on Kris' answer about redirection:\nhttp://msdn.microsoft.com/en-gb/library/aa364418(VS.85).aspx\n" ]
[ 9, 2, 1, 0, 0, 0 ]
[]
[]
[ "64_bit", "c++", "c++builder" ]
stackoverflow_0000095956_64_bit_c++_c++builder.txt
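The workaround the MSDN quote describes looks roughly like this in C++ — a sketch; on systems without WOW64 (e.g. 32-bit XP) the two Wow64* functions should be resolved with GetProcAddress rather than linked directly:

#include <windows.h>
#include <iostream>

int main()
{
    PVOID oldValue = NULL;
    BOOL disabled = Wow64DisableWow64FsRedirection(&oldValue);

    WIN32_FIND_DATAA fd;
    HANDLE hFind = FindFirstFileA("C:\\Windows\\System32\\drivers\\*", &fd);
    if (hFind != INVALID_HANDLE_VALUE)
    {
        do
        {
            std::cout << fd.cFileName << std::endl;
        } while (FindNextFileA(hFind, &fd));
        FindClose(hFind);
    }

    if (disabled)
        Wow64RevertWow64FsRedirection(oldValue); // restore promptly; the setting is per-thread
    return 0;
}

With redirection disabled, the 32-bit process enumerates the real system32\drivers instead of the SysWOW64 shadow folders listed in the accepted answer.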
Q: OSX Security Framework NameAndPassword sample application I am investigating security plugins using SFAuthorizationPluginView under Mac OS X and as a first step am looking at the NameAndPassword sample application. The app builds OK but I cannot get it to authenticate. So does anyone have any experience with SFAuthorizationPluginView, or any other examples? A: Does Debugging An Authorization Plug-In With Xcode help?
OSX Security Framework NameAndPassword sample application
I am investigating security plugins using SFAuthorizationPluginView under Mac OS X and as a first step am looking at the NameAndPassword sample application. The app builds OK but I cannot get it to authenticate. So does anyone have any experience with SFAuthorizationPluginView, or any other examples?
[ "Does Debugging An Authorization Plug-In With Xcode help?\n" ]
[ 3 ]
[]
[]
[ "cocoa", "macos", "sfauthorizationpluginview" ]
stackoverflow_0000093387_cocoa_macos_sfauthorizationpluginview.txt
Q: What's the best way to send a file over a network using C#? Can anyone point me to a tutorial on the best way to open a connection from client to server, read in a binary file and send its contents reliably across the network connection? Even better, is there an open source library that already does this that I could refer to? A: You should look into binary serialization and sending it over a TCP socket. Good explanation on different types of serialization: http://www.dotnetspider.com/resources/408-XML-serialization-Binary-serialization.aspx Good primer on TCP Client/Server in C#: http://www.codeproject.com/KB/IP/tcpclientserver.aspx A: This depends what you mean by network - if you're copying on a local network you can just use the file copy operations inside System.IO. If you're wanting to send to remote servers I do this using web services. I compress byte arrays and send them over and decompress on the remote side. The byte array is super easy to write back to disk using streams. I know some people prefer base 64 strings instead of the byte[]. not sure if it matters. A: I wouldn't use HTTP or FTP, for a single file it's too much overhead and too much to code, especially having a simple TCP server almost already made for you in C#. A: Sockets may be the best route if you're just having to do it over the network. If you use TCP, you get the reliability of communication but take an impact on speed. If you need higher performance, you could try using UDP instead. But the downside to UDP is that packet delivery and order is not guaranteed, so you would need to write all that plumbing yourself. If you are needing to transfer files over the web itself (programatically, and if you can't use FTP), then a web service approach via MTOM might fit your needs. If you are building on top of Windows Server 2003 R2, Windows Vista, or Windows Server 2008 and doing internal network transfers, another option is to leverage the new Remote Differential Compression feature. This not only does a really good job at compressing a file to minimize network traffic, but is also used directly by DFS replication. Downside (as a .NET developer), it's a COM+ technology. A: How about using HTTP or FTP? They were sort of made for this. Alex A: Depending upon where you are sending the file to, you might want to take a look at WebClient.UploadFileAsync and WebClient.UploadFile.
What's the best way to send a file over a network using C#?
Can anyone point me to a tutorial on the best way to open a connection from client to server, read in a binary file and send its contents reliably across the network connection? Even better, is there an open source library that already does this that I could refer to?
[ "You should look into binary serialization and sending it over a TCP socket.\nGood explanation on different types of serialization:\nhttp://www.dotnetspider.com/resources/408-XML-serialization-Binary-serialization.aspx\nGood primer on TCP Client/Server in C#:\nhttp://www.codeproject.com/KB/IP/tcpclientserver.aspx\n", "This depends what you mean by network - if you're copying on a local network you can just use the file copy operations inside System.IO. If you're wanting to send to remote servers I do this using web services. I compress byte arrays and send them over and decompress on the remote side. The byte array is super easy to write back to disk using streams.\nI know some people prefer base 64 strings instead of the byte[]. not sure if it matters.\n", "I wouldn't use HTTP or FTP, for a single file it's too much overhead and too much to code, especially having a simple TCP server almost already made for you in C#.\n", "Sockets may be the best route if you're just having to do it over the network. If you use TCP, you get the reliability of communication but take an impact on speed. If you need higher performance, you could try using UDP instead. But the downside to UDP is that packet delivery and order is not guaranteed, so you would need to write all that plumbing yourself.\nIf you are needing to transfer files over the web itself (programatically, and if you can't use FTP), then a web service approach via MTOM might fit your needs.\nIf you are building on top of Windows Server 2003 R2, Windows Vista, or Windows Server 2008 and doing internal network transfers, another option is to leverage the new Remote Differential Compression feature. This not only does a really good job at compressing a file to minimize network traffic, but is also used directly by DFS replication. Downside (as a .NET developer), it's a COM+ technology.\n", "How about using HTTP or FTP? They were sort of made for this.\nAlex\n", "Depending upon where you are sending the file to, you might want to take a look at WebClient.UploadFileAsync and WebClient.UploadFile.\n" ]
[ 6, 2, 1, 1, 0, 0 ]
[]
[]
[ ".net", "c#", "data_transfer", "ftp", "tcp" ]
stackoverflow_0000095235_.net_c#_data_transfer_ftp_tcp.txt
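To make the socket suggestions concrete, a minimal TCP sender — a sketch only (host, port and path are placeholders, and there is no framing, so the receiver is assumed to read until the sender closes the connection):

using System.IO;
using System.Net.Sockets;

static void SendFile(string host, int port, string path)
{
    using (TcpClient client = new TcpClient(host, port))
    using (NetworkStream net = client.GetStream())
    using (FileStream file = File.OpenRead(path))
    {
        // Stream the file in chunks; works on .NET 2.0 (no Stream.CopyTo).
        byte[] buffer = new byte[8192];
        int read;
        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            net.Write(buffer, 0, read);
        }
    }
}

Production code would add a length prefix or other framing, a checksum, and timeout/retry handling — the reliability concerns raised in the answers above.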
Q: How to find Commit Charge programmatically? I'm looking for the total commit charge. A: public static long GetCommitCharge() { var p = new System.Diagnostics.PerformanceCounter("Memory", "Committed Bytes"); return p.RawValue; } A: Here's an example using WMI: strComputer = "." Set objSWbemServices = GetObject("winmgmts:\\" & strComputer) Set colSWbemObjectSet = _ objSWbemServices.InstancesOf("Win32_LogicalMemoryConfiguration") For Each objSWbemObject In colSWbemObjectSet Wscript.Echo "Total Physical Memory (kb): " & _ objSWbemObject.TotalPhysicalMemory WScript.Echo "Total Virtual Memory (kb): " & _ objSWbemObject.TotalVirtualMemory WScript.Echo "Total Page File Space (kb): " & _ objSWbemObject.TotalPageFileSpace Next If you run this script under CScript, you should see the number of kilobytes of physical memory installed on the target computer displayed in the command window. The following is typical output from the script: Total Physical Memory (kb): 261676 Edit: Included total page file size property also taken from: http://www.microsoft.com/technet/scriptcenter/guide/sas_wmi_dieu.mspx?mfr=true
How to find Commit Charge programmatically?
I'm looking for the total commit charge.
[ " public static long GetCommitCharge()\n {\n var p = new System.Diagnostics.PerformanceCounter(\"Memory\", \"Committed Bytes\");\n return p.RawValue;\n }\n\n", "Here's an example using WMI:\nstrComputer = \".\"\n\nSet objSWbemServices = GetObject(\"winmgmts:\\\\\" & strComputer)\nSet colSWbemObjectSet = _\n objSWbemServices.InstancesOf(\"Win32_LogicalMemoryConfiguration\")\n\nFor Each objSWbemObject In colSWbemObjectSet\n Wscript.Echo \"Total Physical Memory (kb): \" & _\n objSWbemObject.TotalPhysicalMemory\n WScript.Echo \"Total Virtual Memory (kb): \" & _\n objSWbemObject.TotalVirtualMemory\n WScript.Echo \"Total Page File Space (kb): \" & _\n objSWbemObject.TotalPageFileSpace\nNext\n\nIf you run this script under CScript, you should see the number of kilobytes of physical memory installed on the target computer displayed in the command window. The following is typical output from the script:\nTotal Physical Memory (kb): 261676\nEdit: Included total page file size property also\ntaken from: http://www.microsoft.com/technet/scriptcenter/guide/sas_wmi_dieu.mspx?mfr=true\n" ]
[ 4, 1 ]
[]
[]
[ "c#" ]
stackoverflow_0000095850_c#.txt
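A small usage sketch: the "Committed Bytes" counter reports bytes, so callers will usually scale the value for display:

long bytes = GetCommitCharge();
Console.WriteLine("Commit charge: {0:N0} MB", bytes / (1024 * 1024));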
Q: PDB files for production app and the "Optimize code" flag When should I include PDB files for a production release? Should I use the Optimize code flag and how would that affect the information I get from an exception? If there is a noticeable performance benefit I would want to use the optimizations but if not I'd rather have accurate debugging info. What is typically done for a production app? A: When you want to see source filenames and line numbers in your stacktraces, generate PDBs using the pdb-only option. Optimization is separate from PDB generation, i.e. you can optimize and generate PDBs without a performance hit. From the C# Language Reference If you use /debug:full, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with /debug:full. We recommend /debug:pdbonly or no PDB for generating release code. A: To answer your first question, you only need to include PDBs for a production release if you need line numbers for your exception reports. To answer your second question, using the "Optimise" flag with PDBs means that any stack "collapse" will be reflected in the stack trace. I'm not sure whether the actual line number reported can be wrong - this needs more investigation. To answer your third question, you can have the best of both worlds with a rather neat trick. The major differences between the default debug build and default release build are that when doing a default release build, optimization is turned on and debug symbols are not emitted. So, in four steps: Change your release config to emit debug symbols. This has virtually no effect on the performance of your app, and is very useful if (when?) you need to debug a release build of your app. Compile using your new release build config, i.e. with debug symbols and with optimization. Note that 99% of code optimization is done by the JIT compiler, not the language compiler. Create a text file in your app's folder called xxxx.exe.ini (or dll or whatever), where xxxx is the name of your executable. This text file should initially look like: [.NET Framework Debugging Control] GenerateTrackingInfo=0 AllowOptimize=1 With these settings, your app runs at full speed. When you want to debug your app by turning on debug tracking and possibly turning off (CIL) code optimization, just use the following settings: [.NET Framework Debugging Control] GenerateTrackingInfo=1 AllowOptimize=0 EDIT According to cateye's comment, this can also work in a hosted environment such as ASP.NET. A: There is no need to include them in your distribution, but you should definitely be building them and keeping them. Otherwise debugging a crash dump is practically impossible. I would also turn on optimizations. Whilst it does make debugging more difficult the performance gains are usually very non-trivial depending on the nature of the application. We easily see over 10x performance on release vs debug builds for some algorithms.
PDB files for production app and the "Optimize code" flag
When should I include PDB files for a production release? Should I use the Optimize code flag and how would that affect the information I get from an exception? If there is a noticeable performance benefit I would want to use the optimizations but if not I'd rather have accurate debugging info. What is typically done for a production app?
[ "When you want to see source filenames and line numbers in your stacktraces, generate PDBs using the pdb-only option. Optimization is separate from PDB generation, i.e. you can optimize and generate PDBs without a performance hit.\nFrom the C# Language Reference\n\nIf you use /debug:full, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with /debug:full. We recommend /debug:pdbonly or no PDB for generating release code.\n\n", "To answer your first question, you only need to include PDBs for a production release if you need line numbers for your exception reports. \nTo answer your second question, using the \"Optimise\" flag with PDBs means that any stack \"collapse\" will be reflected in the stack trace. I'm not sure whether the actual line number reported can be wrong - this needs more investigation.\nTo answer your third question, you can have the best of both worlds with a rather neat trick. The major differences between the default debug build and default release build are that when doing a default release build, optimization is turned on and debug symbols are not emitted. So, in four steps:\n\nChange your release config to emit debug symbols. This has virtually no effect on the performance of your app, and is very useful if (when?) you need to debug a release build of your app. \nCompile using your new release build config, i.e. with debug symbols and with optimization. Note that 99% of code optimization is done by the JIT compiler, not the language compiler.\nCreate a text file in your app's folder called xxxx.exe.ini (or dll or whatever), where xxxx is the name of your executable. This text file should initially look like:\n[.NET Framework Debugging Control]\nGenerateTrackingInfo=0\nAllowOptimize=1\n\nWith these settings, your app runs at full speed. When you want to debug your app by turning on debug tracking and possibly turning off (CIL) code optimization, just use the following settings:\n[.NET Framework Debugging Control]\nGenerateTrackingInfo=1\nAllowOptimize=0 \n\n\nEDIT According to cateye's comment, this can also work in a hosted environment such as ASP.NET.\n", "There is no need to include them in your distribution, but you should definitely be building them and keeping them. Otherwise debugging a crash dump is practically impossible.\nI would also turn on optimizations. Whilst it does make debugging more difficult the performance gains are usually very non-trivial depending on the nature of the application. We easily see over 10x performance on release vs debug builds for some algorithms.\n" ]
[ 19, 16, 3 ]
[]
[]
[ "build_process", "c#", "visual_studio" ]
stackoverflow_0000041842_build_process_c#_visual_studio.txt
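For reference, the recommended release configuration corresponds to these C# compiler flags (a sketch; Visual Studio exposes the same settings as Optimize and DebugType=pdbonly in the project file):

csc /optimize+ /debug:pdbonly Program.cs

That produces fully JIT-optimized code plus a .pdb you archive for later crash-dump debugging, per the answers above.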
Q: open IE without toolbar or address bar from Windows VB Application Shell ("explorer.exe www.google.com") is how I'm currently opening my product's ad page after successful install. However, I think it would look much nicer if I could do it more like Avira does, or even a popup where there are no address bar, links, etc. Doing this via an in-browser link is easy enough <a href="http://page.com" onClick="javascript:window.open('http://page.com','windows','width=650,height=350,toolbar=no,menubar=no,scrollbars=yes,resizable=yes,location=no,directories=no,status=no'); return false;">Link text</a> But how would I go about adding this functionality in VB?
open IE without toolbar or address bar from Windows VB Application
Shell ("explorer.exe www.google.com") is how I'm currently opening my products ad page after successful install. However I think it would look much nicer if I could do it more like Avira does, or even a popup where there are no address bar links etc. Doing this via an inbrowser link is easy enough <a href="http://page.com" onClick="javascript:window.open('http://page.com','windows','width=650,height=350,toolbar=no,menubar=no,scrollbars=yes,resizable=yes,location=no,directories=no,status=no'); return false")">Link text</a> But how would I go about adding this functionality in VB?
[ "If you want it to look professional, you need to use an actual browser component. VB.NET comes with one. If you are using an older version of VB, you'd need to go third party. If you want to stay with a shell open, you would have to individually target the browser command-line and pass arguments to indicate that it should not have toolbars etc.\n", "Speaking as a user, I find castrated popup windows annoying and unproductive.\nSo my answer is: \"don't\". \n" ]
[ 1, 1 ]
[]
[]
[ "browser", "popup", "vb.net" ]
stackoverflow_0000096123_browser_popup_vb.net.txt
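If the browser-component route from the first answer is taken, a minimal chromeless popup in VB.NET might look like this — a sketch (form title, size and URL are placeholders; System.Windows.Forms assumed):

Dim popup As New Form()
popup.Text = "My Product"
popup.Width = 650
popup.Height = 350
popup.FormBorderStyle = FormBorderStyle.FixedDialog
popup.MaximizeBox = False
popup.MinimizeBox = False

Dim browser As New WebBrowser() ' System.Windows.Forms.WebBrowser
browser.Dock = DockStyle.Fill
popup.Controls.Add(browser)
browser.Navigate("http://page.com")

popup.ShowDialog()

This keeps the page inside your own window instead of trying to strip chrome off a real IE instance, while sidestepping the popup-blocker and usability complaints in the second answer.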
Q: Why would I use 2's complement to compare two doubles instead of comparing their differences against an epsilon value? Referenced here and here...Why would I use two's complement over an epsilon method? It seems like the epsilon method would be good enough for most cases. Update: I'm purely looking for a theoretical reason why you'd use one over the other. I've always used the epsilon method. Has anyone used the 2's complement comparison successfully? Why? Why Not? A: The second link you reference mentions an article that has quite a long description of the issue: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm but unless you are tweaking performance, I would stick with epsilon so people can debug your code A: In short, when comparing two floats with unknown origins, picking an epsilon that is valid is almost impossible. For example: What is a good epsilon when comparing distance in miles between Atlanta GA, Dallas TX and some place in Ohio? What is a good epsilon when comparing distance in miles between my left foot, my right foot and the computer under my desk? EDIT: Ok, I'm getting a fair number of people not understanding why you wouldn't know what your epsilon is. Back in the old days of lore, I wrote two programs that worked with NeverWinter Nights (a game made by BioWare). One of the programs took a binary model and converted it to ASCII. The other program took an ASCII model and compiled it into binary. One of the tests I wrote was to take all of BioWare's binary models, decompile them to ASCII and then back to binary. Then I compared my binary version with the original one from BioWare. One of the problems during the comparison was dealing with some of the slight variances in floating point values. So instead of coming up with a bunch of different EPSILONS for each type of floating point number (vertex, normal, etc), I wanted to use something such as this two's complement compare. Thus avoiding the whole multiple EPSILON issue. The same type of issue can apply to any type of software that processes 3rd party data and then needs to validate their results with the original. In these cases you might not even know what the floating point values represent, you just have to compare them. We ran into this issue with our industrial automation software. EDIT: LOL, this has been voted up and down by different people. I'll boil the problem down to this: given two arbitrary floating point numbers, how do you decide what epsilon to use? You can't. How can you compare 1e23 and 1.0001e23 with an epsilon and still compare 1e-23 and 5.2e-23 using the same epsilon? Sure, you can do some dynamic epsilon tricks, but that is the whole point to the integer compare (which does NOT require the integers be exact). The integer compare is able to compare two floats using an epsilon relative to the magnitude of the numbers. EDIT: Steve, let's look at what you said in the comments: "But you know what equality means to you... Hence, you should be able to find an appropriate epsilon". Turn this statement around to say: "If you know what equality means to you, then you should be able to find an appropriate epsilon." The whole point to what I am trying to say is that there are applications where we don't know what equality means in the absolute sense, thus we have to resort to a relative compare which is what the integer version is trying to do. A: The bits method might be faster. I say might because on modern (multicore, highly pipelined) processors it is often impossible to guess what is really faster. Code the simplest, most obviously correct implementation, then measure, then optimise. A: When it comes to speed, follow these rules: If you're not a very experienced developer, don't optimize. If you are an experienced developer, don't optimize yet. Do the easiest method. Alex A: Oskar's right. Don't screw with this unless you really, really need that performance. And you don't. If you were in the situation that did, you wouldn't have needed to ask the question -- you'd already know. If you think you do, then you don't. Your performance problems lie elsewhere. Just use the readable version. A: Using any method that compares bitwise will result in trouble when fractions are represented by approximations. All floating point numbers with fractions that are not denominated in powers of two (1/2, 1/4, 1/8, 1/65536, &c) are approximated. So, of course, are all irrational numbers. float third = 1.0f/3; float two=2.0; float another_two=third*6.0; if(two != another_two) printf ("Approximation!\n"); The only time comparing bitwise would work is when you derive the floating point numbers exactly the same way or they are exact representations (whole numbers, fraction powers of two). Even then, there can be multiple representations of some numbers, though I have never seen this in a working system.
Why would I use 2's complement to compare two doubles instead of comparing their differences against an epsilon value?
Referenced here and here...Why would I use two's complement over an epsilon method? It seems like the epsilon method would be good enough for most cases. Update: I'm purely looking for a theoretical reason why you'd use one over the other. I've always used the epsilon method. Has anyone used the 2's complement comparison successfully? Why? Why Not?
[ "the second link you reference mentions an article that has quite a long description of the issue:\nhttp://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm\nbut unless you are tweaking performance I would stick with epsilon so people can debug your code\n", "In short, when comparing two floats with unknown origins, picking an epsilon that is valid is almost impossible.\nFor example:\nWhat is a good epsilon when comparing distance in miles between Atlanta GA, Dallas TX and some place in Ohio?\nWhat is a good epsilon when comparing distance in miles between my left foot, my right foot and the computer under my desk?\nEDIT:\nOk, I'm getting a fair number of people not understanding why you wouldn't know what your epsilon is.\nBack in the old days of lore, I wrote two programs that worked with NeverWinter Nights (a game made by BioWare). One of the programs took a binary model and converted it to ASCII. The other program took an ASCII model and compiled it into binary. One of the tests I wrote was to take all of BioWare's binary models, decompile them to ASCII and then back to binary. Then I compared my binary version with original one from BioWare. One of the problems during the comparison was dealing with some of the slight variances in floating point values. So instead of coming up with a bunch of different EPSILONS for each type of floating point number (vertex, normal, etc), I wanted to use something such as this twos compliment compare. Thus avoiding the whole multiple EPSILON issue.\nThe same type of issue can apply to any type of software that processes 3rd party data and then needs to validate their results with the original. In these cases you might not even know what the floating point values represent, you just have to compare them. We ran into this issue with our industrial automation software.\nEDIT:\nLOL, this has been voted up and down by different people. \nI'll boil the problem down to this, given two arbitrary floating point numbers, how do you decide what epsilon to use? You can't. \nHow can you compare 1e23 and 1.0001e23 with an epsilon and still compare 1e-23 and 5.2e-23 using the same epsilon? Sure, you can do some dynamic epsilon tricks, but that is the whole point to the integer compare (which does NOT require the integers be exact). \nThe integer compare is able to compare two floats using an epsilon relative to the magnitude of the numbers.\nEDIT\nSteve, lets look at what you said in the comments: \n\"But you know what equality means to you... Hence, you should be able to find an appropriate epsilon\".\nTurn this statement around to say:\n\"If you know what equality means to you, then you should be able to find an appropriate epsilon.\"\nThe whole point to what I am trying to say is that there are applications where we don't know what equality means in the absolute sense, thus we have to resort to a relative compare which is what the integer version is trying to do.\n", "The bits method might be faster. I say might because on modern (multicore, highly pipelined) processors it is often impossible to guess what is really faster.\nCode the simplest most obviously correct implementation, then measure, then optomise.\n", "When it comes to speed, follow these rules:\n\nIf you're not a very experienced developer, don't optimize.\nIf you are an experienced developer, don't optimize yet.\n\nDo the easiest method.\nAlex\n", "Oskar's right. Don't screw with this unless you really, really need that performance. \nAnd you don't. 
If you were in the situation that did, you wouldn't have needed to ask the question -- you'd already know. If you think you do, then you don't. Your performance problems lie elsewhere. Just use the readable version.\n", "Using any method that compares bitwise will result in trouble when fractions are represented by approximations. All floating point numbers with fractions that are not denominated in powers of two (1/2, 1/4, 1/8, 1/65536, &c) are approximated. So, of course, are all irrational numbers.\nfloat third = 1/3;\nfloat two=2.0;\nfloat another_two=third*6.0;\nif(two != another_two)\n print (\"Approximation!\\n\");\nThe only time comparing bitwise would work is when you derive the floating point numbers exactly the same way or they are exact representations (whole numbers, fraction powers of two). Even then, there can be multiple representations of some numbers, though I have never seen this in a working system.\n" ]
[ 3, 2, 2, 0, 0, 0 ]
[]
[]
[ "c++", "double", "floating_point" ]
stackoverflow_0000096233_c++_double_floating_point.txt
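For concreteness, the integer comparison under discussion, adapted from the Bruce Dawson article the first answer links to — a sketch for 32-bit float on two's-complement hardware; memcpy sidesteps the strict-aliasing problem of the article's original pointer cast, and NaN/infinity are not handled:

#include <cassert>
#include <cstring>

// Compare by the number of representable floats (ulps) between a and b --
// i.e. an "epsilon" that scales with the magnitude of the operands.
bool AlmostEqual2sComplement(float a, float b, int maxUlps)
{
    assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);
    int aInt, bInt;
    std::memcpy(&aInt, &a, sizeof a);   // avoids the article's aliasing cast
    std::memcpy(&bInt, &b, sizeof b);
    // Remap negative floats so integer ordering matches float ordering
    if (aInt < 0) aInt = (int)(0x80000000u - (unsigned)aInt);
    if (bInt < 0) bInt = (int)(0x80000000u - (unsigned)bInt);
    long long diff = (long long)aInt - (long long)bInt;  // widened to avoid overflow
    if (diff < 0) diff = -diff;
    return diff <= maxUlps;
}

This is exactly the "relative compare" argued for above: maxUlps = 4 tolerates four representable values of drift whether the operands sit near 1e23 or near 1e-23.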
Q: comparison of ways to maintain state There are various ways to maintain user state using in web development. These are the ones that I can think of right now: Query String Cookies Form Methods (Get and Post) Viewstate (ASP.NET only I guess) Session (InProc Web server) Session (Dedicated web server) Session (Database) Local Persistence (Google Gears) (thanks Steve Moyer) etc. I know that each method has its own advantages and disadvantages like cookies not being secure and QueryString having a length limit and being plain ugly to look at! ;) But, when designing a web application I am always confused as to what methods to use for what application or what methods to avoid. What I would like to know is what method(s) do you generally use and would recommend or more interestingly which of these methods would you like to avoid in certain scenarios and why? A: While this is a very complicated question to answer, I have a few quick-bite things I think about when considering implementing state. Query string state is only useful for the most basic tasks -- e.g., maintaining the position of a user within a wizard, perhaps, or providing a path to redirect the user to after they complete a given task (e.g., logging in). Otherwise, query string state is horribly insecure, difficult to implement, and in order to do it justice, it needs to be tied to some server-side state machine by containing a key to tie the client to the server's maintained state for that client. Cookie state is more or less the same -- it's just fancier than query string state. But it's still totally maintained on the client side unless the data in the cookie is a key to tie the client to some server-side state machine. Form method state is again similar -- it's useful for hiding fields that tie a given form to some bit of data on the back end (e.g., "this user is editing record #512, so the form will contain a hidden input with the value 512"). It's not useful for much else, and again, is just another implementation of the same idea behind query string and cookie state. Session state (any of the ways you describe) are all great, since they're infinitely extensible and can handle anything your chosen programming language can handle. The first caveat is that there needs to be a key in the client's hand to tie that client to its state being stored on the server; this is where most web frameworks provide either a cookie-based or query string-based key back to the client. (Almost every modern one uses cookies, but falls back on query strings if cookies aren't enabled.) The second caveat is that you need to put some though into how you're storing your state... will you put it in a database? Does your web framework handle it entirely for you? Again, most modern web frameworks take the work out of this, and for me to go about implementing my own state machine, I need a very good reason... otherwise, I'm likely to create security holes and functionality breakage that's been hashed out over time in any of the mature frameworks. So I guess I can't really imagine not wanting to use session-based state for anything but the most trivial reason. A: Security is also an issue; values in the query string or form fields can be trivially changed by the user. User authentication should be saved either in an encrypted or tamper-evident cookie or in the server-side session. Keeping track of values passed in a form as a user completes a process, like a site sign-up, well, that can probably be kept in hidden form fields. 
The nice (and sometimes dangerous) thing, though, about the query string is that the state can be picked up by anyone who clicks on a link. As mentioned above, this is dangerous if it gives the user some authorization they shouldn't have. It's nice, though, for showing your friends something you found on the site. A: With the increasing use of Web 2.0, I think there are two important methods missing from your list: 8 AJAX applications - since the page doesn't reload and there is no page to page navigation, state isn't an issue (but persisting user data must use the asynchronous XML calls). 9 Local persistence - Browser-based applications can persist their user data and state to the local hard drive using libraries such as Google Gears. As for which one is best, I think they all have their place, but the Query String method is problematic for search engines. A: Personally, since almost all of my web development is in PHP, I use PHP's session handlers. Sessions are the most flexible, in my experience: they're normally faster than db accesses, and the cookies they generate die when the browser closes (by default). A: Avoid InProc if you plan to host your website on a cheap-n-cheerful host like webhost4life. I've learnt the hard way that because their systems are over subscribed, they recycle the applications very frequently which causes your session to get lost. Very annoying. Their suggestion is to use StateServer which is fine except you have to serialise/deserialise the session eash post back. I love objects and my web app is full of them. I'm concerned about performance when switching to StateServer. I need to refactor to only put the stuff I really need in the session. Wish I'd know that before I started... Cheers, Rob. A: Be careful what state you store client side (query strings, form fields, cookies). Anything security-related should not be stored client-side, except maybe a session identifier if it is reasonably obscured and hard to guess. There are too many websites that have settings like "authenticated=true" and store those in a cookie or query string or hidden form field. It is trivial for a user to bypass something like that. Remember that ANY input coming from a client could have been tampered with and should not be trusted. A: Signed Cookies linked to some sort of database store when you need to grab data. There's no reason to be storing data on the client side if you have a connected back-end; you're just looking for trouble if this is a public facing website. A: It's not some much a question of what to use & what to avoid, but when to use which. Each has a particular circumstances when it is the best, and a different circumstance when it's the worst. The deciding factor is generally lifetime of the data. Session state lives longer than form fields, and so on.
comparison of ways to maintain state
There are various ways to maintain user state in web development. These are the ones that I can think of right now: Query String Cookies Form Methods (Get and Post) Viewstate (ASP.NET only I guess) Session (InProc Web server) Session (Dedicated web server) Session (Database) Local Persistence (Google Gears) (thanks Steve Moyer) etc. I know that each method has its own advantages and disadvantages like cookies not being secure and QueryString having a length limit and being plain ugly to look at! ;) But, when designing a web application I am always confused as to what methods to use for what application or what methods to avoid. What I would like to know is what method(s) do you generally use and would recommend or more interestingly which of these methods would you like to avoid in certain scenarios and why?
[ "While this is a very complicated question to answer, I have a few quick-bite things I think about when considering implementing state.\n\nQuery string state is only useful for the most basic tasks -- e.g., maintaining the position of a user within a wizard, perhaps, or providing a path to redirect the user to after they complete a given task (e.g., logging in). Otherwise, query string state is horribly insecure, difficult to implement, and in order to do it justice, it needs to be tied to some server-side state machine by containing a key to tie the client to the server's maintained state for that client.\nCookie state is more or less the same -- it's just fancier than query string state. But it's still totally maintained on the client side unless the data in the cookie is a key to tie the client to some server-side state machine.\nForm method state is again similar -- it's useful for hiding fields that tie a given form to some bit of data on the back end (e.g., \"this user is editing record #512, so the form will contain a hidden input with the value 512\"). It's not useful for much else, and again, is just another implementation of the same idea behind query string and cookie state.\nSession state (any of the ways you describe) are all great, since they're infinitely extensible and can handle anything your chosen programming language can handle. The first caveat is that there needs to be a key in the client's hand to tie that client to its state being stored on the server; this is where most web frameworks provide either a cookie-based or query string-based key back to the client. (Almost every modern one uses cookies, but falls back on query strings if cookies aren't enabled.) The second caveat is that you need to put some though into how you're storing your state... will you put it in a database? Does your web framework handle it entirely for you? Again, most modern web frameworks take the work out of this, and for me to go about implementing my own state machine, I need a very good reason... otherwise, I'm likely to create security holes and functionality breakage that's been hashed out over time in any of the mature frameworks.\n\nSo I guess I can't really imagine not wanting to use session-based state for anything but the most trivial reason.\n", "Security is also an issue; values in the query string or form fields can be trivially changed by the user. User authentication should be saved either in an encrypted or tamper-evident cookie or in the server-side session. Keeping track of values passed in a form as a user completes a process, like a site sign-up, well, that can probably be kept in hidden form fields.\nThe nice (and sometimes dangerous) thing, though, about the query string is that the state can be picked up by anyone who clicks on a link. As mentioned above, this is dangerous if it gives the user some authorization they shouldn't have. 
It's nice, though, for showing your friends something you found on the site.\n", "With the increasing use of Web 2.0, I think there are two important methods missing from your list:\n8 AJAX applications - since the page doesn't reload and there is no page to page navigation, state isn't an issue (but persisting user data must use the asynchronous XML calls).\n9 Local persistence - Browser-based applications can persist their user data and state to the local hard drive using libraries such as Google Gears.\nAs for which one is best, I think they all have their place, but the Query String method is problematic for search engines.\n", "Personally, since almost all of my web development is in PHP, I use PHP's session handlers.\nSessions are the most flexible, in my experience: they're normally faster than db accesses, and the cookies they generate die when the browser closes (by default).\n", "Avoid InProc if you plan to host your website on a cheap-n-cheerful host like webhost4life. I've learnt the hard way that because their systems are over subscribed, they recycle the applications very frequently which causes your session to get lost. Very annoying. \nTheir suggestion is to use StateServer which is fine except you have to serialise/deserialise the session eash post back. I love objects and my web app is full of them. I'm concerned about performance when switching to StateServer. I need to refactor to only put the stuff I really need in the session.\nWish I'd know that before I started...\nCheers, Rob.\n", "Be careful what state you store client side (query strings, form fields, cookies). Anything security-related should not be stored client-side, except maybe a session identifier if it is reasonably obscured and hard to guess. There are too many websites that have settings like \"authenticated=true\" and store those in a cookie or query string or hidden form field. It is trivial for a user to bypass something like that. Remember that ANY input coming from a client could have been tampered with and should not be trusted.\n", "Signed Cookies linked to some sort of database store when you need to grab data. There's no reason to be storing data on the client side if you have a connected back-end; you're just looking for trouble if this is a public facing website.\n", "It's not some much a question of what to use & what to avoid, but when to use which. Each has a particular circumstances when it is the best, and a different circumstance when it's the worst.\nThe deciding factor is generally lifetime of the data. Session state lives longer than form fields, and so on. \n" ]
[ 12, 3, 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "state" ]
stackoverflow_0000095655_state.txt
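To ground the signed-cookie suggestion, a sketch of a tamper-evident cookie value using an HMAC (C#; the key lives only on the server, the names are placeholders, and real code would also use a constant-time comparison):

using System;
using System.Security.Cryptography;
using System.Text;

static string Sign(string value, byte[] key)
{
    // Append an HMAC so the server can detect a tampered value.
    using (HMACSHA256 hmac = new HMACSHA256(key))
    {
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(value));
        return value + "|" + Convert.ToBase64String(mac);
    }
}

static bool Verify(string cookie, byte[] key)
{
    int sep = cookie.LastIndexOf('|');
    if (sep < 0) return false;
    string value = cookie.Substring(0, sep);
    return Sign(value, key) == cookie;
}

A value like "authenticated=true" still should not live in the cookie at all, as the answers warn — the HMAC only proves the server minted the value; it does not hide it.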
Q: Lightweight REST library for Java I'm looking for a light version of REST for a Java web application I'm developing. I've looked at RESTlet (www.restlet.org) and the REST plugin for Struts 2, but I haven't made up my mind. I'm leaning towards RESTlet, as it seems to be lighter. Has anyone implemented a RESTful layer without any of the the frameworks or with the frameworks? Any performance issues that you've seen because of the new web layer? Did the introduction of REST added unmanageable or unreasonable complexity to your project? (Some complexity is understandable, but what I mean is just plain overkilling your design just to add REST) A: I'm a huge fan of JAX-RS - I think they've done a great job with that specification. I use it on a number of projects and its been a joy to work with. JAX-RS lets you create REST resources using POJOs with simple annotations dealing with the URI mappings, HTTP methods and content negotiation all integrated nicely with dependency injection. There's no complex APIs to learn; just the core REST concepts (URIs, headers/response codes and content negotiation) are required. FWIW JAX-RS is quite Rails-ish from the controller point of view There are a number of JAX-RS implementations out there - see this thread for a discussion. My personal recommendation is to use Jersey as its got the biggest, most active community behind it, has the best features at the time of writing (WADL support, implicit views, spring integration, nice REST client API); though if you are using JBoss/SEAM you might find RESTeasy integrates a little better. A: I'm a big fan of Restlet, but I usually use it to implement apps whose primary role is to be a RESTful web service. It sounds like you're looking to add a RESTful API to an existing application. If that's the case, JAX-RS's (or Enunciate's) annotation-based approach might be a better fit for your project. As for Restlet, I can tell you that I've been very impressed with the developers and the community; they're very active, engaged, responsive, and committed to a stable, efficient, reliable, and effective framework. My single favorite aspect of the framework is that it is a ground-up implementation of the REST paradigm; therefore there is no impedance-mismatch between a Restlet app's external API and internal implementation. I also really like how flexible it is - it can run inside a Java application container/server such as JBoss, Tomcat, Jetty, etc, or standalone, with an embedded HTTP server library. A: Well, I've used Enunciate quite a bit. It uses simple annotations to provide either REST and/or SOAP endpoints. http://enunciate.codehaus.org Plus, Ryan Heaton has always provided top-notch support for things, too. A: You know there is a new JCP API for Accessing RESTful Services, also: JAX-RS JCP311 https://jsr311.dev.java.net/ The open source version is called Project Jersey A: I am working on a REST API for gliffy.com and we ended up rolling our own. We didn't want to have to bring in Struts 2, Spring, or any other framework. I looked at RESTLet and found it incredibly confusing and over complicated. Apache has an implementation of the JAX-RS spec, but it is not finalized and also has some oddities to it. We're tentatively planning to open source our solution, but that's not for a few months. Rolling your own is easy, though. 
The Servlet Specification gives you everything you need, and you can easily connect to a database via Hibernate (see http://www.naildrivin5.com/daveblog5000/?p=39 for how to set up JPA without using EJB3). A: I found restlet to be a really elegant architecture. I'm working in the .net world so it was not an option for me, but I was able to build my own framework following the same basic principles of restlet. I have found the conversion of our WCF contract-based SOA application to a REST-based one has significantly simplified the application,
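To make the annotation style the JAX-RS answer above describes concrete, here is a minimal sketch of a resource class; the class name, path, and XML representation are made up for illustration, and any JAX-RS 1.x implementation such as Jersey should be able to run something like it:

// Minimal JAX-RS resource sketch; names and paths are hypothetical.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/orders")
public class OrderResource {

    // Handles GET /orders/{id} and negotiates an XML response.
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public String getOrder(@PathParam("id") long id) {
        // A real resource would look the order up in its data store.
        return "<order id=\"" + id + "\"/>";
    }
}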
Lightweight REST library for Java
I'm looking for a light version of REST for a Java web application I'm developing. I've looked at RESTlet (www.restlet.org) and the REST plugin for Struts 2, but I haven't made up my mind. I'm leaning towards RESTlet, as it seems to be lighter. Has anyone implemented a RESTful layer without any of the frameworks or with the frameworks? Any performance issues that you've seen because of the new web layer? Did the introduction of REST add unmanageable or unreasonable complexity to your project? (Some complexity is understandable, but what I mean is just plain overkilling your design just to add REST)
[ "I'm a huge fan of JAX-RS - I think they've done a great job with that specification. I use it on a number of projects and it's been a joy to work with. \nJAX-RS lets you create REST resources using POJOs with simple annotations dealing with the URI mappings, HTTP methods and content negotiation all integrated nicely with dependency injection. There's no complex APIs to learn; just the core REST concepts (URIs, headers/response codes and content negotiation) are required. FWIW JAX-RS is quite Rails-ish from the controller point of view\nThere are a number of JAX-RS implementations out there - see this thread for a discussion.\nMy personal recommendation is to use Jersey as it's got the biggest, most active community behind it, has the best features at the time of writing (WADL support, implicit views, spring integration, nice REST client API); though if you are using JBoss/SEAM you might find RESTeasy integrates a little better.\n", "I'm a big fan of Restlet, but I usually use it to implement apps whose primary role is to be a RESTful web service. It sounds like you're looking to add a RESTful API to an existing application. If that's the case, JAX-RS's (or Enunciate's) annotation-based approach might be a better fit for your project.\nAs for Restlet, I can tell you that I've been very impressed with the developers and the community; they're very active, engaged, responsive, and committed to a stable, efficient, reliable, and effective framework. My single favorite aspect of the framework is that it is a ground-up implementation of the REST paradigm; therefore there is no impedance-mismatch between a Restlet app's external API and internal implementation. I also really like how flexible it is - it can run inside a Java application container/server such as JBoss, Tomcat, Jetty, etc, or standalone, with an embedded HTTP server library.\n", "Well, I've used Enunciate quite a bit. It uses simple annotations to provide either REST and/or SOAP endpoints.\nhttp://enunciate.codehaus.org\nPlus, Ryan Heaton has always provided top-notch support for things, too.\n", "You know there is a new JCP API for Accessing RESTful Services, also:\nJAX-RS JCP311 \nhttps://jsr311.dev.java.net/\nThe open source version is called Project Jersey\n", "I am working on a REST API for gliffy.com and we ended up rolling our own. We didn't want to have to bring in Struts 2, Spring, or any other framework. I looked at RESTLet and found it incredibly confusing and over complicated.\nApache has an implementation of the JAX-RS spec, but it is not finalized and also has some oddities to it. We're tentatively planning to open source our solution, but that's not for a few months. \nRolling your own is easy, though. The Servlet Specification gives you everything you need, and you can easily connect to a database via Hibernate (see http://www.naildrivin5.com/daveblog5000/?p=39 for how to set up JPA without using EJB3).\n", "I found restlet to be a really elegant architecture. I'm working in the .net world so it was not an option for me, but I was able to build my own framework following the same basic principles of restlet.\nI have found the conversion of our WCF contract-based SOA application to a REST-based one has significantly simplified the application,\n" ]
[ 19, 8, 3, 3, 1, 1 ]
[]
[]
[ "java", "rest" ]
stackoverflow_0000066288_java_rest.txt
Q: How do I intercept a paste event in an editbox? How do I intercept a paste event in an editbox, possibly before the value is transferred to the object? A: Look up subclassing windows. A: If you subclass then intercept the WM_PASTE message you can do what you want, throw the message away to prevent the paste, manipulate the clipboard data, whatever. A: Subclass the edit box and handle the WM_PASTE message.
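A minimal Win32 sketch of the subclassing approach given in the answers above; the variable names are hypothetical, and the install line assumes you already have the edit control's handle:

/* Sketch: subclass an edit control and intercept WM_PASTE. */
#include <windows.h>

static WNDPROC g_origEditProc;  /* the edit control's original window procedure */

LRESULT CALLBACK EditSubclassProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_PASTE) {
        /* Inspect or rewrite the clipboard contents here;
           return 0 instead of falling through to swallow the paste. */
    }
    return CallWindowProc(g_origEditProc, hwnd, msg, wp, lp);
}

/* Install the subclass (hEdit is the edit control's window handle): */
/* g_origEditProc = (WNDPROC)SetWindowLongPtr(hEdit, GWLP_WNDPROC, (LONG_PTR)EditSubclassProc); */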
How do I intercept a paste event in an editbox?
How do I intercept a paste event in an editbox, possibly before the value is transferred to the object?
[ "Look up subclassing windows.\n", "If you subclass then intercept the WM_PASTE message you can do what you want, throw the message away to prevent the paste, manipulate the clipboard data, whatever. \n", "Subclass the edit box and handle the WM_PASTE message.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "c", "edit_control", "winapi" ]
stackoverflow_0000096271_c_edit_control_winapi.txt
Q: Suggestions for Adding Plugin Capability? Is there a general procedure for programming extensibility capability into your code? I am wondering what the general procedure is for adding extension-type capability to a system you are writing so that functionality can be extended through some kind of plugin API rather than having to modify the core code of a system. Do such things tend to be dependent on the language the system was written in, or is there a general method for allowing for this? A: I've used event-based APIs for plugins in the past. You can insert hooks for plugins by dispatching events and providing access to the application state. For example, if you were writing a blogging application, you might want to raise an event just before a new post is saved to the database, and provide the post HTML to the plugin to alter as needed. A: This is generally something that you'll have to expose yourself, so yes, it will be dependent on the language your system is written in (though often it's possible to write wrappers for other languages as well). If, for example, you had a program written in C, for Windows, plugins would be written for your program as DLLs. At runtime, you would manually load these DLLs, and expose some interface to them. For example, the DLLs might expose a gimme_the_interface() function which could accept a structure filled with function pointers. These function pointers would allow the DLL to make calls, register callbacks, etc. If you were in C++, you would use the DLL system, except you would probably pass an object pointer instead of a struct, and the object would implement an interface which provided functionality (accomplishing the same thing as the struct, but less ugly). For Java, you would load class files on-demand instead of DLLs, but the basic idea would be the same. In all cases, you'll need to define a standard interface between your code and the plugins, so that you can initialize the plugins, and so the plugins can interact with you. P.S. If you'd like to see a good example of a C++ plugin system, check out the foobar2000 SDK. I haven't used it in quite a while, but it used to be really well done. I assume it still is. A: I'm tempted to point you to the Design Patterns book for this generic question :p Seriously, I think the answer is no. You can't write extensible code by default, it will be both hard to write/extend and awfully inefficient (Mozilla started with the idea of being very extensible, used XPCOM everywhere, and now they realized it was a mistake and started to remove it where it doesn't make sense). what makes sense to do is to identify the pieces of your system that can be meaningfully extended and support a proper API for these cases (e.g. language support plug-ins in an editor). You'd use the relevant patterns, but the specific implementation depends on your platform/language choice. IMO, it also helps to use a dynamic language - makes it possible to tweak the core code at run time (when absolutely necessary). I appreciated that Mozilla's extensibility works that way when writing Firefox extensions. A: I think there are two aspects to your question: The design of the system to be extendable (the design patterns, inversion of control and other architectural aspects) (http://www.martinfowler.com/articles/injection.html). And, at least to me, yes these patterns/techniques are platform/language independent and can be seen as a "general procedure". 
Now, their implementation is language and platform dependent (for example in C/C++ you have the dynamic library stuff, etc.) Several 'frameworks' have been developed to give you a programming environment that provides you pluggability/extensibility but as some other people mention, don't get too crazy making everything pluggable. In the Java world a good specification to look at is OSGi (http://en.wikipedia.org/wiki/OSGi) with several implementations the best one IMHO being Equinox (http://www.eclipse.org/equinox/) A: Find out what minimum requirements you want to put on a plugin writer. Then make one or more Interfaces that the writer must implement for your code to know when and where to execute the code. Make an API the writer can use to access some of the functionality in your code. You could also make a base class the writer must inherit. This will make wiring up the API easier. Then use some kind of reflection to scan a directory, and load the classes you find that match your requirements. Some people also make a scripting language for their system, or implement an interpreter for a subset of an existing language. This is also a possible route to go. Bottom line is: When you get the code to load, only your imagination should be able to stop you. Good luck. A: If you are using a compiled language such as C or C++, it may be a good idea to look at plugin support via scripting languages. Both Python and Lua are excellent languages that are used to script a large number of applications (Civ4 and blender use Python, Supreme Commander uses Lua, etc). If you are using C++, check out the boost python library. Otherwise, python ships with headers that can be used in C, and does a fairly good job documenting the C/python API. The documentation seemed less complete for Lua, but I may not have been looking hard enough. Either way, you can offer a fairly solid scripting platform without a terrible amount of work. It still isn't trivial, but it provides you with a very good base to work from.
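As a rough illustration of the DLL technique from the second answer, the host can hand each plugin a struct of function pointers; every name below, including gimme_the_interface and the struct members, is hypothetical:

/* Sketch of a C plugin host on Windows; all names are made up. */
#include <windows.h>
#include <stdio.h>

typedef struct {
    int  (*app_version)(void);
    void (*log_message)(const char *msg);
} HostInterface;

typedef void (*GimmeFn)(HostInterface *host);

void load_plugin(const char *path, HostInterface *host)
{
    HMODULE dll = LoadLibrary(path);
    if (dll == NULL) {
        fprintf(stderr, "could not load %s\n", path);
        return;
    }
    GimmeFn gimme = (GimmeFn)GetProcAddress(dll, "gimme_the_interface");
    if (gimme != NULL)
        gimme(host);  /* the plugin keeps the pointers and calls back later */
}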
Suggestions for Adding Plugin Capability?
Is there a general procedure for programming extensibility capability into your code? I am wondering what the general procedure is for adding extension-type capability to a system you are writing so that functionality can be extended through some kind of plugin API rather than having to modify the core code of a system. Do such things tend to be dependent on the language the system was written in, or is there a general method for allowing for this?
[ "I've used event-based APIs for plugins in the past. You can insert hooks for plugins by dispatching events and providing access to the application state.\nFor example, if you were writing a blogging application, you might want to raise an event just before a new post is saved to the database, and provide the post HTML to the plugin to alter as needed.\n", "This is generally something that you'll have to expose yourself, so yes, it will be dependent on the language your system is written in (though often it's possible to write wrappers for other languages as well).\nIf, for example, you had a program written in C, for Windows, plugins would be written for your program as DLLs. At runtime, you would manually load these DLLs, and expose some interface to them. For example, the DLLs might expose a gimme_the_interface() function which could accept a structure filled with function pointers. These function pointers would allow the DLL to make calls, register callbacks, etc.\nIf you were in C++, you would use the DLL system, except you would probably pass an object pointer instead of a struct, and the object would implement an interface which provided functionality (accomplishing the same thing as the struct, but less ugly). For Java, you would load class files on-demand instead of DLLs, but the basic idea would be the same.\nIn all cases, you'll need to define a standard interface between your code and the plugins, so that you can initialize the plugins, and so the plugins can interact with you.\nP.S. If you'd like to see a good example of a C++ plugin system, check out the foobar2000 SDK. I haven't used it in quite a while, but it used to be really well done. I assume it still is.\n", "I'm tempted to point you to the Design Patterns book for this generic question :p\nSeriously, I think the answer is no. You can't write extensible code by default, it will be both hard to write/extend and awfully inefficient (Mozilla started with the idea of being very extensible, used XPCOM everywhere, and now they realized it was a mistake and started to remove it where it doesn't make sense).\nwhat makes sense to do is to identify the pieces of your system that can be meaningfully extended and support a proper API for these cases (e.g. language support plug-ins in an editor). You'd use the relevant patterns, but the specific implementation depends on your platform/language choice.\nIMO, it also helps to use a dynamic language - makes it possible to tweak the core code at run time (when absolutely necessary). I appreciated that Mozilla's extensibility works that way when writing Firefox extensions.\n", "I think there are two aspects to your question: \nThe design of the system to be extendable (the design patterns, inversion of control and other architectural aspects) (http://www.martinfowler.com/articles/injection.html). And, at least to me, yes these patterns/techniques are platform/language independent and can be seen as a \"general procedure\".\nNow, their implementation is language and platform dependent (for example in C/C++ you have the dynamic library stuff, etc.) \nSeveral 'frameworks' have been developed to give you a programming environment that provides you pluggability/extensibility but as some other people mention, don't get too crazy making everything pluggable. 
\nIn the Java world a good specification to look at is OSGi (http://en.wikipedia.org/wiki/OSGi) with several implementations the best one IMHO being Equinox (http://www.eclipse.org/equinox/)\n", "\nFind out what minimum requirements you want to put on a plugin writer. Then make one or more Interfaces that the writer must implement for your code to know when and where to execute the code. \nMake an API the writer can use to access some of the functionality in your code. \n\nYou could also make a base class the writer must inherit. This will make wiring up the API easier. Then use some kind of reflection to scan a directory, and load the classes you find that match your requirements. \nSome people also make a scripting language for their system, or implement an interpreter for a subset of an existing language. This is also a possible route to go.\nBottom line is: When you get the code to load, only your imagination should be able to stop you.\nGood luck.\n", "If you are using a compiled language such as C or C++, it may be a good idea to look at plugin support via scripting languages. Both Python and Lua are excellent languages that are used to script a large number of applications (Civ4 and blender use Python, Supreme Commander uses Lua, etc). \nIf you are using C++, check out the boost python library. Otherwise, python ships with headers that can be used in C, and does a fairly good job documenting the C/python API. The documentation seemed less complete for Lua, but I may not have been looking hard enough. Either way, you can offer a fairly solid scripting platform without a terrible amount of work. It still isn't trivial, but it provides you with a very good base to work from.\n" ]
[ 4, 3, 2, 2, 1, 1 ]
[]
[]
[ "extensibility", "plugins" ]
stackoverflow_0000008140_extensibility_plugins.txt
Q: Is there any way to pass a structure type to a c function I have some code with multiple functions very similar to each other to look up an item in a list based on the contents of one field in a structure. The only difference between the functions is the type of the structure that the look up is occurring in. If I could pass in the type, I could remove all the code duplication. I also noticed that there is some mutex locking happening in these functions as well, so I think I might leave them alone... A: If you ensure that the field is placed in the same place in each such structure, you can simply cast a pointer to get at the field. This technique is used in lots of low level system libraries e.g. BSD sockets.

struct person {
    int index;
};

struct clown {
    int index;
    char *hat;
};

/* we're not going to define a firetruck here */
struct firetruck;

struct fireman {
    int index;
    struct firetruck *truck;
};

int getindexof(struct person *who)
{
    return who->index;
}

int main(int argc, char *argv[])
{
    struct fireman sam;
    /* somehow sam gets initialised */
    sam.index = 5;

    int index = getindexof((struct person *) &sam);
    printf("Sam's index is %d\n", index);

    return 0;
}

You lose type safety by doing this, but it's a valuable technique. [ I have now actually tested the above code and fixed the various minor errors. It's much easier when you have a compiler. ] A: Since structures are nothing more than predefined blocks of memory, you can do this. You could pass a void * to the structure, and an integer or something to define the type. From there, the safest thing to do would be to recast the void * into a pointer of the appropriate type before accessing the data. You'll need to be very, very careful, as you lose type-safety when you cast to a void * and you can likely end up with a difficult to debug runtime error when doing something like this. A: I think you should look at the C standard functions qsort() and bsearch() for inspiration. These are general purpose code to sort arrays and to search for data in a pre-sorted array. They work on any type of data structure - but you pass them a pointer to a helper function that does the comparisons. The helper function knows the details of the structure, and therefore does the comparison correctly. In fact, since you are wanting to do searches, it may be that all you need is bsearch(), though if you are building the data structures on the fly, you may decide you need a different structure than a sorted list. (You can use sorted lists -- it just tends to slow things down compared with, say, a heap. However, you'd need a general heap_search() function, and a heap_insert() function, to do the job properly, and such functions are not standardized in C. Searching the web shows such functions exist - not by that name; just do not try "c heap search" since it is assumed you meant "cheap search" and you get tons of junk!) A: If the ID field you test is part of a common initial sequence of fields shared by all the structs, then using a union guarantees that the access will work:

#include <stdio.h>

typedef struct
{
    int id;
    int junk1;
} Foo;

typedef struct
{
    int id;
    long junk2;
} Bar;

typedef union
{
    struct
    {
        int id;
    } common;

    Foo foo;
    Bar bar;
} U;

int matches(const U *candidate, int wanted)
{
    return candidate->common.id == wanted;
}

int main(void)
{
    Foo f = { 23, 0 };
    Bar b = { 42, 0 };

    U fu;
    U bu;

    fu.foo = f;
    bu.bar = b;

    puts(matches(&fu, 23) ? "true" : "false");
    puts(matches(&bu, 42) ? "true" : "false");

    return 0;
}

If you're unlucky, and the field appears at different offsets in the various structs, you can add an offset parameter to your function. Then, offsetof and a wrapper macro simulate what the OP asked for - passing the type of struct at the call site:

#include <stddef.h>
#include <stdio.h>

typedef struct
{
    int id;
    int junk1;
} Foo;

typedef struct
{
    int junk2;
    int id;
} Bar;

int matches(const void* candidate, size_t idOffset, int wanted)
{
    return *(int*)((const unsigned char*)candidate + idOffset) == wanted;
}

#define MATCHES(type, candidate, wanted) matches(candidate, offsetof(type, id), wanted)

int main(void)
{
    Foo f = { 23, 0 };
    Bar b = { 0, 42 };
    puts(MATCHES(Foo, &f, 23) ? "true" : "false");
    puts(MATCHES(Bar, &b, 42) ? "true" : "false");

    return 0;
}

A: One way to do this is to have a type field as the first byte of the structure. Your receiving function looks at this byte and then casts the pointer to the correct type based on what it discovers. Another approach is to pass the type information as a separate parameter to each function that needs it. A: You can do this with a parameterized macro but most coding policies will frown on that.

#include <stdio.h>
#define getfield(s, name) ((s).name)

typedef struct{
    int x;
}Bob;

typedef struct{
    int y;
}Fred;

int main(int argc, char**argv){
    Bob b;
    b.x=6;

    Fred f;
    f.y=7;

    printf("%d, %d\n", getfield(b, x), getfield(f, y));
}

A: Short answer: no. You can, however, create your own method for doing so, i.e. providing a specification for how to create such a struct. However, it's generally not necessary and is not worth the effort; just pass by reference. (callFuncWithInputThenOutput(input, &struct.output);)
Is there any way to pass a structure type to a c function
I have some code with multiple functions very similar to each other to look up an item in a list based on the contents of one field in a structure. The only difference between the functions is the type of the structure that the look up is occurring in. If I could pass in the type, I could remove all the code duplication. I also noticed that there is some mutex locking happening in these functions as well, so I think I might leave them alone...
[ "If you ensure that the field is placed in the same place in each such structure, you can simply cast a pointer to get at the field. This technique is used in lots of low level system libraries e.g. BSD sockets.\nstruct person {\n int index;\n};\n\nstruct clown {\n int index;\n char *hat;\n};\n\n/* we're not going to define a firetruck here */\nstruct firetruck;\n\n\nstruct fireman {\n int index;\n struct firetruck *truck;\n};\n\nint getindexof(struct person *who)\n{\n return who->index;\n}\n\nint main(int argc, char *argv[])\n{\n struct fireman sam;\n /* somehow sam gets initialised */\n sam.index = 5;\n\n int index = getindexof((struct person *) &sam);\n printf(\"Sam's index is %d\\n\", index);\n\n return 0;\n}\n\nYou lose type safety by doing this, but it's a valuable technique.\n[ I have now actually tested the above code and fixed the various minor errors. It's much easier when you have a compiler. ]\n", "Since structures are nothing more than predefined blocks of memory, you can do this. You could pass a void * to the structure, and an integer or something to define the type.\nFrom there, the safest thing to do would be to recast the void * into a pointer of the appropriate type before accessing the data.\nYou'll need to be very, very careful, as you lose type-safety when you cast to a void * and you can likely end up with a difficult to debug runtime error when doing something like this.\n", "I think you should look at the C standard functions qsort() and bsearch() for inspiration. These are general purpose code to sort arrays and to search for data in a pre-sorted array. They work on any type of data structure - but you pass them a pointer to a helper function that does the comparisons. The helper function knows the details of the structure, and therefore does the comparison correctly.\nIn fact, since you are wanting to do searches, it may be that all you need is bsearch(), though if you are building the data structures on the fly, you may decide you need a different structure than a sorted list. (You can use sorted lists -- it just tends to slow things down compared with, say, a heap. However, you'd need a general heap_search() function, and a heap_insert() function, to do the job properly, and such functions are not standardized in C. Searching the web shows such functions exist - not by that name; just do not try \"c heap search\" since it is assumed you meant \"cheap search\" and you get tons of junk!)\n", "If the ID field you test is part of a common initial sequence of fields shared by all the structs, then using a union guarantees that the access will work:\n#include <stdio.h>\n\ntypedef struct\n{\n int id;\n int junk1;\n} Foo;\n\ntypedef struct\n{\n int id;\n long junk2;\n} Bar;\n\ntypedef union\n{\n struct\n {\n int id;\n } common;\n\n Foo foo;\n Bar bar;\n} U;\n\nint matches(const U *candidate, int wanted)\n{\n return candidate->common.id == wanted;\n}\n\nint main(void)\n{\n Foo f = { 23, 0 };\n Bar b = { 42, 0 };\n\n U fu;\n U bu;\n\n fu.foo = f;\n bu.bar = b;\n\n puts(matches(&fu, 23) ? \"true\" : \"false\");\n puts(matches(&bu, 42) ? \"true\" : \"false\");\n\n return 0;\n}\n\nIf you're unlucky, and the field appears at different offsets in the various structs, you can add an offset parameter to your function. 
Then, offsetof and a wrapper macro simulate what the OP asked for - passing the type of struct at the call site:\n#include <stddef.h>\n#include <stdio.h>\n\ntypedef struct\n{\n int id;\n int junk1;\n} Foo;\n\ntypedef struct\n{\n int junk2;\n int id;\n} Bar;\n\nint matches(const void* candidate, size_t idOffset, int wanted)\n{\n return *(int*)((const unsigned char*)candidate + idOffset) == wanted;\n}\n\n#define MATCHES(type, candidate, wanted) matches(candidate, offsetof(type, id), wanted)\n\nint main(void)\n{\n Foo f = { 23, 0 };\n Bar b = { 0, 42 };\n puts(MATCHES(Foo, &f, 23) ? \"true\" : \"false\");\n puts(MATCHES(Bar, &b, 42) ? \"true\" : \"false\");\n\n return 0;\n}\n\n", "One way to do this is to have a type field as the first byte of the structure. Your receiving function looks at this byte and then casts the pointer to the correct type based on what it discovers. Another approach is to pass the type information as a separate parameter to each function that needs it. \n", "You can do this with a parameterized macro but most coding policies will frown on that.\n\n#include <stdio.h>\n#define getfield(s, name) ((s).name)\n\ntypedef struct{\n int x;\n}Bob;\n\ntypedef struct{\n int y;\n}Fred;\n\nint main(int argc, char**argv){\n Bob b;\n b.x=6;\n\n Fred f;\n f.y=7;\n\n printf(\"%d, %d\\n\", getfield(b, x), getfield(f, y));\n}\n\n", "Short answer: no. You can, however, create your own method for doing so, i.e. providing a specification for how to create such a struct. However, it's generally not necessary and is not worth the effort; just pass by reference. (callFuncWithInputThenOutput(input, &struct.output);)\n" ]
[ 7, 3, 1, 1, 0, 0, 0 ]
[ "I'm a little rusty on c, but try using a void* pointer as the variable type in the function parameter. Then pass the address of the structure to the function, and then use it he way that you would.\nvoid foo(void* obj);\n\nvoid main()\n{\n struct bla obj;\n ...\n foo(&obj);\n ...\n}\n\nvoid foo(void* obj)\n{\n printf(obj -> x, \"%s\")\n}\n\n" ]
[ -1 ]
[ "c" ]
stackoverflow_0000067790_c.txt
Q: #line and jump to line Do any editors honor C #line directives with regards to goto line features? Context: I'm working on a code generator and need to jump to a line of the output but the line is specified relative to the #line directives I'm adding. I can drop them but then finding the input line is even a worse pain A: If the editor is scriptable it should be possible to write a script to do the navigation. There might even be a Vim or Emacs script that already does something similar. FWIW when I was writing a lot of Bison/Flexx I wrote a Zeus Lua macro script that attempted to do something similar (i.e. move from input file to the corresponding line of the output file by searching for the #line marker). For anyone that might be interested here is that particular macro script. A: #line directives are normally inserted by the precompiler, not into source code, so editors won't usually honor that if the file extension is .c. However, the normal file extension for post-compiled files is .i or .gch, so you might try using that and see what happens. A: I've used the following in a header file occasionally to produce clickable items in the VC6 and recent VS(2003+) compiler output window. Basically, this exploits the fact that items output in the compiler output are essentially being parsed for "PATH(LINENUM): message". This presumes on the Microsoft compiler's treatment of "pragma remind". This isn't quite exactly what you asked... but it might be generally helpful in arriving at something you can get the compiler to emit that some editors might honor.

// The following definitions will allow you to insert
// clickable items in the output stream of the Microsoft compiler.
// The error and warning variants will be reported by the
// IDE as actual warnings and errors... which means you can make
// them occur in the task list.

// In theory, the coding standards could be checked to some extent
// in this way and reminders that show up as warnings or even
// errors inserted...

#define strify0(X) #X
#define strify(X) strify0(X)
#define remind(S) message(__FILE__ "(" strify( __LINE__ ) ") : " S)

// example usage
#pragma remind("warning: fake warning")
#pragma remind("error: fake error")

I haven't tried it in a while but it should still work. A: Use sed or a similar tool to translate the #lines to something else not interpreted by the compiler, so you get C error messages on the real line, but have a reference to the original input file nearby.
#line and jump to line
Do any editors honor C #line directives with regards to goto line features? Context: I'm working on a code generator and need to jump to a line of the output but the line is specified relative to the #line directives I'm adding. I can drop them but then finding the input line is even a worse pain
[ "If the editor is scriptable it should be possible to write a script to do the navigation. There might even be a Vim or Emacs script that already does something similar.\nFWIW when I was writing a lot of Bison/Flexx I wrote a Zeus Lua macro script that attempted to do something similar (i.e. move from input file to the corresponding line of the output file by searching for the #line marker).\nFor anyone that might be interested here is that particular macro script.\n", "#line directives are normally inserted by the precompiler, not into source code, so editors won't usually honor that if the file extension is .c.\nHowever, the normal file extension for post-compiled files is .i or .gch, so you might try using that and see what happens.\n", "I've used the following in a header file occasionally to produce clickable items in\nthe VC6 and recent VS(2003+) compiler output window.\nBasically, this exploits the fact that items output in the compiler output\nare essentially being parsed for \"PATH(LINENUM): message\".\nThis presumes on the Microsoft compiler's treatment of \"pragma remind\".\nThis isn't quite exactly what you asked... but it might be generally helpful\nin arriving at something you can get the compiler to emit that some editors might honor.\n\n // The following definitions will allow you to insert\n // clickable items in the output stream of the Microsoft compiler.\n // The error and warning variants will be reported by the\n // IDE as actual warnings and errors... which means you can make\n // them occur in the task list.\n\n // In theory, the coding standards could be checked to some extent\n // in this way and reminders that show up as warnings or even\n // errors inserted...\n\n\n #define strify0(X) #X\n #define strify(X) strify0(X)\n #define remind(S) message(__FILE__ \"(\" strify( __LINE__ ) \") : \" S)\n\n // example usage\n #pragma remind(\"warning: fake warning\")\n #pragma remind(\"error: fake error\")\n\nI haven't tried it in a while but it should still work.\n", "Use sed or a similar tool to translate the #lines to something else not interpreted by the compiler, so you get C error messages on the real line, but have a reference to the original input file nearby.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "preprocessor", "text_editor" ]
stackoverflow_0000050819_preprocessor_text_editor.txt
Q: Batch file to copy files from one directory to another I have two code bases of an application. I need to copy all the files in all the directories with .java from the newer code base, to the older (so I can commit it to svn). How can I write a batch file to do this? A: XCOPY /D ?

xcopy c:\olddir\*.java c:\newdir /D /E /Q /Y

A: If you've lots of different instances of this problem to solve, I've had some success with Apache Ant for this kind of copy/update/backup kind of thing. There is a bit of a learning curve, though, and it does require you to have a Java runtime environment installed. A: I like Robocopy ("Robust File Copy"). It is a command-line directory replication command. It was available as part of the Windows Resource Kit, and is introduced as a standard feature of Windows Vista and Windows Server 2008.
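For reference, a Robocopy equivalent of the xcopy line above would look something like this (the paths are placeholders, and /S recurses into non-empty subdirectories):

robocopy c:\olddir c:\newdir *.java /S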
Batch file to copy files from one directory to another
I have two code bases of an application. I need to copy all the files in all the directories with .java from the newer code base, to the older (so I can commit it to svn). How can I write a batch file to do this?
[ "XCOPY /D ?\nxcopy c:\\olddir\\*.java c:\\newdir /D /E /Q /Y\n\n", "If you've lots of different instances of this problem to solve, I've had some success with Apache Ant for this kind of copy/update/backup kind of thing.\nThere is a bit of a learning curve, though, and it does require you to have a Java runtime environment installed. \n", "I like Robocopy (\"Robust File Copy\"). It is a command-line directory replication command. It was available as part of the Windows Resource Kit, and is introduced as a standard feature of Windows Vista and Windows Server 2008.\n" ]
[ 6, 1, 1 ]
[]
[]
[ "batch_file", "windows" ]
stackoverflow_0000096264_batch_file_windows.txt
Q: Automatically verify my website's links are pointing to urls that exist? Is there a tool to automatically search through my site and test all the links? I hate running across bad urls. A: Xenu link sleuth is excellent (and free) A: w3.org checklink A: If I were you, I'd check out the W3C Link Checker. A: Something like this should work: http://www.dead-links.com/ Do google searches for "404 checker" or "broken link checker" A: I used Xenu's Link Sleuth in the past. It will crawl your site and tell you which links point to nowhere. It is not super fancy but it works. http://en.wikipedia.org/wiki/Xenu%27s_Link_Sleuth The Wikipedia page lists a whole bunch of other products. A: WebHTTrack Can take a long time to go through a large web site (I archived a 250MB website and it took approximately 2 hours - it wasn't local though) It has a log so you should be able to track 404s easily. A: Also check out Google's webmaster tools. http://www.google.com/webmasters/tools/ They give you the ability to see the 404's that GoogleBot discovers when crawling your website (along with lots and lots of other stuff).
Automatically verify my website's links are pointing to urls that exist?
Is there a tool to automatically search through my site and test all the links? I hate running across bad urls.
[ "Xenu link sleuth is excellent (and free)\n", "w3.org checklink\n", "If I were you, I'd check out the W3C Link Checker.\n", "Something like this should work: http://www.dead-links.com/\nDo google searches for \"404 checker\" or \"broken link checker\"\n", "I used Xenu's Link Sleuth in the past. It will crawl your site and tell you which links point to nowhere. It is not super fancy but it works.\nhttp://en.wikipedia.org/wiki/Xenu%27s_Link_Sleuth\nThe Wikipedia page lists a whole bunch of other products.\n", "WebHTTrack\nCan take a long time to go through a large web site (I archived a 250MB website and it took approximately 2 hours - it wasn't local though) It has a log so you should be able to track 404s easily.\n", "Also check out Google's webmaster tools.\nhttp://www.google.com/webmasters/tools/\nThey give you the ability to see the 404's that GoogleBot discovers when crawling your website (along with lots and lots of other stuff).\n" ]
[ 5, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "testing" ]
stackoverflow_0000096349_testing.txt
Q: RSS Statistics/Traffic Metrics I want to track how much traffic I'm getting on an RSS feed that is set up using .Net 2.0 & SQL Server. Is there an industry standard on what metrics I should use, for example, page hits? A: Feedburner analysis gives you statistics like: (source: blogperfume.com) (source: blogperfume.com) (source: blogperfume.com) A: I agree, FeedBurner is probably your best option, almost all large sites use it. A: Instead of tracking it yourself, I would encourage you to use a service like FeedBurner. Even if you don't want to use a 3rd party service it will give you an idea of what other people track for RSS feeds.
RSS Statistics/Traffic Metrics
I want to track how much traffic I'm getting on an RSS feed that is set up using .Net 2.0 & SQL Server. Is there an industry standard on what metrics I should use, for example, page hits?
[ "Feedburner analysis gives you statistics like:\n\n(source: blogperfume.com) \n\n(source: blogperfume.com)\n\n(source: blogperfume.com) \n", "I agree, FeedBurner is probably your best option, almost all large sites use it.\n", "Instead of tracking it yourself, I would encourage you to use a service like FeedBurner. Even if you don't want to use a 3rd party service it will give you an idea of what other people track for RSS feeds.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "metrics", "rss" ]
stackoverflow_0000095346_metrics_rss.txt
Q: Am I turning away customers by disabling SSL 2.0 and PCT 1.0 in IIS5? Do I risk losing sales by disabling SSL 2.0 and PCT 1.0 in IIS5? Clarification: Sales would be lost by client not being able to connect via SSL to complete ecommerce transaction because SSL 2.0 or PCT 1.0 is disabled on the web server. Microsoft kbase article: http://support.microsoft.com/kb/187498 A: Modern browsers either don't appear to support SSLv2 at all (Google Chrome, Opera 9.52, Firefox) or have it disabled by default (IE7, IE8). That said, are you concerned about losing business from people using much-less-than-modern web browsers? Possibly more importantly, are you concerned about your customers' security? Even if they can only connect using SSLv2, do you want them performing secure transactions with you using a protocol that is known to be insecure (see Google)? As a computer professional, I would not hesitate to recommend to management that SSLv2 be disabled. I would leave it up to the bean counters to determine whether they think the additional income is worth the potential liability. A: No. The number of users with support for SSLv2 at all, much less SSLv2 only, is negligible. It has been obsolete since 1996, and is disabled or not even included in all modern browsers of significance. A: Only you can really answer that question. Your customers' experience of your site will be mediated by their browser. The first place to look for browser information is at a listing of the user-agents that are being used to access your website. Hopefully you have a good log analyzer such as Analog, Weblog, Google Analytics, WebTrends, etc. This is the first place to look and should give you a good idea of the SSL level that your general community supports. You may also want to alter your application to check for the SSL level supported by your users' browsers that get to the "complete ecommerce transaction" part of your website. This is the best method to determine if you are turning away customers. Remember that the SSL level is auto negotiated between the server and the client (best encryption used first) so you don't necessarily need to disable older versions, but you could pop up a message to the user encouraging them to upgrade. A: Presumably you use SSL to protect users from man-in-the-middle or other attacks, yes? SSLv2 is useless for this. Disable it -- the number of users who use a browser without SSLv3 or TLS support is vanishingly small, and it's easier to make them somebody else's problem than explain why somebody in Nigeria is using their credit card.
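For what it's worth, the KB article referenced above boils down to SCHANNEL registry values like the following (a .reg sketch; back up the keys first, and a reboot is needed for the change to take effect):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server]
"Enabled"=dword:00000000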
Am I turning away customers by disabling SSL 2.0 and PCT 1.0 in IIS5?
Do I risk losing sales by disabling SSL 2.0 and PCT 1.0 in IIS5? Clarification: Sales would be lost by client not being able to connect via SSL to complete ecommerce transaction because SSL 2.0 or PCT 1.0 is disabled on the web server. Microsoft kbase article: http://support.microsoft.com/kb/187498
[ "Modern browsers either don't appear to support SSLv2 at all (Google Chrome, Opera 9.52, Firefox) or have it disabled by default (IE7, IE8).\nThat said, are you concerned about losing business from people using much-less-than-modern web browsers?\nPossibly more importantly, are you concerned about your customers' security? Even if they can only connect using SSLv2, do you want them performing secure transactions with you using a protocol that is known to be insecure (see Google)?\nAs a computer professional, I would not hesitate to recommend to management that SSLv2 be disabled. I would leave it up to the bean counters to determine whether they think the additional income is worth the potential liability.\n", "No. The number of users with support for SSLv2 at all, much less SSLv2 only, is negligible. It has been obsolete since 1996, and is disabled or not even included in all modern browsers of significance.\n", "Only you can really answer that question. Your customers' experience of your site will be mediated by their browser. The first place to look for browser information is at a listing of the user-agents that are being used to access your website. Hopefully you have a good log analyzer such as Analog, Weblog, Google Analytics, WebTrends, etc. This is the first place to look and should give you a good idea of the SSL level that your general community supports.\nYou may also want to alter your application to check for the SSL level supported by your users' browsers that get to the \"complete ecommerce transaction\" part of your website. This is the best method to determine if you are turning away customers. \nRemember that the SSL level is auto negotiated between the server and the client (best encryption used first) so you don't necessarily need to disable older versions, but you could pop up a message to the user encouraging them to upgrade.\n", "Presumably you use SSL to protect users from man-in-the-middle or other attacks, yes? SSLv2 is useless for this. Disable it -- the number of users who use a browser without SSLv3 or TLS support is vanishingly small, and it's easier to make them somebody else's problem than explain why somebody in Nigeria is using their credit card.\n" ]
[ 5, 2, 1, 0 ]
[]
[]
[ "iis_5", "ssl" ]
stackoverflow_0000064621_iis_5_ssl.txt
Q: SQL server 2000 Like Statement Usage I have a SQL statement that looks like:

SELECT [Phone]
FROM [Table]
WHERE
(
    [Phone] LIKE '[A-Z][a-z]'
    OR [Phone] = 'N/A'
    OR [Phone] LIKE '[0]'
)

The part I'm having trouble with is the where statement with the "LIKEs". I've seen SQL statements where authors used like statements in the way I'm using them above. At first, I thought this might be a version of Regular Expressions but I've since learned otherwise. Is anyone familiar with using like statements in such a way? Note: the "N/A" is working fine. What I need to match is phone numbers that have characters. Or phone numbers which contain nothing but zero. A: Check here. [] matches a range of characters. I think you want something like this:

SELECT [Phone]
FROM [Table]
WHERE
(
    [Phone] LIKE '%[A-Z]%'
    OR [Phone] LIKE '%[a-z]%'
    OR [Phone] = 'N/A'
    OR [Phone] LIKE '0'
)

A: Try using the t-sql ISNUMERIC function. That will show you which ones are/are not numeric. You may also need to TRIM or REPLACE spaces to get what you want. For example, to find valid phone numbers, replace spaces with '', test with ISNUMERIC, and test with LEN. Although I will warn you, this will be tedious if you have to deal with international phone numbers. The thing to note with your SQL above, is that SQL Server doesn't understand Regex.
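A sketch of the ISNUMERIC route suggested in the second answer; treat it only as a starting point, since ISNUMERIC also accepts things like currency symbols:

SELECT [Phone]
FROM [Table]
WHERE ISNUMERIC(REPLACE([Phone], ' ', '')) = 0  -- has non-numeric characters
   OR REPLACE([Phone], '0', '') = ''            -- nothing but zeros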
SQL server 2000 Like Statement Usage
I have a SQL statement that looks like: SELECT [Phone] FROM [Table] WHERE ( [Phone] LIKE '[A-Z][a-z]' OR [Phone] = 'N/A' OR [Phone] LIKE '[0]' ) The part I'm having trouble with is the where statement with the "LIKEs". I've seen SQL statements where authors used like statements in the way I'm using them above. At first, I thought this might be a version of Regular Expressions but I've since learned otherwise. Is anyone familiar with using like statements in such a way? Note: the "N/A" is working fine. What I need to match is phone numbers that have characters. Or phone numbers which contain nothing but zero.
[ "Check here.\n[] matches a range of characters.\nI think you want something like this:\nSELECT [Phone]\nFROM [Table]\nWHERE\n(\n [Phone] LIKE '%[A-Z]%'\n OR [Phone] LIKE '%[a-z]%'\n OR [Phone] = 'N/A'\n OR [Phone] LIKE '0'\n)\n\n", "Try using the t-sql ISNUMERIC function. That will show you which ones are/are not numeric.\nYou may also need to TRIM or REPLACE spaces to get what you want.\nFor example, to find valid phone numbers, replace spaces with '', test with ISNUMERIC, and test with LEN.\nAlthough I will warn you, this will be tedious if you have to deal with international phone numbers.\nThe thing to note with your SQL above, is that SQL Server doesn't understand Regex.\n" ]
[ 5, 2 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0000096390_sql_sql_server.txt
Q: Switching an OSS project license from GPL to L-GPL on Sourceforge Is it possible to switch an Open Source Project license from GPL to LGPL v3? I am the project originator and the only contributor. A: yes, of course. You can change the licence to whatever you want. Go to your admin page, then edit registration from the menu, view Public Info, then edit the Trove categorisation. You need to remove then add a new category. Easy (if a little link-happy). This applies if you're an admin of the project, no matter how many of you there are or who has contributed. A: In addition to the points noted, note that anyone to whom the product was already licensed (and anyone they licensed it on to) would be entitled to stay under the GPL - you can't change the terms they took the software under (unless they agree to the change). A: AFAIK, you can do that if you're the only contributor to the source code, irrespective of whether you're the original author or lead developer. If you can get approval from all the contributors, then you can change the license. Warning: IANAL.
Switching an OSS project license from GPL to L-GPL on Sourceforge
Is it possible to switch an Open Source Project license from GPL to LGPL v3? I am the project originator and the only contributor.
[ "yes, of course. You can change the licence to whatever you want.\nGo to your admin page, then edit registration from the menu, view Public Info, then edit the Trove categorisation. You need to remove then add a new category. Easy (if a little link-happy).\nThis applies if you're an admin of the project, no matter how many of you there are or who has contributed.\n", "In addition to the points noted, note that anyone to whom the product was already licensed (and anyone they licensed it on to) would be entitled to stay under the GPL - you can't change the terms they took the software under (unless they agree to the change).\n", "AFAIK, you can do that if you're the only contributor to the source code, irrespective of whether you're the original author or lead developer.\nIf you can get approval from all the contributors, then you can change the license.\nWarning: IANAL.\n" ]
[ 6, 3, 1 ]
[]
[]
[ "licensing", "open_source" ]
stackoverflow_0000092754_licensing_open_source.txt
Q: What is the best place to store a configuration file in a Java web application (WAR)? I create a web application (WAR) and deploy it on Tomcat. In the webapp there is a page with a form where an administrator can enter some configuration data. I don't want to store this data in a DBMS, but just in an XML file on the file system. Where to put it? I would like to put the file somewhere in the directory tree where the application itself is deployed. Should my configuration file be in the WEB-INF directory? Or put it somewhere else? And what is the Java code to use in a servlet to find the absolute path of the directory? Or can it be accessed with a relative path? A: What we do is to put it in a separate directory on the server (you could use something like /config, /opt/config, /root/config, /home/username/config, or anything you want). When our servlets start up, they read the XML file, get a few things out of it (most importantly DB connection information), and that's it. I asked about why we did this once. It would be nice to store everything in the DB, but obviously you can't store DB connection information in the DB. You could hardcode things in the code, but that's ugly for many reasons. If the info ever has to change you have to rebuild the code and redeploy. If someone gets a copy of your code or your WAR file they would then get that information. Putting things in the WAR file seems nice, but if you want to change things much it could be a bad idea. The problem is that if you have to change the information, then next time you redeploy it will overwrite the file so anything you didn't remember to change in the version getting built into the WAR gets forgotten. The file in a special place on the file system thing works quite well for us. It doesn't have any big downsides. You know where it is, it's stored separately, makes deploying to multiple machines easy if they all need different config values (since it's not part of the WAR). The only other solution I can think of that would work well would be keeping everything in the DB except the DB login info. That would come from Java system properties that are retrieved through the JVM. This is the Preferences API thing mentioned by Hans Doggen above. I don't think it was around when our application was first developed, if it was it wasn't used. As for the path for accessing the configuration file, it's just a file on the filesystem. You don't need to worry about the web path. So when your servlet starts up it just opens the file at "/config/myapp/config.xml" (or whatever) and it will find the right thing. Just hardcoding the path in for this one seems pretty harmless to me. A: WEB-INF is a good place to put your config file. Here's some code to get the absolute path of the directory from a servlet.

public void init(ServletConfig servletConfig) throws ServletException {
    super.init(servletConfig);
    String path = servletConfig.getServletContext().getRealPath("/WEB-INF");
}

A: Putting it in WEB-INF will hide the XML file from users who try to access it directly through a URL, so yes, I'd say put it in WEB-INF. A: I would not store it in the application folder, because that would override the configuration with a new deployment of the application. I suggest you have a look at the Preferences API, or write something in the users folder (the user that is running Tomcat). A: The answer to this depends on how you intend to read and write that config file. 
For example, the Spring framework gives you the ability to use XML configuration files (or Java property files); these can be stored in your classpath (e.g., in the WEB-INF directory), anywhere else on the filesystem, or even in memory. If you were to use Spring for this, then the easiest place to store the config file is in your WEB-INF directory, and then use Spring's ClassPathXmlApplicationContext class to access your configuration file. But again, it all depends on how you plan to access that file. A: If it is your custom config, WEB-INF is a good place for it. But some libraries may require configs to reside in WEB-INF/classes.
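A bare-bones sketch of the first answer's approach, reading an external XML file when the servlet starts; the path and the parsing step are placeholders:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class ConfiguredServlet extends HttpServlet {
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        InputStream in = null;
        try {
            in = new FileInputStream("/config/myapp/config.xml");
            // parse the stream with the XML library of your choice
        } catch (IOException e) {
            throw new ServletException("Cannot read configuration", e);
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
    }
}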
What is the best place to store a configuration file in a Java web application (WAR)?
I create a web application (WAR) and deploy it on Tomcat. In the webapp there is a page with a form where an administrator can enter some configuration data. I don't want to store this data in a DBMS, but just in an XML file on the file system. Where to put it? I would like to put the file somewhere in the directory tree where the application itself is deployed. Should my configuration file be in the WEB-INF directory? Or put it somewhere else? And what is the Java code to use in a servlet to find the absolute path of the directory? Or can it be accessed with a relative path?
[ "What we do is to put it in a separate directory on the server (you could use something like /config, /opt/config, /root/config, /home/username/config, or anything you want). When our servlets start up, they read the XML file, get a few things out of it (most importantly DB connection information), and that's it.\nI asked about why we did this once.\nIt would be nice to store everything in the DB, but obviously you can't store DB connection information in the DB.\nYou could hardcode things in the code, but that's ugly for many reasons. If the info ever has to change you have to rebuild the code and redeploy. If someone gets a copy of your code or your WAR file they would then get that information.\nPutting things in the WAR file seems nice, but if you want to change things much it could be a bad idea. The problem is that if you have to change the information, then next time you redeploy it will overwrite the file so anything you didn't remember to change in the version getting built into the WAR gets forgotten.\nThe file in a special place on the file system thing works quite well for us. It doesn't have any big downsides. You know where it is, it's stored separately, makes deploying to multiple machines easy if they all need different config values (since it's not part of the WAR).\nThe only other solution I can think of that would work well would be keeping everything in the DB except the DB login info. That would come from Java system properties that are retrieved through the JVM. This is the Preferences API thing mentioned by Hans Doggen above. I don't think it was around when our application was first developed, if it was it wasn't used.\nAs for the path for accessing the configuration file, it's just a file on the filesystem. You don't need to worry about the web path. So when your servlet starts up it just opens the file at \"/config/myapp/config.xml\" (or whatever) and it will find the right thing. Just hardcoding the path in for this one seems pretty harmless to me.\n", "WEB-INF is a good place to put your config file. Here's some code to get the absolute path of the directory from a servlet.\npublic void init(ServletConfig servletConfig) throws ServletException{\n super.init(servletConfig);\n String path = servletConfig.getServletContext().getRealPath(\"/WEB-INF\");\n}\n\n", "Putting it in WEB-INF will hide the XML file from users who try to access it directly through a URL, so yes, I'd say put it in WEB-INF.\n", "I would not store it in the application folder, because that would override the configuration with a new deployment of the application.\nI suggest you have a look at the Preferences API, or write something in the users folder (the user that is running Tomcat).\n", "The answer to this depends on how you intend to read and write that config file.\nFor example, the Spring framework gives you the ability to use XML configuration files (or Java property files); these can be stored in your classpath (e.g., in the WEB-INF directory), anywhere else on the filesystem, or even in memory. If you were to use Spring for this, then the easiest place to store the config file is in your WEB-INF directory, and then use Spring's ClassPathXmlApplicationContext class to access your configuration file.\nBut again, it all depends on how you plan to access that file.\n", "If it is your custom config, WEB-INF is a good place for it. But some libraries may require configs to reside in WEB-INF/classes.\n" ]
[ 50, 16, 9, 6, 3, 1 ]
[]
[]
[ "jakarta_ee", "java", "tomcat", "web_applications" ]
stackoverflow_0000096247_jakarta_ee_java_tomcat_web_applications.txt
Q: Logging image downloads I'm trying to find a way of finding out who is downloading what image from an image gallery. Users can download using a button beside the thumbnail or right click and use the "save link as". Is it possible to relate a user session or ID to a "save link as" action from all browsers using either PHP or JavaScript? A: Yes, my preferred way of doing this would be via PHP. You'd have to set up a script which would load up the file and send it to the user browser. This script would also be able to log the download somewhere (e.g. your database). For example - in very rough pseudo-code:

download.php
$file = $_GET['file'];
updateFileCount($file);
header('Content-Type: image/jpeg');
sendFile($file);

Then, you just have your download link point to download.php instead of the actual file. (Note that updateFileCount and sendFile are functions that you would have to provide, of course - this script is an example of a download script which you could use) Note: I highly recommend avoiding the use of $_GET['file'] to get the whole filename - malicious users could use it to retrieve sensitive files from your web server. But the safe use of PHP downloads is a topic for another question. A: You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that. That page would return the image bytes, as well as logging that the image is downloaded. A: Because the images being saved are on their computer locally there would be no way to get that kind of information as they have already retrieved the image from your system. Even with javascript the best I know that you could do is to log each time a user presses the second mousebutton using some kind of ajax'y stuff. I don't really like the idea, but if you wanted to log everytime someone downloaded an image you could host the images inside a flash or java app that made it a requirement to click a download image button. That way the only way for them to get the image without doing that would be to either capture packets as they came into their side or take a screenshot. A: Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the sessionid, which I presume you can map back to a user. A: I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go. It also has the benefit of being potentially able to keep the tracked files out of the direct web path so that they can't be direct linked to. I use this method in a client site where the images are paid content so must be restricted access.
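Fleshing out the gateway idea from the first two answers, a logging download script might look roughly like this; the image directory, the logging target, and the fixed content type are all assumptions, and basename() here only stands in for real input validation:

<?php
// download.php?file=me.jpg : serve an image and log who fetched it (sketch).
session_start();

$file = basename($_GET['file']);      // drop any path components
$path = '/var/www/gallery/' . $file;  // hypothetical image directory

if (!is_file($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

error_log(session_id() . ' downloaded ' . $file);  // or insert a row in your DB

header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
readfile($path);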
Logging image downloads
I'm trying to find a way of finding out who is downloading what image from an image gallery. Users can download using a button beside the thumbnail, or right-click and use the "save link as" option. Is it possible to relate a user session or ID to a "save link as" action from all browsers using either PHP or JavaScript?
[ "Yes, my preferred way of doing this would be via PHP. You'd have to set up a script which would load up the file and send it to the user browser. This script would also be able to log the download somewhere (e.g. your database).\nFor example - in very rough pseudo-code:\ndownload.php\n$file = $_GET['file'];\nupdateFileCount($file);\nheader('Content-Type: image/jpeg');\nsendFile($file);\n\nThen, you just have your download link point to download.php instead of the actual file. (Note that updateFileCount and sendFile are functions that you would have to provide, of course - this script is an example of a download script which you could use)\nNote: I highly recommend avoiding the use of $_GET['file'] to get the whole filename - malicious users could use it to retrieve sensitive files from your web server. But the safe use of PHP downloads is a topic for another question.\n", "You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that.\nThat page whould return the image bytes, as well as logging that the image is downloaded.\n", "Because the images being saved are on their computer locally there would be no way to get that kind of information as they have already retrieved the image from your system. Even with javascript the best I know that you could do is to log each time a user presses the second mousebutton using some kind of ajax'y stuff.\nI don't really like the idea, but if you wanted to log everytime someone downloaded an image you could host the images inside a flash or java app that made it a requirement to click a download image button. That way the only way for them to get the image without doing that would be to either capture packets as they came into their side or take a screenshot.\n", "Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the sessionid, which I presume you can map back to a user.\n", "I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go. \nIt also has the benefit of being potentially able to keep the tracked files out of the direct web path so that they can't be direct linked to.\nI use this method in a client site where the images are paid content so must be restricted access.\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "download", "session" ]
stackoverflow_0000070600_download_session.txt
Q: How to create a fast PHP library? For our online game, we have written tons of PHP classes and functions grouped by theme in files and then folders. In the end, we now have all our backend code (logic & DB access layers) in a set of files that we call libs and we include our libs in our GUI (web pages, presentation layer) using include_once('pathtolib/file.inc'). The problem is that we have been lazy with inclusions and most include statements are made inside our libs file, with the result that from each webpage, each time we include any libs file, we actually load the entire libs, file by file. This has a significant impact on the performance. Therefore, what would be the best solution? Remove all include statements from the libs file and only call the necessary one from the web pages? Do something else? Server uses a classic LAMP stack (PHP5). EDIT: We have a mix of simple functions (legacy reasons, and the majority of the code) and classes. So autoload will not be enough. A: Manage all includes manually, only where needed Set your include_path to only where it has to be, the default is something like .:/usr/lib/pear/:/usr/lib/php, point it only at where it has to be, php.net/set_include_path Don't use autoload, it's slow and makes APC and equivalent caches' jobs a lot harder You can turn off the "stat"-operation in APC, but then you have to clear the cache manually every time you update the files A: If you've done your programming in an object-oriented way, you can make use of the autoload function, which will load classes from their source files on-demand as you call them. Edit: I noticed that someone downvoted both answers that referred to autoloading. Are we wrong? Is the overhead of the __autoload function too high to use it for performance purposes? If there is something I'm not realizing about this technique, I'd be really interested to know what it is. A: If you want to get really hard-core, do some static analysis, and figure out exactly what libraries are needed when, and only include those. If you use include and not include_once, then there is a bit of a speed savings there as well. All that said, Matt's answer about the Zend Optimizer is right on the money. If you want, try the Advanced PHP Cache (APC), which is an opcode cache, and free. It should be in the PECL repository. A: You could use spl_autoload_register() or __autoload() to create whatever rules you need for including the files that you need for classes, however autoload introduces its own performance overheads. You'll need to make sure whatever you use is prepended to all gui pages using a php.ini setting or an apache config. For your files with generic functions, I would suggest that you wrap them in a utility class and do a simple find and replace to replace all your function() calls with util::function(), which would then enable you to autoload these functions (again, there is an overhead introduced to calling a method rather than a global function). Essentially the best thing to do is go back through your code and pay off your design debt by fixing the include issues. This will give you the most performance benefit, and it will allow you to make the most of optimisers like eAccelerator, Zend Platform and APC. Here is a sample method for loading stuff dynamically public static function loadClass($class) { if (class_exists($class, false) || interface_exists($class, false)) { return; } $file = YOUR_LIB_ROOT.str_replace('_', DIRECTORY_SEPARATOR, $class).'.php'; if (file_exists($file)) { include_once $file; if (!class_exists($class, false) && !interface_exists($class, false)) { throw new Exception('File '.$file.' was loaded but class '.$class.' was not found'); } } } A: What you're looking for is the Automap PECL extension. It basically allows for auto loading with only a small overhead of loading a pre-computed map file. You can also subdivide the map file if you know a specific directory will only pull from certain PHP files. You can read more about it here. A: It's been a while since I used PHP, but shouldn't the Zend Optimizer or Cache help in this case? Does PHP still load & compile every included file again for every request? I'm not sure if autoloading is the answer. If these files are included, they are probably needed in the class including it, so they will still be autoloaded anyway. A: Use a byte code cache (ideally APC) so that PHP doesn't need to parse the libraries on each page load. Be aware that using autoload will negate the benefits of using a byte code cache (you can read more about this here). A: Use a profiler. If you try to optimise without having measures, you're working blind.
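Since the question rules out autoload for the plain functions, a sketch like this only covers the class half of the libs; using a named callback with spl_autoload_register keeps it compatible with the PHP 5 versions of that era, and the LIB_ROOT constant is an assumption:

    <?php
    define('LIB_ROOT', '/var/www/game/libs/');

    function lib_autoload($class)
    {
        // Map My_Package_Foo to /var/www/game/libs/My/Package/Foo.php
        $file = LIB_ROOT . str_replace('_', DIRECTORY_SEPARATOR, $class) . '.php';
        if (is_file($file)) {
            require $file; // runs once, only when the class is first touched
        }
    }
    spl_autoload_register('lib_autoload');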
How to create a fast PHP library?
For our online game, we have written tons of PHP classes and functions grouped by theme in files and then folders. In the end, we now have all our backend code (logic & DB access layers) in a set of files that we call libs and we include our libs in our GUI (web pages, presentation layer) using include_once('pathtolib/file.inc'). The problem is that we have been lazy with inclusions and most include statements are made inside our libs file, with the result that from each webpage, each time we include any libs file, we actually load the entire libs, file by file. This has a significant impact on the performance. Therefore, what would be the best solution? Remove all include statements from the libs file and only call the necessary one from the web pages? Do something else? Server uses a classic LAMP stack (PHP5). EDIT: We have a mix of simple functions (legacy reasons, and the majority of the code) and classes. So autoload will not be enough.
[ "\nManage all includes manually, only where needed\nSet your include_path to only where it has to be, the default is something like .:/usr/lib/pear/:/usr/lib/php, point it only at where it has to be, php.net/set_include_path\nDon't use autoload, it's slow and makes APC and equivalent caches jobs a lot harder\nYou can turn off the \"stat\"-operation in APC, but then you have to clear the cache manually every time you update the files\n\n", "If you've done your programming in an object-oriented way, you can make use of the autoload function, which will load classes from their source files on-demand as you call them.\nEdit: I noticed that someone downvoted both answers that referred to autoloading. Are we wrong? Is the overhead of the __autoload function too high to use it for performance purposes? If there is something I'm not realizing about this technique, I'd be really interested to know what it is.\n", "If you want to get really hard-core, do some static analysis, and figure out exactly what libraries are needed when, and only include those.\nIf you use include and not include_once, then there is a bit of a speed savings there as well. \nAll that said, Matt's answer about the Zend Optimizer is right on the money. If you want, try the Advanced PHP Cache (APC), which is an opcode cache, and free. It should be in the PECL repository.\n", "You could use spl_autoload_register() or __autoload() to create whatever rules you need for including the files that you need for classes, however autoload introduces its own performance overheads. You'll need to make sure whatever you use is prepended to all gui pages using a php.ini setting or an apache config.\nFor your files with generic functions, I would suggest that you wrap them in a utility class and do a simple find and replace to replace all your function() calls with util::function(), which would then enable you to autoload these functions (again, there is an overhead introduced to calling a method rather than a global function).\nEssentially the best thing to do is go back through your code and pay off your design debt by fixing the include issues. This will give you the most performance benefit, and it will allow you to make the most of optimisers like eAccelerator, Zend Platform and APC\nHere is a sample method for loading stuff dynamically\npublic static function loadClass($class)\n{\n if (class_exists($class, false) ||\n interface_exists($class, false))\n {\n return;\n }\n\n $file = YOUR_LIB_ROOT.str_replace('_', DIRECTORY_SEPARATOR, $class).'.php';\n\n if (file_exists($file))\n {\n include_once $file;\n if (!class_exists($class, false) &&\n !interface_exists($class, false))\n {\n throw new Exception('File '.$file.' was loaded but class '.$class.' was not found');\n }\n }\n}\n\n", "What your looking for is Automap PECL extension.\nIt basically allows for auto loading with only a small overhead of loading a pre-computed map file. You can also sub divide the map file if you know a specific directory will only pull from certain PHP files.\nYou can read more about it here.\n", "It's been a while since I used php, but shouldn't the Zend Optimizer or Cache help in this case? Does php still load & compile every included file again for every request?\nI'm not sure if autoloading is the answer. If these files are included, they are probably needed in the class including it, so they will still be autoloaded anyway. \n", "Use a byte code cache (ideally APC) so that PHP doesn't need to parse the libraries on each page load. 
Be aware that using autoload will negate the benefits of using a byte code cache (you can read more about this here).\n", "Use a profiler. If you try to optimise without having measures, you're working blind.\n" ]
[ 5, 3, 3, 2, 2, 1, 1, 1 ]
[]
[]
[ "include", "php" ]
stackoverflow_0000093279_include_php.txt
Q: Is it possible to convert C# double[,,] array to double[] without making a copy I have huge 3D arrays of numbers in my .NET application. I need to convert them to a 1D array to pass it to a COM library. Is there a way to convert the array without making a copy of all the data? I can do the conversion like this, but then I use twice the amount of memory, which is an issue in my application: double[] result = new double[input.GetLength(0) * input.GetLength(1) * input.GetLength(2)]; for (i = 0; i < input.GetLength(0); i++) for (j = 0; j < input.GetLength(1); j++) for (k = 0; k < input.GetLength(2); k++) result[i * input.GetLength(1) * input.GetLength(2) + j * input.GetLength(2) + k] = input[i,j,k]; return result; A: I don't believe the way C# stores that data in memory would make it feasible the same way a simple cast in C would. Why not use a 1d array to begin with and perhaps make a class for the type so you can access it in your program as if it were a 3d array? A: Unfortunately, C# arrays aren't guaranteed to be in contiguous memory like they are in closer-to-the-metal languages like C. So, no. There's no way to convert double[,,] to double[] without an element-by-element copy. A: Consider abstracting access to the data with a Proxy (similar to iterators/smart-pointers in C++). Unfortunately, the syntax isn't as clean as C++, as operator() is not available to overload and operator[] is single-arg, but still close. Of course, this extra level of abstraction adds complexity and work of its own, but it would allow you to make minimal changes to existing code that uses double[,,] objects, while allowing you to use a single double[] array for both interop and your in-C# computation. class Matrix3 { // reference-to-element object public struct Matrix3Elem{ private Matrix3Impl impl; private uint dim0, dim1, dim2; // other constructors Matrix3Elem(Matrix3Impl impl_, uint dim0_, uint dim1_, uint dim2_) { impl = impl_; dim0 = dim0_; dim1 = dim1_; dim2 = dim2_; } public double Value{ get { return impl.GetAt(dim0,dim1,dim2); } set { impl.SetAt(dim0, dim1, dim2, value); } } } // implementation object internal class Matrix3Impl { private double[] data; uint dsize0, dsize1, dsize2; // dimension sizes // .. Resize() public double GetAt(uint dim0, uint dim1, uint dim2) { // .. check bounds return data[ (dim2 * dsize1 + dim1) * dsize0 + dim0 ]; } public void SetAt(uint dim0, uint dim1, uint dim2, double value) { // .. check bounds data[ (dim2 * dsize1 + dim1) * dsize0 + dim0 ] = value; } } private Matrix3Impl impl; public Matrix3Elem Elem(uint dim0, uint dim1, uint dim2){ return new Matrix3Elem(impl, dim0, dim1, dim2); } // .. Resize // .. GetLength0(), GetLength1(), GetLength2() } And then using this type to both read and write -- 'foo[1,2,3]' is now written as 'foo.Elem(1,2,3).Value', in both reading values and writing values, on left side of assignment and value expressions. void normalize(Matrix3 m){ double s = 0; for (i = 0; i < m.GetLength0(); i++) for (j = 0; j < m.GetLength1(); j++) for (k = 0; k < m.GetLength2(); k++) { s += m.Elem(i,j,k).Value; } for (i = 0; i < m.GetLength0(); i++) for (j = 0; j < m.GetLength1(); j++) for (k = 0; k < m.GetLength2(); k++) { m.Elem(i,j,k).Value /= s; } } Again, added development costs, but shares data, removing copying overhead and copying-related development costs. It's a tradeoff. A: Without knowing details of your COM library, I'd look into creating a facade class in .Net and exposing it to COM, if necessary. Your facade would take a double[,,] and have an indexer that will map from [] to [,,]. Edit: I agree about the points made in the comments, Loren's suggestion is better. A: As a workaround you could make a class which maintains the array in one dimensional form (maybe even in closer to bare metal form so you can pass it easily to the COM library?) and then overload operator[] on this class to make it usable as a multidimensional array in your C# code.
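If copying really must be avoided, one hedged C# sketch is to invert the first answer's suggestion: keep a single double[] as the backing store and expose 3D access over it, so the raw array can be handed to COM with no conversion at all (the class and member names here are made up):

    public sealed class Array3D
    {
        private readonly double[] data;
        private readonly int n1, n2;

        public Array3D(int n0, int n1, int n2)
        {
            this.n1 = n1;
            this.n2 = n2;
            data = new double[n0 * n1 * n2];
        }

        // Row-major layout matching the question's index arithmetic
        public double this[int i, int j, int k]
        {
            get { return data[(i * n1 + j) * n2 + k]; }
            set { data[(i * n1 + j) * n2 + k] = value; }
        }

        public double[] Raw
        {
            get { return data; } // pass this straight to the COM call
        }
    }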
Is it possible to convert C# double[,,] array to double[] without making a copy
I have huge 3D arrays of numbers in my .NET application. I need to convert them to a 1D array to pass it to a COM library. Is there a way to convert the array without making a copy of all the data? I can do the conversion like this, but then I use twice the amount of memory, which is an issue in my application: double[] result = new double[input.GetLength(0) * input.GetLength(1) * input.GetLength(2)]; for (i = 0; i < input.GetLength(0); i++) for (j = 0; j < input.GetLength(1); j++) for (k = 0; k < input.GetLength(2); k++) result[i * input.GetLength(1) * input.GetLength(2) + j * input.GetLength(2) + k] = input[i,j,k]; return result;
[ "I don't believe the way C# stores that data in memory would make it feasible the same way a simple cast in C would. Why not use a 1d array to begin with and perhaps make a class for the type so you can access it in your program as if it were a 3d array?\n", "Unfortunately, C# arrays aren't guaranteed to be in contiguous memory like they are in closer-to-the-metal languages like C. So, no. There's no way to convert double[,,] to double[] without an element-by-element copy.\n", "Consider abstracting access to the data with a Proxy (similar to iterators/smart-pointers in C++). Unfortunately, syntax isn't as clean as C++ as operator() not available to overload and operator[] is single-arg, but still close.\nOf course, this extra level of abstraction adds complexity and work of its own, but it would allow you to make minimal changes to existing code that uses double[,,] objects, while allowing you to use a single double[] array for both interop and your in-C# computation.\nclass Matrix3\n{\n // referece-to-element object\n public struct Matrix3Elem{\n private Matrix3Impl impl;\n private uint dim0, dim1, dim2;\n // other constructors\n Matrix3Elem(Matrix3Impl impl_, uint dim0_, uint dim1_, uint dim2_) {\n impl = impl_; dim0 = dim0_; dim1 = dim1_; dim2 = dim2_;\n }\n public double Value{\n get { return impl.GetAt(dim0,dim1,dim2); }\n set { impl.SetAt(dim0, dim1, dim2, value); }\n }\n }\n\n // implementation object\n internal class Matrix3Impl\n {\n private double[] data;\n uint dsize0, dsize1, dsize2; // dimension sizes\n // .. Resize() \n public double GetAt(uint dim0, uint dim1, uint dim2) {\n // .. check bounds\n return data[ (dim2 * dsize1 + dim1) * dsize0 + dim0 ];\n }\n public void SetAt(uint dim0, uint dim1, uint dim2, double value) {\n // .. check bounds\n data[ (dim2 * dsize1 + dim1) * dsize0 + dim0 ] = value;\n }\n }\n\n private Matrix3Impl impl;\n\n public Matrix3Elem Elem(uint dim0, uint dim1, uint dim2){\n return new Matrix2Elem(dim0, dim1, dim2);\n }\n // .. Resize\n // .. GetLength0(), GetLength1(), GetLength1()\n}\n\nAnd then using this type to both read and write -- 'foo[1,2,3]' is now written as 'foo.Elem(1,2,3).Value', in both reading values and writing values, on left side of assignment and value expressions.\nvoid normalize(Matrix3 m){\n\n double s = 0;\n for (i = 0; i < input.GetLength0; i++) \n for (j = 0; j < input.GetLength(1); j++) \n for (k = 0; k < input.GetLength(2); k++)\n {\n s += m.Elem(i,j,k).Value;\n }\n for (i = 0; i < input.GetLength0; i++) \n for (j = 0; j < input.GetLength(1); j++) \n for (k = 0; k < input.GetLength(2); k++)\n {\n m.Elem(i,j,k).Value /= s;\n }\n}\n\nAgain, added development costs, but shares data, removing copying overhead and copying related developtment costs. It's a tradeoff.\n", "Without knowing details of your COM library, I'd look into creating a facade class in .Net and exposing it to COM, if necessary.\nYour facade would take a double[,,] and have an indexer that will map from [] to [,,].\nEdit: I agree about the points made in the comments, Lorens suggestion is better.\n", "As a workaround you could make a class which maintains the array in one dimensional form (maybe even in closer to bare metal form so you can pass it easily to the COM library?) and then overload operator[] on this class to make it usable as a multidimensional array in your C# code.\n" ]
[ 8, 6, 3, 1, 1 ]
[]
[]
[ ".net", "arrays", "c#" ]
stackoverflow_0000096054_.net_arrays_c#.txt
Q: Dependency Injection and Circular reference I am just starting out with DI & unit testing and have hit a snag which I am sure is a no brainer for those more experienced devs : I have a class called MessageManager which receives data and saves it to a db. Within the same assembly (project in Visual Studio) I have created a repository interface with all the methods needed to access the db. The concrete implementation of this interface is in a separate assembly called DataAccess. So DataAccess needs a project reference to MessageManager to know about the repository interface. And MessageManager needs a project reference to DataAccess so that the client of MessageManager can inject a concrete implementation of the repository interface. This is of courser not allowed I could move the interface into the data access assembly but I believe the repository interface is meant to reside in the same assembly as the client that uses it So what have I done wrong? A: You should separate your interface out of either assembly. Putting the interface along with the consumer or the implementor defeats the purpose of having the interface. The purpose of the interface is to allow you to inject any object that implements that interface, whether or not it's the same assembly that your DataAccess object belongs to. On the other hand you need to allow MessageManager to consume that interface without the need to consume any concrete implementation. Put your interface in another project, and problem is solved. A: You only have two choices: add an assembly to hold the interface or move the interface into the DataAccess assembly. Even if you're developing an architecture where the DataAccess class may someday be replaced by another implementor (even in another assembly) of the repository interface, there's no reason to exclude it from the DataAccess assembly. A: I think you should move the repository interface over to the DataAccess assembly. Then DataAccess has no need to reference MessageManager anymore. However, it remains hard to say since I know next to nothing about your architecture... A: Frequently you can solve circular reference issues by using setter injection instead of constructor injection. In pseudo-code: Foo f = new Foo(); Bar b = new Bar(); f.setBar(b); b.setFoo(f); A: Are you using an Inversion of Control Container? If so, the answer is simple. Assembly A contains: MessageManager IRepository ContainerA (add MessageManager) Assembly B contains (and ref's AssemblyA): Repository implements IRepository ContainerB extends ContainerA (add Repository) Assembly C (or B) would start the app/ask the container for MessageManager which would know how to resolve MessageManager and the IRepository. A: Dependency inversion is in play: High level modules should not depend upon low level modules. Both should depend upon abstractions. Abstractions should not depend upon details. Details should depend upon abstractions. The abstraction that the classes in the DatAccess assembly depend upon needs to be in a separate assembly from the DataAccess classes and the concrete implementation of that abstration (MessageManager). Yes that is more assemblies. Personally that's not a big deal for me. I don't see a big downside in extra assemblies.
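A small C# sketch of the three-assembly layout most of the answers converge on; the assembly and type names here are illustrative, not taken from the question:

    // Contracts.dll - holds only the abstraction
    public interface IMessageRepository
    {
        void Save(string message);
    }

    // DataAccess.dll - references Contracts.dll only
    public class SqlMessageRepository : IMessageRepository
    {
        public void Save(string message) { /* db access code */ }
    }

    // Core.dll - also references Contracts.dll only; no cycle remains
    public class MessageManager
    {
        private readonly IMessageRepository repository;

        public MessageManager(IMessageRepository repository)
        {
            this.repository = repository; // injected by the client
        }

        public void Handle(string message)
        {
            repository.Save(message);
        }
    }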
Dependency Injection and Circular reference
I am just starting out with DI & unit testing and have hit a snag which I am sure is a no-brainer for those more experienced devs: I have a class called MessageManager which receives data and saves it to a db. Within the same assembly (project in Visual Studio) I have created a repository interface with all the methods needed to access the db. The concrete implementation of this interface is in a separate assembly called DataAccess. So DataAccess needs a project reference to MessageManager to know about the repository interface. And MessageManager needs a project reference to DataAccess so that the client of MessageManager can inject a concrete implementation of the repository interface. This is of course not allowed. I could move the interface into the data access assembly, but I believe the repository interface is meant to reside in the same assembly as the client that uses it. So what have I done wrong?
[ "You should separate your interface out of either assembly. Putting the interface along with the consumer or the implementor defeats the purpose of having the interface.\nThe purpose of the interface is to allow you to inject any object that implements that interface, whether or not it's the same assembly that your DataAccess object belongs to. On the other hand you need to allow MessageManager to consume that interface without the need to consume any concrete implementation.\nPut your interface in another project, and problem is solved.\n", "You only have two choices: add an assembly to hold the interface or move the interface into the DataAccess assembly. Even if you're developing an architecture where the DataAccess class may someday be replaced by another implementor (even in another assembly) of the repository interface, there's no reason to exclude it from the DataAccess assembly.\n", "I think you should move the repository interface over to the DataAccess assembly. Then DataAccess has no need to reference MessageManager anymore.\nHowever, it remains hard to say since I know next to nothing about your architecture...\n", "Frequently you can solve circular reference issues by using setter injection instead of constructor injection.\nIn pseudo-code:\nFoo f = new Foo();\nBar b = new Bar();\nf.setBar(b);\nb.setFoo(f);\n\n", "Are you using an Inversion of Control Container? If so, the answer is simple. \nAssembly A contains:\n\nMessageManager\nIRepository\nContainerA (add MessageManager)\n\nAssembly B contains (and ref's AssemblyA):\n\nRepository implements IRepository\nContainerB extends ContainerA (add Repository)\n\nAssembly C (or B) would start the app/ask the container for MessageManager which would know how to resolve MessageManager and the IRepository.\n", "Dependency inversion is in play: \nHigh level modules should not depend upon low level modules. Both should depend upon abstractions. Abstractions should not depend upon details. Details should depend upon abstractions. \nThe abstraction that the classes in the DatAccess assembly depend upon needs to be in a separate assembly from the DataAccess classes and the concrete implementation of that abstration (MessageManager). \nYes that is more assemblies. Personally that's not a big deal for me. I don't see a big downside in extra assemblies.\n" ]
[ 3, 2, 1, 1, 1, 0 ]
[ "You could leave the structure as you currently have it (without the dependency from MessageManager to DataAccess that causes the problem) and then have MessageManager dynamically load the concrete implementation required at runtime using the System.Reflection.Assembly class.\n" ]
[ -1 ]
[ "oop", "tdd" ]
stackoverflow_0000089959_oop_tdd.txt
Q: What is the quickest way to a very simple blog? I am about to start a new project and would like to document its development in a very simple blog. My requirements are: self-hosted on my Gentoo-based LAMP stack (that seems to rule out blogger) Integration in a django based website (as in www.myproject.com/about, www.myproject.com/blog etc rather than www.myproject.com and a totally different site at blog.myproject.com) very little or no learning curve that's specific to the blog engine (don't want to learn an API just to blog, but having to get deeper into Django to be able to roll my own would be OK) According to the answers so far, there is a chance that this excludes Wordpress Should I a) install blog engine X (please specify X) b) use django to hand-roll a way to post new entries and a page on my website to display the posts in descending chronological order A: Install Wordpress. It is the most common engine for a reason. It's PHP but will play just fine in your environment. A: If you're the perfectionist kind, roll your own. It isn't that hard You learn something useful You'll get exactly what you want and need Be warned that you may run into a quagmire fighting comment spam, fixing security holes, etc. But it'll probably be a fun project. If you are the practical type and ready to face some integration pain, use an existing engine like WadcomBlog (Python) or PyBlosxom, or something completely different like MovableType or WordPress. Here's a simple Django blog example to get you started. Some pros and cons of rolling your own blog engine are in this article by Phil Haack. Jeff Croft apparently rolled his own as well. A: I've tried WordPress recently and am very disappointed. As long as you don't want to customize anything, all is well. But imagine you want to install a plugin to handle Markdown editing. There the trouble begins. The plugin architecture of WordPress is seriously screwed up. In the case of Markdown, this means that no good solution exists. The existing plugin is a series of (quite well-documented) hacks that fall apart at a hard stare. I never intended to write the least bit of code for WordPress but the last few days, I've been knee-deep in PHP the whole time, hacking plugins as well as the WordPress core in order to make it work for my special scenario (which really isn't all that special, I'm just a perfectionist). Which is a pity, because the documentation of WordPress is more than just patchy. I don't use it anymore, I grep for functions and read the source. All in all, one of the less enjoyable OpenSource projects. A: You can spend hours if not days customizing Wordpress with plugins, themes, etc... I would go with a 0 installation solution, such as blogger (https://www.blogger.com/start) You can even use your own domain name with it if you need to. EDIT: Plus, if you ever get slashdotted, digged or redditted, google can handle the traffic, your server probably can't. A: For me, Wordpress is still the quickest & simplest to set up and get going. It can be extended to do pretty much anything or you can keep it real simple. Runs on PHP, but unless you want to write plugins for it, you never need to write code A: Have a look at Blosxom. It's file-based, so no crufty database. The basic idea has been ported to different languages, pyblosxom is in Python. A: I use PyBlosxom for my personal blog, and I think it is pretty useful if you need something minimalistic. The deployment is simple, as you need only the python runtime and cgi. You might want to have some basic knowledge of python at least if you are going to use it, though. Have a look at Blosxom. It's file-based, so no crufty database. The basic idea has been ported to different languages, pyblosxom is in Python. A: I wrote the engine for my personal blog in maybe 6 hours during one weekend, with comments, labels, simplified markup, sitemap, feeds and so on. It was great fun and I learned a lot of Django. If you decide to go this way, look at generic views; this Django feature will save you a lot of work (and teach you a few useful tricks). A: I haven't tried it myself yet (other than the demo), but I've bookmarked Chyrp so that if I ever need to set up a quick & simple blog (kind of like you're describing) I could try this. So check it out, might be a good option for you.
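For the hand-rolled Django route the last answers describe, a minimal sketch using that era's function-based generic views might start like this; the app, model, and field names are assumptions:

    # models.py
    from django.db import models

    class Post(models.Model):
        title = models.CharField(max_length=200)
        body = models.TextField()
        created = models.DateTimeField(auto_now_add=True)

        class Meta:
            ordering = ['-created']  # newest posts first

    # urls.py (old-style generic views, pre-Django 1.3)
    from django.conf.urls.defaults import *
    from myblog.models import Post

    urlpatterns = patterns('',
        (r'^blog/$', 'django.views.generic.list_detail.object_list',
         {'queryset': Post.objects.all()}),
    )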
What is the quickest way to a very simple blog?
I am about to start a new project and would like to document its development in a very simple blog. My requirements are: self-hosted on my Gentoo-based LAMP stack (that seems to rule out blogger) Integration in a django based website (as in www.myproject.com/about, www.myproject.com/blog etc rather than www.myproject.com and a totally different site at blog.myproject.com) very little or no learning curve that's specific to the blog engine (don't want to learn an API just to blog, but having to get deeper into Django to be able to roll my own would be OK) According to the answers so far, there is a chance that this excludes Wordpress Should I a) install blog engine X (please specify X) b) use django to hand-roll a way to post new entries and a page on my website to display the posts in descending chronological order
[ "Install Wordpress. It is the most common engine for a reason. It's PHP but will play just fine in your environment.\n", "If you're the perfectionist kind, roll your own.\n\nIt isn't that hard\nYou learn something useful\nYou'll get exactly what you want and need\n\nBe warned that you may run into a quagmire fighting comment spam, fixing security holes, etc. But it'll probably be a fun project.\nIf you are the practical type and ready to face some integration pain, use an existing engine like WadcomBlog (Python) or PyBlosxom, or something completely different like MovableType or WordPress.\nHere's a simple Django blog example to get you started.\nSome pros and cons of rolling your blog engine this article by Phil Haack.\nJeff Croft apparently rolled his own as well.\n", "I've tried WordPress recently and am very disappointed. As long as you don't want to customize anything, all is well. But imagine you want to install a plugin to handle Markdown editing. There the trouble begins. The plugin architecture of WordPress is seriously screwd up. In the case of Markdown, this means that no good solution exists. The existing plugin is a series of (quite well-documented) hacks that fall apart at a hard stare.\nI never intended to write the least bit of code for WordPress but the last few days, I've been knee-deep in PHP the whole time, hacking plugins as well as the WordPress core in order to make it work for my special scenario (which really isn't all that special, I'm just a perfectionist). Which is a pity, because the documentation of WordPress is more than just patchy. I don't use it anymore, I grep for functions and read the source. All in all, one of the less enjoyable OpenSource projects.\n", "You can spend hours if not days customizing Wordpress with plugins, themes, etc...\nI would go with a 0 installation solution, such as blogger (https://www.blogger.com/start)\nYou can even use our own domain name with it if you need do. \nEDIT: Plus, if you ever get slashdotted, digged or redditted, google can handle the traffic, your server probably can't.\n", "For me, Wordpress is still the quickest & simplest to setup and get going. It can be extended to do pretty much anything or you can keep it real simple. Runs on PHP, but unless you want to write plugins for it, you never need to write code\n", "Have a look at Blosxom. It's file-based, so no crufty database. The basic idea has been ported to different languages, pyblosxom is in Python.\n", "I use PyBlosxom for my personal blog, and I think it is pretty useful if you need something minimalistic. The deployment is simple, as you need only the python runtime and cgi. You might want to have some basic knowledge of python at least if you are going to use it, though.\n\nHave a look at Blosxom. It's file-based, so no crufty database. The basic idea has been ported to different languages, pyblosxom is in Python.\n\n", "I wrote the engine for my personal blog in maybe 6 hours during one weekend, with comments, labels, simplified markup, sitemap, feeds and so on. It was great fun and I learned a lot of Django.\nIf you decide to go this way, look at generic views, this Django feature will save you much of work (and learn few useful tricks).\n", "I Haven't tried it myself yet (other than the demo), but I've bookmarked Chyrp so that if I ever need to set up a quick & simple blog (kind of like you're describing) I could try this. So check it out, might be a good option for you.\n" ]
[ 16, 13, 6, 5, 3, 1, 1, 1, 0 ]
[]
[]
[ "blogs", "django" ]
stackoverflow_0000051311_blogs_django.txt
Q: interacting with a CMutex without MFC We have multiple MFC apps, which use CMutex( false, "blah" ), where "blah" allows the mutex to work across process boundaries. One of these apps was re-written without MFC (using Qt instead). How can I simulate the CMutex using Win32 calls? (Qt's QMutex is not inter-process.) I prefer not to modify the MFC apps. A: For inter-process mutexes you want these calls: CreateMutex WaitForSingleObject ReleaseMutex CloseHandle These are the underlying Win32 API calls that CMutex is a wrapper around. For in-process only mutexes you can also use these calls, which are faster: InitializeCriticalSection EnterCriticalSection LeaveCriticalSection DeleteCriticalSection A: The following funcs will probably be what you want, they are all documented on MSDN. CreateMutex(...) WaitForSingleObject(...) ReleaseMutex(...)
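A hedged C++ sketch of the Win32 calls the answers list, wrapped so it behaves like the MFC CMutex(FALSE, "blah") the other apps use; the wrapper class itself is illustrative, but the API calls are the documented ones:

    #include <windows.h>

    class NamedMutex
    {
        HANDLE h_;
    public:
        explicit NamedMutex(const char* name)
            : h_(CreateMutexA(NULL, FALSE, name)) {} // same name => shared across processes

        ~NamedMutex() { if (h_) CloseHandle(h_); }

        bool lock(DWORD timeoutMs = INFINITE)
        {
            return WaitForSingleObject(h_, timeoutMs) == WAIT_OBJECT_0;
        }

        void unlock() { ReleaseMutex(h_); }
    };

    // usage, matching the MFC apps' "blah" mutex:
    // NamedMutex m("blah");
    // if (m.lock()) { /* shared resource */ m.unlock(); }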
interacting with a CMutex without MFC
We have multiple MFC apps, which use CMutex( false, "blah" ), where "blah" allows the mutex to work across process boundaries. One of these apps was re-written without MFC (using Qt instead). How can I simulate the CMutex using Win32 calls? (Qt's QMutex is not inter-process.) I prefer not to modify the MFC apps.
[ "For inter-process mutexes you want these calls:\n\nCreateMutex\nWaitForSingleObject\nReleaseMutex\nCloseHandle\n\nThese are the underlying Win32 API calls that CMutex is a wrapper around.\nFor in-process only mutexes you can also use these calls, which are faster:\n\nInitializeCriticalSection\nEnterCriticalSection\nLeaveCriticalSection\nDeleteCriticalSection\n\n", "The following funcs will probably be what you want, they are all documented on MSDN.\nCreateMutex(...)\nWaitForSingleObject(...)\nReleaseMutex(...)\n\n" ]
[ 3, 1 ]
[]
[]
[ "mfc", "qt" ]
stackoverflow_0000096538_mfc_qt.txt
Q: KornShell (ksh) code to send attachments with mailx and uuencode? I need to attach a file with mailx but at the moment I am not having success. Here's my code: subject="Something happened" to="[email protected]" body="Attachment Test" attachment=/path/to/somefile.csv uuencode $attachment | mailx -s "$subject" "$to" << EOF The message is ready to be sent with the following file or link attachments: somefile.csv Note: To protect against computer viruses, e-mail programs may prevent sending or receiving certain types of file attachments. Check your e-mail security settings to determine how attachments are handled. EOF Any feedback would be highly appreciated. Update I have added the attachment var to avoid having to use the path every time. A: You have to concat both the text of your message and the uuencoded attachment: $ subject="Something happened" $ to="[email protected]" $ body="Attachment Test" $ attachment=/path/to/somefile.csv $ $ cat >msg.txt <<EOF > The message is ready to be sent with the following file or link attachments: > > somefile.csv > > Note: To protect against computer viruses, e-mail programs may prevent > sending or receiving certain types of file attachments. Check your > e-mail security settings to determine how attachments are handled. > > EOF $ ( cat msg.txt ; uuencode $attachment somefile.csv) | mailx -s "$subject" "$to" There are different ways to provide the message text, this is just an example that is close to your original question. If the message should be reused it makes sense to just store it in a file and use this file. A: Well, here are the first few problems you've got. You appear to be assuming that a mail client is going to handle uuencoded attachment without any headers. That won't happen. You're misusing I/O redirection: uuencode's output and the here-document are both being fed to mailx, which can't happen. You're misusing uuencode: if one path is given, it's just a name to give the decoded file, not an input file name. Giving the file twice will assign the same name to the decoded file as that which was read. The -m flag forces base64 encode. But this still isn't going to provide attachment headers for mailx. You're way better getting a copy of mpack, which will do what you want. If you must do it, you could do something like this: cat <<EOF | ( cat -; uuencode -m /path/to/somefile.csv /path/to/somefile.csv; ) | mailx -s "$subject" "$to" place your message from the here block in your example here EOF There are lots of other possibilities... but this one still has the here document as in your example and was easy off the top of my head, and there's no temp file involved.
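The accepted fix, folded into a reusable ksh function; nothing here goes beyond what the answer shows, apart from the hypothetical function name and the basename call for the attachment label:

    #!/bin/ksh
    send_with_attachment() {
        subject="$1"
        to="$2"
        body="$3"
        attachment="$4"
        (
            print "$body"
            uuencode "$attachment" "$(basename "$attachment")"
        ) | mailx -s "$subject" "$to"
    }

    send_with_attachment "Something happened" "[email protected]" \
        "Attachment Test" /path/to/somefile.csv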
KornShell (ksh) code to send attachments with mailx and uuencode?
I need to attach a file with mailx but at the moment I am not having success. Here's my code: subject="Something happened" to="[email protected]" body="Attachment Test" attachment=/path/to/somefile.csv uuencode $attachment | mailx -s "$subject" "$to" << EOF The message is ready to be sent with the following file or link attachments: somefile.csv Note: To protect against computer viruses, e-mail programs may prevent sending or receiving certain types of file attachments. Check your e-mail security settings to determine how attachments are handled. EOF Any feedback would be highly appreciated. Update I have added the attachment var to avoid having to use the path every time.
[ "You have to concat both the text of your message and the uuencoded attachment:\n$ subject=\"Something happened\"\n$ to=\"[email protected]\"\n$ body=\"Attachment Test\"\n$ attachment=/path/to/somefile.csv\n$\n$ cat >msg.txt <<EOF\n> The message is ready to be sent with the following file or link attachments:\n>\n> somefile.csv\n>\n> Note: To protect against computer viruses, e-mail programs may prevent\n> sending or receiving certain types of file attachments. Check your\n> e-mail security settings to determine how attachments are handled.\n>\n> EOF\n$ ( cat msg.txt ; uuencode $attachment somefile.csv) | mailx -s \"$subject\" \"$to\"\n\nThere are different ways to provide the message text, this is just an example that is close to your original question. If the message should be reused it makes sense to just store it in a file and use this file.\n", "Well, here are the first few problems you've got.\n\nYou appear to be assuming that a mail client is going to handle uuencoded attachment without any headers. That won't happen.\nYou're misusing I/O redirection: uuencode's output and the here-document are both being fed to mailx, which can't happen.\nYou're misusing uuencode: if one path is given, it's just a name to give the decoded file, not an input file name. Giving the file twice will assign the same name to the decoded file as that which was read. The -m flag forces base64 encode. But this still isn't going to provide attachment headers for mailx.\n\nYou're way better getting a copy of mpack, which will do what you want.\nIf you must do it, you could do something like this:\ncat <<EOF | ( cat -; uuencode -m /path/to/somefile.csv /path/to/somefile.csv; ) | mailx -s \"$subject\" \"$to\" \nplace your message from the here block in your example here\nEOF\n\nThere are lots of other possibilities... but this one still has the here document\nas in your example and was easy off the top of my head, and there's no temp file involved.\n" ]
[ 3, 1 ]
[]
[]
[ "ksh", "scripting", "shell", "solaris", "unix" ]
stackoverflow_0000096326_ksh_scripting_shell_solaris_unix.txt
Q: Unix commands like ping, ssh, work fine but socket-based programs are failing in connect I got a call from a tester about a machine that was failing our software. When I examined the problem machine, I quickly realized the problem was fairly low level: Inbound network traffic works fine. Basic outbound command like ping and ssh are working fine, but anything involving the connect() call is failing with "No route to host". For example - on this particular machine this program will fail on the connect() statement for any IP address other than 127.0.0.1: #!/usr/bin/perl -w use strict; use Socket; my ($remote,$port, $iaddr, $paddr, $proto, $line); $remote = shift || 'localhost'; $port = shift || 2345; # random port if ($port =~ /\D/) { $port = getservbyname($port, 'tcp') } die "No port" unless $port; $iaddr = inet_aton($remote) || die "no host: $remote"; $paddr = sockaddr_in($port, $iaddr); $proto = getprotobyname('tcp'); socket(SOCK, PF_INET, SOCK_STREAM, $proto) || die "socket: $!"; connect(SOCK, $paddr) || die "connect: $!"; while (defined($line = <SOCK>)) { print $line; } close (SOCK) || die "close: $!"; exit; Any suggestions about where this machine is broken? It's running SUSE-10.2. A: I would check firewall configuration on that machine. It is possible for iptables (I guess your SUSE has iptables firewall) to be setup to let trough only ping ICMP packets. A: Is the firewall turned off? A: Firewall is always possible, but it does say that ssh can connect, so that seems unlikely. I'd say have a look at the routes ("route" command on Linux), and make sure you don't have like two default routes, or weird ones or whatever. All in all I'd say test ping and ssh and your program on the same distant IP, and if they all fail, you have a route problem. If only your program fails, you probably have either a firewall problem or program problem :) A: Try pointing connect() to the same host:port where your SSH command works. Also, keep in mind that some firewalls can apply different rules for different user accounts (and sometimes for different executables). Therefore, make sure you run ssh and your test app under the same user account and that SUID isn't set for SSH.
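The checks the answers suggest boil down to a couple of commands on the failing machine; this is only a diagnostic sketch, since the answers point at the firewall and routing table rather than the Perl code:

    # look for OUTPUT-chain rules that only allow ICMP; a REJECT rule with
    # --reject-with icmp-host-unreachable produces exactly "No route to host"
    iptables -L -n -v

    # check for duplicate or odd default routes
    route -n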
Unix commands like ping, ssh, work fine but socket-based programs are failing in connect
I got a call from a tester about a machine that was failing our software. When I examined the problem machine, I quickly realized the problem was fairly low level: Inbound network traffic works fine. Basic outbound command like ping and ssh are working fine, but anything involving the connect() call is failing with "No route to host". For example - on this particular machine this program will fail on the connect() statement for any IP address other than 127.0.0.1: #!/usr/bin/perl -w use strict; use Socket; my ($remote,$port, $iaddr, $paddr, $proto, $line); $remote = shift || 'localhost'; $port = shift || 2345; # random port if ($port =~ /\D/) { $port = getservbyname($port, 'tcp') } die "No port" unless $port; $iaddr = inet_aton($remote) || die "no host: $remote"; $paddr = sockaddr_in($port, $iaddr); $proto = getprotobyname('tcp'); socket(SOCK, PF_INET, SOCK_STREAM, $proto) || die "socket: $!"; connect(SOCK, $paddr) || die "connect: $!"; while (defined($line = <SOCK>)) { print $line; } close (SOCK) || die "close: $!"; exit; Any suggestions about where this machine is broken? It's running SUSE-10.2.
[ "I would check firewall configuration on that machine. It is possible for iptables (I guess your SUSE has iptables firewall) to be setup to let trough only ping ICMP packets.\n", "Is the firewall turned off?\n", "Firewall is always possible, but it does say that ssh can connect, so that seems unlikely.\nI'd say have a look at the routes (\"route\" command on Linux), and make sure you don't have like two default routes, or weird ones or whatever. All in all I'd say test ping and ssh and your program on the same distant IP, and if they all fail, you have a route problem. If only your program fails, you probably have either a firewall problem or program problem :) \n", "Try pointing connect() to the same host:port where your SSH command works. Also, keep in mind that some firewalls can apply different rules for different user accounts (and sometimes for different executables). Therefore, make sure you run ssh and your test app under the same user account and that SUID isn't set for SSH.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "linux", "sockets", "suse" ]
stackoverflow_0000096113_linux_sockets_suse.txt
Q: How do I stop the "Found new hardware wizard" appearing? As part of our product we use 3rd party hardware and drivers. Unfortunately, these drivers aren't signed so up pops the "Found new hardware wizard" when installing or upgrading our product. Our product is web based and allows the users access to everything they need remotely, apart from this one case. Is there a registry hack or other OS setting that will stop the wizard appearing? Can we sign the drivers ourselves? Could we write a program that would click "Next, Next, Next" on the wizard that will work on all language variants of Windows? A: There is 2 ways to get silent installation: 1) Sign the driver and that can be hard/impossible if you don't have the driver source code. 2) You can write a co-installer dll using this api's. The problem that this is not reliable and from our experience there is a lot of workarounds for different Windows flavors. The only 100% reliable option will be option one.
How do I stop the "Found new hardware wizard" appearing?
As part of our product we use 3rd party hardware and drivers. Unfortunately, these drivers aren't signed so up pops the "Found new hardware wizard" when installing or upgrading our product. Our product is web based and allows the users access to everything they need remotely, apart from this one case. Is there a registry hack or other OS setting that will stop the wizard appearing? Can we sign the drivers ourselves? Could we write a program that would click "Next, Next, Next" on the wizard that will work on all language variants of Windows?
[ "There is 2 ways to get silent installation:\n1) Sign the driver and that can be hard/impossible if you don't have the driver source code.\n2) You can write a co-installer dll using this api's. The problem that this is not reliable and from our experience there is a lot of workarounds for different Windows flavors.\nThe only 100% reliable option will be option one. \n" ]
[ 1 ]
[]
[]
[ "drivers", "hardware", "windows" ]
stackoverflow_0000094221_drivers_hardware_windows.txt
Q: Any suggestions for effectively testing AJAX enabled web pages using MSVS Tester Edition Tools? It seems like MS really left a massive gaping hole in their automated testing tools in Visual Studio for web pages with AJAX components and I have been hard pressed to find any commentary or third party add-ons that remedy the problem. Anyone have any advice on automating web tests in MSVS for AJAX pages? A: I eventually gave up trying, and just stuck with WATIR A: I don't know if this will help, but you can try this: https://github.com/pivotal/jsunit EDIT:Sorry I reread your Q and realized you meant specific to VS. I don't know if you are familiar with Script#, but I had read some talk a little while back that someone was building a testing framework to use with that, and Script# can be used with MSAjax. Might be worth some investigation. http://scriptsharp.com/
Any suggestions for effectively testing AJAX enabled web pages using MSVS Tester Edition Tools?
It seems like MS really left a massive gaping hole in their automated testing tools in Visual Studio for web pages with AJAX components and I have been hard pressed to find any commentary or third party add-ons that remedy the problem. Anyone have any advice on automating web tests in MSVS for AJAX pages?
[ "I eventually gave up trying, and just stuck with WATIR\n", "I don't know if this will help, but you can try this:\nhttps://github.com/pivotal/jsunit\nEDIT:Sorry I reread your Q and realized you meant specific to VS. I don't know if you are familiar with Script#, but I had read some talk a little while back that someone was building a testing framework to use with that, and Script# can be used with MSAjax. Might be worth some investigation.\nhttp://scriptsharp.com/\n" ]
[ 1, 0 ]
[]
[]
[ "ajax", "testing", "visual_studio" ]
stackoverflow_0000096719_ajax_testing_visual_studio.txt
Q: Is there an elegant way to instantiate a variable type with parameters? This isn't legal: public class MyBaseClass { public MyBaseClass() {} public MyBaseClass(object arg) {} } public void ThisIsANoNo<T>() where T : MyBaseClass { T foo = new T("whoops!"); } In order to do this, you have to do some reflection on the type object for T or you have to use Activator.CreateInstance. Both are pretty nasty. Is there a better way? A: Nope. If you weren't passing in parameters, then you could constrain your type param to require a parameterless constructor. But, if you need to pass arguments you are out of luck. A: You can't constrain T to have a particular constructor signature other than an empty constructor, but you can constrain T to have a factory method with the desired signature: public abstract class MyBaseClass { protected MyBaseClass() {} protected abstract MyBaseClass CreateFromObject(object arg); } public void ThisWorksButIsntGreat<T>() where T : MyBaseClass, new() { T foo = new T().CreateFromObject("whoopee!") as T; } However, I would suggest perhaps using a different creational pattern such as Abstract Factory for this scenario. A: where T : MyBaseClass, new() only works w/ parameterless public constructor. beyond that, back to activator.CreateInstance (which really isn't THAT bad).
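For completeness, the Activator.CreateInstance route the question calls nasty can at least be confined to one helper; this is a sketch, and it still fails at runtime rather than compile time if T lacks a matching constructor:

    public static T Create<T>(object arg) where T : MyBaseClass
    {
        return (T)Activator.CreateInstance(typeof(T), arg);
    }

    // usage inside the generic method:
    // T foo = Create<T>("whoops!");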
Is there an elegant way to instantiate a variable type with parameters?
This isn't legal: public class MyBaseClass { public MyBaseClass() {} public MyBaseClass(object arg) {} } public void ThisIsANoNo<T>() where T : MyBaseClass { T foo = new T("whoops!"); } In order to do this, you have to do some reflection on the type object for T or you have to use Activator.CreateInstance. Both are pretty nasty. Is there a better way?
[ "Nope. If you weren't passing in parameters, then you could constrain your type param to require a parameterless constructor. But, if you need to pass arguments you are out of luck.\n", "You can't constrain T to have a particular constructor signature other than an empty constructor, but you can constrain T to have a factory method with the desired signature:\npublic abstract class MyBaseClass\n{\n protected MyBaseClass() {}\n protected abstract MyBaseClass CreateFromObject(object arg);\n}\n\npublic void ThisWorksButIsntGreat<T>() where T : MyBaseClass, new()\n{\n T foo = new T().CreateFromObject(\"whoopee!\") as T;\n}\n\nHowever, I would suggest perhaps using a different creational pattern such as Abstract Factory for this scenario.\n", "where T : MyBaseClass, new()\n\nonly works w/ parameterless public constructor. beyond that, back to activator.CreateInstance (which really isn't THAT bad).\n" ]
[ 2, 1, 0 ]
[ "I can see that not working.\nBut what is stopping you from doing this?\npublic void ThisIsANoNo<T>() where T : MyBaseClass\n{\n MyBaseClass foo = new MyBaseClass(\"whoops!\");\n}\n\nSince everything is going to inherit from MyBaseClass they will al be MyBaseClass, right?\nI tried it and this works.\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\n\nnamespace ConsoleApplication1\n{\n class Program\n {\n static void Main(string[] args)\n {\n ThisIsANoNo<MyClass>();\n ThisIsANoNo<MyBaseClass>();\n }\n\n public class MyBaseClass\n {\n public MyBaseClass() { }\n public MyBaseClass(object arg) { }\n }\n\n public class MyClass :MyBaseClass\n {\n public MyClass() { }\n public MyClass(object arg, Object arg2) { }\n }\n\n public static void ThisIsANoNo<T>() where T : MyBaseClass\n {\n MyBaseClass foo = new MyBaseClass(\"whoops!\");\n }\n }\n}\n\n" ]
[ -2 ]
[ ".net", "generics", "instantiation" ]
stackoverflow_0000096541_.net_generics_instantiation.txt
Q: How do I append to an alist in scheme? Adding an element to the head of an alist (Associative list) is simple enough: > (cons '(ding . 53) '((foo . 42) (bar . 27))) ((ding . 53) (foo . 42) (bar . 27)) Appending to the tail of an alist is a bit trickier though. After some experimenting, I produced this: > (define (alist-append alist pair) `(,@alist ,pair)) > (alist-append '((foo . 42) (bar . 27)) '(ding . 53)) '((foo . 42) (bar . 27) (ding . 53)) However, it seems to me, that this isn't the idiomatic solution. So how is this usually done in scheme? Or is this in fact the way? A: Common Lisp defines a function called ACONS for exactly this purpose, where (acons key value alist) is equivalent to: (cons (cons key value) alist) This strongly suggests that simply consing onto an alist is idiomatic. Note that this means two things: As searches are usually performed from front to back, recently added associations take precedence over older ones. This can be used for a naive implementation of both lexical and dynamic environments. While consing onto a list is O(1), appending is generally O(n) where n is the length of the list, so the idiomatic usage is best for performance as well as being stylistically preferable. A: You don't append to an a-list. You cons onto an a-list. An a-list is logically a set of associations. You don't care about the order of elements in a set. All you care about is presence or absence of a particular element. In the case of an a-list, all you care about is whether there exists an association for a given tag (i.e., a pair whose CAR is the specified value), and, given that association, the associated value (i.e., in this implementation, the CDR of the pair).
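A sketch of the idiomatic operation the answers describe, written out in Scheme (Common Lisp has acons built in; here it is defined by hand):

    ;; "appending" to an alist is just consing a new association on;
    ;; assq/assoc then see the newest binding first
    (define (acons key value alist)
      (cons (cons key value) alist))

    (acons 'ding 53 '((foo . 42) (bar . 27)))
    ;; => ((ding . 53) (foo . 42) (bar . 27))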
How do I append to an alist in scheme?
Adding an element to the head of an alist (Associative list) is simple enough: > (cons '(ding . 53) '((foo . 42) (bar . 27))) ((ding . 53) (foo . 42) (bar . 27)) Appending to the tail of an alist is a bit trickier though. After some experimenting, I produced this: > (define (alist-append alist pair) `(,@alist ,pair)) > (alist-append '((foo . 42) (bar . 27)) '(ding . 53)) '((foo . 42) (bar . 27) (ding . 53)) However, it seems to me that this isn't the idiomatic solution. So how is this usually done in scheme? Or is this in fact the way?
[ "Common Lisp defines a function called ACONS for exactly this purpose, where\n(acons key value alist)\n\nis equivalent to:\n(cons (cons key value) alist)\n\nThis strongly suggests that simply consing onto an alist is idiomatic. Note that this means two things:\n\nAs searches are usually performed from front to back, recently added associations take precedence over older ones. This can be used for a naive implementation of both lexical and dynamic environments.\nWhile consing onto a list is O(1), appending is generally O(n) where n is the length of the list, so the idiomatic usage is best for performance as well as being stylistically preferable.\n\n", "You don't append to an a-list. You cons onto an a-list.\nAn a-list is logically a set of associations. You don't care about the order of elements in a set. All you care about is presence or absence of a particular element. In the case of an a-list, all you care about is whether there exists an association for a given tag (i.e., a pair whose CAR is the specified value), and, given that association, the associated value (i.e., in this implementation, the CDR of the pair).\n" ]
[ 8, 5 ]
[]
[]
[ "associative", "lisp", "list", "scheme" ]
stackoverflow_0000096249_associative_lisp_list_scheme.txt
Q: Is there anything wrong with returning default constructed values? Suppose I have the following code: class some_class{}; some_class some_function() { return some_class(); } This seems to work pretty well and saves me the trouble of having to declare a variable just to make a return value. But I don't think I've ever seen this in any kind of tutorial or reference. Is this a compiler-specific thing (visual C++)? Or is this doing something wrong? A: No this is perfectly valid. This will also be more efficient as the compiler is actually able to optimise away the temporary. A: Returning objects from a function call is the "Factory" Design Pattern, and is used extensively. However, you will want to be careful whether you return objects, or pointers to objects. The former of these will introduce you to copy constructors / assignment operators, which can be a pain. A: It is valid, but performance may not be ideal depending on how it is called. For example: A a; a = fn(); and A a = fn(); are not the same. In the first case the default constructor is called, and then the assignment operator is invoked on a which requires a temporary variable to be constructed. In the second case the copy constructor is used. An intelligent enough compiler will work out what optimizations are possible. But, if the copy constructor is user supplied then I don't see how the compiler can optimize out the temporary variable. It has to invoke the copy constructor, and to do that it has to have another instance. A: The difference between the two cases in Rob Walker's example is called Return Value Optimisation (RVO) if you want to google for it. Incidentally, if you want to ensure your object gets returned in the most efficient manner, create the object on the heap (ie via new) using a shared_ptr and return a shared_ptr instead. The pointer gets returned and reference counts correctly. A: That is perfectly reasonable C++. A: This is perfectly legal C++ and any compiler should accept it. What makes you think it might be doing something wrong? A: That's the best way to do it if your class is pretty lightweight - I mean that it isn't very expensive to make a copy of it. One side effect of that method though is that it does tend to make it more likely to have temporary objects created, although that can depend on how well the compiler can optimize things. For more heavyweight classes that you want to make sure are not copied (say for example a large bitmap image) then it is a good idea to pass stuff like that around as a reference parameter which then gets filled in, just to make absolutely sure that there won't be any temporary objects created. Overall it can happen that simplifying syntax and making things more direct can have a side effect of creating more temporary objects in expressions, just something that you should keep in mind when designing the interfaces for more heavyweight objects.
Is there anything wrong with returning default constructed values?
Suppose I have the following code: class some_class{}; some_class some_function() { return some_class(); } This seems to work pretty well and saves me the trouble of having to declare a variable just to make a return value. But I don't think I've ever seen this in any kind of tutorial or reference. Is this a compiler-specific thing (visual C++)? Or is this doing something wrong?
[ "No this is perfectly valid. This will also be more efficient as the compiler is actually able to optimise away the temporary.\n", "Returning objects from a function call is the \"Factory\" Design Pattern, and is used extensively.\nHowever, you will want to be careful whether you return objects, or pointers to objects. The former of these will introduce you to copy constructors / assignment operators, which can be a pain.\n", "It is valid, but performance may not be ideal depending on how it is called.\nFor example:\nA a;\na = fn();\n\nand \nA a = fn();\n\nare not the same.\nIn the first case the default constructor is called, and then the assignment operator is invoked on a which requires a temporary variable to be constructed.\nIn the second case the copy constructor is used.\nAn intelligent enough compiler will work out what optimizations are possible. But, if the copy constructor is user supplied then I don't see how the compiler can optimize out the temporary variable. It has to invoke the copy constructor, and to do that it has to have another instance.\n", "The difference between Rob Walker's example is called Return Value Optimisation (RVO) if you want to google for it.\nIncidentally, if you want to enure your object gets returned in the most efficient manner, create the object on the heap (ie via new) using a shared_ptr and return a shared_ptr instead. The pointer gets returned and reference counts correctly.\n", "That is perfectly reasonable C++.\n", "This is perfectly legal C++ and any compiler should accept it. What makes you think it might be doing something wrong?\n", "That's the best way to do it if your class is pretty lightweight - I mean that it isn't very expensive to make a copy of it.\nOne side effect of that method though is that it does tend to make it more likely to have temporary objects created, although that can depend on how well the compiler can optimize things.\nFor more heavyweight classes that you want to make sure are not copied (say for example a large bitmap image) then it is a good idea to pass stuff like that around as a reference parameter which then gets filled in, just to make absolutely sure that there won't be any temporary objects created.\nOverall it can happen that simplifying syntax and making things turned more directly can have a side effect of creating more temporary objects in expressions, just something that you should keep in mind when designing the interfaces for more heavyweight objects.\n" ]
[ 16, 5, 2, 2, 1, 1, 1 ]
[]
[]
[ "c++", "constructor", "oop", "visual_c++" ]
stackoverflow_0000096500_c++_constructor_oop_visual_c++.txt
Q: What are the best remoting technologies for mobile applications? I have a java back-end that needs to expose services to clients running in the following environments: J2ME Windows Mobile iPhone I am looking for the best tool for each platform. I am not looking for a technology that works everywhere. I need something "light" adapted to low speed internet access. Right now I am using SOAP. It is verbose and not easy to parse on the mobile. The problem is that I have not seen any real alternative. Is there a format that works "out of the box" with one of these platforms? I would rather not use a bloated library that will increase tremendously the download time of the application. Everybody seems to agree on JSON. Has anyone implemented a solution based on JSON running with Objective-C, J2ME, Windows Mobile? Note: so far the best solution seems to be Hessian. It works well on Windows Mobile and Objective-C/iPhone. The big problem is J2ME. The J2ME implementation of Hessian has serious limitations. It does not support complex objects. I had written another question about it. If you have any ideas, they are very welcome. A: JSON is fairly compact, and supported by most frameworks. You can transfer data over HTTP using standard REST techniques. There are JSON libraries for Java, Objective C, and many other languages (scroll down). You should have no problem finding framework support on the server side, because JSON is used for web applications. Older alternatives include plain XML and XML-RPC (like SOAP, but much simpler, and with libraries for most languages). A: Hessian. http://hessian.caucho.com. Implementations in multiple languages (including ObjC), super light weight, and doesn't require reliance on dom/xml parsers for translation from wire to object models. Once we found Hessian, we forgot we ever knew XML. A: REST + XML or JSON would be a good alternative. It is making big strides in the RIA world and the beauty of it is in its simplicity. It is very easy to use without needing any special tooling. SOAP has its strong points, but it works best in an environment with strong tooling support for it. I'm guessing from your question that's not the case. A: Seconding JSON. I ported the Stringtree JSON reader to J2ME. It's a single class JSON reader that compiles into a 5KB class file, and directly maps the JSON structure into native CLDC types like Hashtable and Vector. Now I can use the same server for both my desktop browser AJAX frontend and my J2ME client. A: How about plain old XML (somewhat unfortunately referred to as POX)? Another very useful option would be JSON. There are libraries for every single programming language out there. Possibly, since you are working in an environment that is constrained in terms of both computing and networking resources, and with a statically typed language, Google’s protocol buffers would be preferable for you. (Just disregard the RPC crud in there; RPC is an attractive nuisance, not a useful technology.) The problem with your question is that you haven’t provided a whole lot of context about what kind of data this is and what your use cases are, so it’s hard to speak in anything but very vague generalities.
What are the best remoting technologies for mobile applications?
I have a java back-end that needs to expose services to clients running in the following environments: J2ME Windows Mobile iPhone I am looking for the best tool for each platform. I am not looking for a technology that works everywhere. I need something "light" adapted to low speed internet access. Right now I am using SOAP. It is verbose and not easy to parse on the mobile. The problem is that I have not seen any real alternative. Is there a format that works "out of the box" with one of these platforms? I would rather not use a bloated library that will increase tremendously the download time of the application. Everybody seems to agree on JSON. Has anyone implemented a solution based on JSON running with Objective-C, J2ME, Windows Mobile? Note: so far the best solution seems to be Hessian. It works well on Windows Mobile and Objective-C/iPhone. The big problem is J2ME. The J2ME implementation of Hessian has serious limitations. It does not support complex objects. I had written another question about it. If you have any ideas, they are very welcome.
[ "JSON is fairly compact, and supported by most frameworks. You can transfer data over HTTP using standard REST techniques.\nThere are JSON libraries for Java, Objective C, and many other languages (scroll down). You should have no problem finding framework support on the server side, because JSON is used for web applications.\nOlder alternatives include plain XML and XML-RPC (like SOAP, but much simpler, and with libraries for most languages).\n", "Hessian. http://hessian.caucho.com. Implementations in multiple languages (including ObjC), super light weight, and doesn't require reliance on dom/xml parsers for translation from wire to object models. Once we found Hessian, we forgot we ever knew XML.\n", "REST + XML or JSON would be a good alternative. It is making big strides in the RIA world and the beauty of it is in it's simplicity. It is very easy to use without needing any special tooling. SOAP has it's strong points, but it works best in an environment with strong tooling support for it. I'm guessing from your question that's not the case.\n", "Seconding JSON. I ported the Stringtree JSON reader to J2ME. It's a single class JSON reader that compiles into a 5KB class file, and directly maps the JSON structure into native CLDC types like Hashtable and Vector. Now I can use the same server for both my desktop browser AJAX frontend and my J2ME client.\n", "How about plain old XML (somewhat unfortunately referred to as POX)?\nAnother very useful option would be JSON. There are libraries for every single programming language out there.\nPossibly, since you are working in an environment that is constrained in terms of both computing and networking resources, and with a statically typed language, Google’s protocol buffers would be preferrable for you. (Just disregard the RPC crud in there; RPC is an attractive nuisance, not a useful technology.)\nThe problem with your question is that you haven’t provided a whole lot of context about what kind of data this is and what your use cases are, so it’s hard to speak in anything but very vague generalities.\n" ]
[ 9, 5, 2, 2, 1 ]
[]
[]
[ "iphone", "java", "java_me", "mobile", "windows_mobile" ]
stackoverflow_0000082691_iphone_java_java_me_mobile_windows_mobile.txt
Q: Organizing Extension Methods How do you organize your Extension Methods? Say if I had extensions for the object class and the string class, I'm tempted to separate these extension methods into classes, i.e.: public class ObjectExtensions { ... } public class StringExtensions { ... } Am I making this too complicated or does this make sense? A: I organize extension methods using a combination of namespace and class name, and it's similar to the way you describe in the question. Generally I have some sort of "primary assembly" in my solution that provides the majority of the shared functionality (like extension methods). We'll call this assembly "Framework" for the sake of discussion. Within the Framework assembly, I try to mimic the namespaces of the things for which I have extension methods. For example, if I'm extending System.Web.HttpApplication, I'd have a "Framework.Web" namespace. Classes like "String" and "Object," being in the "System" namespace, translate to the root "Framework" namespace in that assembly. Finally, naming goes along the lines you've specified in the question - the type name with "Extensions" as a suffix. This yields a class hierarchy like this: Framework (namespace) Framework.ObjectExtensions (class) Framework.StringExtensions (class) Framework.Web (namespace) Framework.Web.HttpApplicationExtensions (class) The benefit is that, from a maintenance perspective, it's really easy later to go find the extension methods for a given type. A: There are two ways that I organize the extension methods which I use, 1) If the extension is specific to the project I am working on, then I keep it in the same project/assembly, but in its own namespace. 2) If the extension is of a kind that I may be using in other projects too, then I separate them in a common assembly for extensions. The most important thing to keep in mind is, what is the scope which I will be using these in? Organizing them isn't hard if I just keep this in mind.
Organizing Extension Methods
How do you organize your Extension Methods? Say if I had extensions for the object class and the string class, I'm tempted to separate these extension methods into classes, i.e.: public class ObjectExtensions { ... } public class StringExtensions { ... } Am I making this too complicated or does this make sense?
[ "I organize extension methods using a combination of namespace and class name, and it's similar to the way you describe in the question.\nGenerally I have some sort of \"primary assembly\" in my solution that provides the majority of the shared functionality (like extension methods). We'll call this assembly \"Framework\" for the sake of discussion.\nWithin the Framework assembly, I try to mimic the namespaces of the things for which I have extension methods. For example, if I'm extending System.Web.HttpApplication, I'd have a \"Framework.Web\" namespace. Classes like \"String\" and \"Object,\" being in the \"System\" namespace, translate to the root \"Framework\" namespace in that assembly.\nFinally, naming goes along the lines you've specified in the question - the type name with \"Extensions\" as a suffix. This yields a class hierarchy like this:\n\nFramework (namespace)\n\n\nFramework.ObjectExtensions (class)\nFramework.StringExtensions (class)\nFramework.Web (namespace)\n\n\nFramework.Web.HttpApplicationExtensions (class)\n\n\n\nThe benefit is that, from a maintenance perspective, it's really easy later to go find the extension methods for a given type.\n", "There are two ways that I organize the extension methods which I use,\n1) If the extension is specific to the project I am working on, then I keep it in the same project/assembly, but in its own namespace.\n2) If the extension is of a kind so that I may or is using it in other projects too, then I separate them in a common assembly for extensions.\nThe most important thing to keep in mind is, what is the scope which I will be using these in? Organizing them isn't hard if I just keep this in mind.\n" ]
[ 13, 3 ]
[]
[]
[ "c#", "code_organization", "extension_methods" ]
stackoverflow_0000096718_c#_code_organization_extension_methods.txt
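The accepted answer in the record above describes its layout convention without showing it in code, so here is a brief hypothetical sketch of that scheme — the method body is an invented example; only the naming pattern (type name + "Extensions" suffix, in a namespace mirroring the extended type) comes from the answer.

namespace Framework
{
    // String lives in System, so its extensions go in the root namespace.
    public static class StringExtensions
    {
        // Extension methods must live in a static class and take a 'this' parameter.
        public static bool IsBlank(this string value)
        {
            return value == null || value.Trim().Length == 0;
        }
    }
}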
Q: How to associate the cn in an ssl cert of pyOpenSSL verify_cb to a generated socket I am a little new to pyOpenSSL. I am trying to figure out how to associate the generated socket to an ssl cert. verify_cb gets called, which gives me access to the cert and a conn, but how do I associate those things when this happens: cli,addr = self.server.accept() A: After the handshake is complete, you can get the client certificate. While the client certificate is also available in the verify callback (verify_cb), there's not really any reason to try to do anything aside from verify the certificate in that callback. Setting up an application-specific mapping is better done after the handshake has completed successfully. So, consider using the OpenSSL.SSL.Connection instance returned by the accept method to get the certificate (and from there, the commonName) and associate it with the connection object at that point. For example, client, clientAddress = self.server.accept() client.do_handshake() commonNamesToConnections[client.get_peer_certificate().commonName] = client You might want to check the mapping to make sure you're not overwriting any existing connection (perhaps using a list of connections instead of just mapping each common name to one). And of course you need to remove entries when connections are lost. The `do_handshake` call forces the handshake to actually happen. Without this, the handshake will happen when application data is first transferred over the connection. That's fine, but it would make setting up this mapping slightly more complicated.
How to associate the cn in an ssl cert of pyOpenSSL verify_cb to a generated socket
I am a little new to pyOpenSSL. I am trying to figure out how to associate the generated socket to an ssl cert. verify_cb gets called, which gives me access to the cert and a conn, but how do I associate those things when this happens: cli,addr = self.server.accept()
[ "After the handshake is complete, you can get the client certificate. While the client certificate is also available in the verify callback (verify_cb), there's not really any reason to try to do anything aside from verify the certificate in that callback. Setting up an application-specific mapping is better done after the handshake has completely successfully. So, consider using the OpenSSL.SSL.Connection instance returned by the accept method to get the certificate (and from there, the commonName) and associate it with the connection object at that point. For example,\nclient, clientAddress = self.server.accept()\nclient.do_handshake()\ncommonNamesToConnections[client.get_peer_certificate().commonName] = client\n\nYou might want to check the mapping to make sure you're not overwriting any existing connection (perhaps using a list of connections instead of just mapping each common name to one). And of course you need to remove entries when connections are lost.\nThe `do_handshake´ call forces the handshake to actually happen. Without this, the handshake will happen when application data is first transferred over the connection. That's fine, but it would make setting up this mapping slightly more complicated.\n" ]
[ 5 ]
[]
[]
[ "pyopenssl", "python" ]
stackoverflow_0000096508_pyopenssl_python.txt
Q: Change ContextMenu Font Size in C# Is it possible to change the font size used in a ContextMenu using the .NET Framework 3.5 and C# for a desktop application? It seems it's a system-wide setting, but I would like to change it only within my application. A: If you are defining your own context menu via a ContextMenuStrip in Windows Forms, use the Font property. If you are defining your own context menu via a ContextMenu in WPF, use the various Fontxxx properties such as FontFamily and FontSize. You cannot change the default context menus that come with controls; those are determined by system settings. So if you want the "Copy/Cut/Paste/etc." menu with a custom font size for a WinForms TextBox, you'll have to create a ContextMenuStrip with the appropriate font size and assign it to the TextBox's ContextMenuStrip property. A: In WPF: <Window.ContextMenu FontSize="36"> <!-- ... --> </Window.ContextMenu> In WinForms: contextMenuStrip1.Font = new System.Drawing.Font("Segoe UI", 24F); A: You can change the font size of a System.Windows.Forms.ContextMenuStrip. If you need to change the font size of the default Cut/Copy/Paste context menu on text boxes I guess you need to set the ContextMenu property to a custom menu that replaces the default menu. A: You mention .NET 3.5 - are you writing in WPF? If so, you can specify font size for the TextBlock.FontSize attached property <Whatever.ContextMenu TextBlock.FontSize="12"> <MenuItem ... /> <!-- Will get the font size from parent --> </Whatever.ContextMenu> Or, you could specify it in a style that affects all menu items <Style TargetType="MenuItem"> <Setter Property="TextBlock.FontSize" Value="12" /> </Style> Of course, it's always better to let the system setting determine the font size. Some people may have changed it to better fit their physical condition (like poor eye sight) or hardware (big/small screen). Whatever you force in your code will be the wrong choice for some people, while you give them no way to change it.
Change ContextMenu Font Size in C#
Is it possible to change the font size used in a ContextMenu using the .NET Framework 3.5 and C# for a desktop application? It seems it's a system-wide setting, but I would like to change it only within my application.
[ "If you are defining your own context menu via a ContextMenuStrip in Windows Forms, use the Font property.\nIf you are defining your own context menu via a ContextMenu in WPF, use the various Fontxxx properties such as FontFamily and FontSize.\nYou cannot change the default context menus that come with controls; those are determined by system settings. So if you want the \"Copy/Cut/Paste/etc.\" menu with a custom font size for a WinForms TextBox, you'll have to create a ContextMenuStrip with the appropriate font size and assign it to the TextBox's ContextMenuStrip property.\n", "In WPF:\n<Window.ContextMenu FontSize=\"36\">\n <!-- ... -->\n</Window.ContextMenu\n\nIn WinForms:\ncontextMenuStrip1.Font = new System.Drawing.Font(\"Segoe UI\", 24F);\n\n", "You can change the font size of a System.Windows.Forms.ContextMenuStrip.\nIf you need to change the font size of the default Cut/Copy/Paste context menu on text boxes I guess you need to set the ContextMenu property to a custom menu that replaces the default menu.\n", "You mention .NET 3.5 - are you writing in WPF? If so, you can specify font size for the TextBlock.FontSize attached property\n<Whatever.ContextMenu TextBlock.FontSize=\"12\">\n <MenuItem ... /> <!-- Will get the font size from parent -->\n</Whatever.ContextMenu>\n\nOr, you could specify it in a style that affects all menu items\n<Style TargetType=\"MenuItem\">\n <Setter Property=\"TextBlock.FontSize\" Value=\"12\" />\n</Style>\n\nOf course, it's always better to let the system setting determine the font size. Some people may have changed it to better fit their physical condition (like poor eye sight) or hardware (big/small screen). Whatever you force in your code will be the wrong choice for some people, while you give them no way to change it.\n" ]
[ 6, 2, 1, 0 ]
[]
[]
[ ".net", "c#", "contextmenu", "font_size" ]
stackoverflow_0000096525_.net_c#_contextmenu_font_size.txt
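A short WinForms sketch of the workaround described in the first answer above — build your own ContextMenuStrip with the font you want and attach it to the control. The menu items and font choice here are placeholders.

// assumes using System.Drawing; using System.Windows.Forms;
private void AttachCustomMenu(TextBox textBox)
{
    var menu = new ContextMenuStrip();
    menu.Font = new Font("Segoe UI", 12F);
    menu.Items.Add("Copy", null, (s, e) => textBox.Copy());
    menu.Items.Add("Paste", null, (s, e) => textBox.Paste());
    textBox.ContextMenuStrip = menu; // replaces the system default menu
}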
Q: .NET Compact Framework Printing libraries Can anyone point to libraries that can be used for Printing from Compact .Net Framework 1.0? Criteria: I need to be able to print Text and Bar codes. The library should preferably be upgradable to .Net 2.0 or above with minimal disruption. Can be either Open Source [that can be distributed as part of Commercial application] or that can be purchased. Edit More information: We are an ISV and this application is sold to our customers. This application is usually installed on Symbol, Opticon devices. But occasionally this is installed on a generic Windows Mobile PDA or Phone devices. I want the library to work with Printers from multiple vendors. [I now have printers from O'Neil and Citizen-Systems for testing]. We want the printers to be connected using bluetooth. I guess the library should in general work with any serial port connections. PrinterCE.NetCF from FieldSoftware appears to fit the bill. Thanks ctacke. I am looking for something similar. Thanks, Kishore A: You've not given us much detail, like the device you're using or the printer type you want to print to (local, lan, serial, network, etc), however I'll see if I can at least point you in the right direction. The de-facto standard for CF printing is PrinterCE from Field Software. PrintBoy from Bachmann Software also works well. I'm not certain if either has the ability to print barcodes though. Now if you're printing barcodes, that suggests that you're using a device like a Symbol (now Motorola) or Intermec handheld. If that is the case then those manufacturers have their own SDKs that allow printing. If you are printing to something like a Zebra barcode printer, they typically have some serial PCL commands for printing barcodes as well, so you don't actually need to "print" the barcode. Instead you send the PCL command to tell the printer that the data should be output as a barcode instead of text. The printer manufacturer can provide a PCL reference, as the PCL for these types of things isn't standardized.
.NET Compact Framework Printing libraries
Can anyone point to libraries that can be used for Printing from Compact .Net Framework 1.0? Criteria: I need to be able to print Text and Bar codes. The library should preferably be upgradable to .Net 2.0 or above with minimal disruption. Can be either Open Source [that can be distributed as part of Commercial application] or that can be purchased. Edit More information: We are an ISV and this application is sold to our customers. This application is usually installed on Symbol, Opticon devices. But occasionally this is installed on a generic Windows Mobile PDA or Phone devices. I want the library to work with Printers from multiple vendors. [I now have printers from O'Neil and Citizen-Systems for testing]. We want the printers to be connected using bluetooth. I guess the library should in general work with any serial port connections. PrinterCE.NetCF from FieldSoftware appears to fit the bill. Thanks ctacke. I am looking for something similar. Thanks, Kishore
[ "You've not given us much detail, like the device you're using or the printer type you want to print to (local, lan, serial, network, etc), however I'll see if I can at least point you in the right direction.\nThe de-facto standard for CF printing is PrinterCE from Field Software. PrintBoy from Bachmann Software also works well. I'm not certain if eitehr has the ability to print barcodes though.\nNow if you're printing barcodes, that suggests that you're using a device like a Symbol (now Motorola) or Intermec handheld. If that is the case then those manufacturers have their own SDKs that allow printing.\nIf you are printing to something like a Zebra barcode printer, they typically have some serial PCL commands for printing barcodes as well, so you don't actually need to \"print\" the barcode. Instead you send the PCL command to tell the printer that the data should be output a barcode instead of text. The printer manufacturer can provide a PCL reference, as the PCL for these types of things isn't standardized.\n" ]
[ 7 ]
[]
[]
[ "c#", "compact_framework", "windows_mobile" ]
stackoverflow_0000096218_c#_compact_framework_windows_mobile.txt
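Since the answer above suggests sending raw PCL-style commands to the printer, here is a hedged sketch of what that might look like over a serial (or Bluetooth-as-COM) port. It assumes .NET CF 2.0's System.IO.Ports is available, and the escape sequence is a made-up placeholder — the real barcode command must come from your printer vendor's command reference.

// assumes using System.IO.Ports; using System.Text;
static void PrintSample()
{
    using (SerialPort port = new SerialPort("COM1", 9600))
    {
        port.Open();
        port.Write("Receipt line 1\r\n"); // plain text prints as-is on most printers
        // Placeholder sequence; substitute the vendor-documented barcode command.
        byte[] barcode = Encoding.ASCII.GetBytes("\u001bBC123456789\r\n");
        port.Write(barcode, 0, barcode.Length);
    }
}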
Q: Using the javax.script package for javascript with an external src attribute Say I have some javascript that if run in a browser would be typed like this... <script type="text/javascript" src="http://someplace.net/stuff.ashx"></script> <script type="text/javascript"> var stuff = null; stuff = new TheStuff('myStuff'); </script> ... and I want to use the javax.script package in java 1.6 to run this code within a jvm (not within an applet) and get the stuff. How do I let the engine know the source of the classes to be constructed is found within the remote .ashx file? For instance, I know to write the java code as... ScriptEngineManager mgr = new ScriptEngineManager(); ScriptEngine engine = mgr.getEngineByName("JavaScript"); engine.eval( "stuff = new TheStuff('myStuff');" ); Object obj = engine.get("stuff"); ...but the "JavaScript" engine doesn't know anything by default about the TheStuff class because that information is in the remote .ashx file. Can I make it look to the above src string for this? A: It seems like you're asking: How can I get ScriptEngine to evaluate the contents of a URL instead of just a string? Is that accurate? ScriptEngine doesn't provide a facility for downloading and evaluating the contents of a URL, but it's fairly easy to do. ScriptEngine allows you to pass in a Reader object that it will use to read the script. Try something like this: URL url = new URL( "http://someplace.net/stuff.ashx" ); InputStreamReader reader = new InputStreamReader( url.openStream() ); engine.eval( reader ); A: Are you trying to access the javascript object in the browser page from a java 1.6 applet? If so, you're going about it in the wrong way. That's not what the scripting engine's for. It's for running javascript within a jvm, not for an applet to access javascript from within a browser. Here's a blog entry that might get you somewhere, but it doesn't look like there's much support.
Using the javax.script package for javascript with an external src attribute
Say I have some javascript that if run in a browser would be typed like this... <script type="text/javascript" src="http://someplace.net/stuff.ashx"></script> <script type="text/javascript"> var stuff = null; stuff = new TheStuff('myStuff'); </script> ... and I want to use the javax.script package in java 1.6 to run this code within a jvm (not within an applet) and get the stuff. How do I let the engine know the source of the classes to be constructed is found within the remote .ashx file? For instance, I know to write the java code as... ScriptEngineManager mgr = new ScriptEngineManager(); ScriptEngine engine = mgr.getEngineByName("JavaScript"); engine.eval( "stuff = new TheStuff('myStuff');" ); Object obj = engine.get("stuff"); ...but the "JavaScript" engine doesn't know anything by default about the TheStuff class because that information is in the remote .ashx file. Can I make it look to the above src string for this?
[ "It seems like you're asking:\n\nHow can I get ScriptEngine to evaluate the contents of a URL instead of just a string?\n\nIs that accurate?\nScriptEngine doesn't provide a facility for downloading and evaluating the contents of a URL, but it's fairly easy to do. ScriptEngine allows you to pass in a Reader object that it will use to read the script.\nTry something like this:\nURL url = new URL( \"http://someplace.net/stuff.ashx\" );\nInputStreamReader reader = new InputStreamReader( url.openStream() );\nengine.eval( reader );\n\n", "Are you trying to access the javascript object in the browser page from a java 1.6 applet? If so, you're going about it in the wrong way. That's not what the scripting engine's for. It's for running javascript within a jvm, not for an applet to accesses javascript from with in a browser.\nHere's a blog entry that might get you somewhere, but it doesn't look like there's much support.\n" ]
[ 2, 0 ]
[]
[]
[ "java", "javascript", "javax.script" ]
stackoverflow_0000094582_java_javascript_javax.script.txt
Q: Displaying the correct size in Windows' Add/Remove Programs I have a need to manually set up the registry settings for an entry in Windows' Add/Remove Programs (for XP and Vista). Everything works except for the displayed size. According to this 2004 post by Raymond Chen, it should be possible by setting the EstimatedSize registry value, but it doesn't work. This more recent MSDN page says the EstimatedSize value is "Determined and set by the Windows Installer." Does anyone know how I can manually set the size value outside the Windows Installer? (Suggestions to use a single large MSI are appreciated but we have done that in the past and it's proven difficult and inflexible. Our current approach is a custom application to manage hundreds of smaller MSI packages, but this means the application itself has to write out the registry settings for Add/Remove Programs.) A: you could try building the sub-projects into msm (merge modules) and then linking the lot into a single msi - you get the benefits of having individual modules, and a single msi that way.
Displaying the correct size in Windows' Add/Remove Programs
I have a need to manually set up the registry settings for an entry in Windows' Add/Remove Programs (for XP and Vista). Everything works except for the displayed size. According to this 2004 post by Raymond Chen, it should be possible by setting the EstimatedSize registry value, but it doesn't work. This more recent MSDN page says the EstimatedSize value is "Determined and set by the Windows Installer." Does anyone know how I can manually set the size value outside the Windows Installer? (Suggestions to use a single large MSI are appreciated but we have done that in the past and it's proven difficult and inflexible. Our current approach is a custom application to manage hundreds of smaller MSI packages, but this means the application itself has to write out the registry settings for Add/Remove Programs.)
[ "you could try building the sub-projects into msm (merge modules) and then linking the lot into a single msi - you get the benefits of having individual modules, and a single msi that way.\n" ]
[ 0 ]
[]
[]
[ "windows_installer" ]
stackoverflow_0000096653_windows_installer.txt
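For reference, a minimal C# sketch of writing an Add/Remove Programs entry by hand, including EstimatedSize (a DWORD, in kilobytes). "MyProduct" and the paths are placeholders, and — as the question itself found — whether the displayed size honors a hand-written EstimatedSize varies by Windows version.

// assumes using Microsoft.Win32;
static void RegisterUninstallEntry()
{
    const string keyPath = @"Software\Microsoft\Windows\CurrentVersion\Uninstall\MyProduct";
    using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
    {
        key.SetValue("DisplayName", "My Product");
        key.SetValue("UninstallString", @"C:\Program Files\MyProduct\uninstall.exe");
        key.SetValue("EstimatedSize", 204800, RegistryValueKind.DWord); // value is in KB; ~200 MB
    }
}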
Q: What's the name of Visual Studio Import UI Widget (picture inside) What's the name of the circled UI element here? And how do I access it using keyboard shortcuts? Sometimes it's nearly impossible to get the mouse to focus on it. catch (ItemNotFoundException e) { } A: I don't know the name, but the shortcuts are CTRL-period (.) and ALT-SHIFT-F10. Handy to know :) A: It's called a SmartTag
What's the name of Visual Studio Import UI Widget (picture inside)
What's the name of the circled UI element here? And how do I access it using keyboard shortcuts? Sometimes it's nearly impossible to get the mouse to focus on it. catch (ItemNotFoundException e) { }
[ "I don't know the name, but the shortcuts are CTRL-period (.) and ALT-SHIFT-F10. Handy to know :)\n", "It's called a SmartTag\n" ]
[ 6, 4 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000096923_visual_studio.txt
Q: What are appropriate library naming conventions? There are two popular naming conventions: vc90/win64/debug/foo.dll foo-vc90-win64-debug.dll Please discuss the problems/benefits associated with either approach. I am also wondering if it is possible to expose meta-data (i.e. compiler, platform, build-type) in approach #1 in an easy to use, cross-platform manner. A: #2 is good for distribution, where several variations will be packaged in the same folder/zip file together. However, you probably don't want all that information in the file name itself, as it makes it difficult to vary those via parameters to your makefile/csproj/nant script etc. It would be easier to have several files called "foo" in different folders (where you can decide the folder structure) A: For .NET assemblies, you can store this information in the assembly itself: http://www.codinghorror.com/blog/archives/000142.html I'm not familiar enough with other assembly types to know what they provide.
What are appropriate library naming conventions?
There are two popular naming conventions: vc90/win64/debug/foo.dll foo-vc90-win64-debug.dll Please discuss the problems/benefits associated with either approach. I am also wondering if it is possible to expose meta-data (i.e. compiler, platform, build-type) in approach #1 in an easy to use, cross-platform manner.
[ "#2 is good for distribution, where several variation will be packaged in the same folder/zip file together. However, you probably don't want all that information in the file name itself, as it make it difficult to vary those via parameters to your makefile/csproj/nant script etc. It would be easier to have several files called \"foo\" in different folders (where you can decide the folder structure)\n", "For .NET assemblies, you can store this information in the assembly itself:\nhttp://www.codinghorror.com/blog/archives/000142.html\nI'm not familiar enough with other assembly types to know what they provide.\n" ]
[ 3, 0 ]
[]
[]
[ "naming_conventions" ]
stackoverflow_0000096811_naming_conventions.txt
Q: Widget notifying other widget(s) How should widgets in GWT inform other widgets to refresh themselves or perform some other action? Should I use sinkEvent / onBrowserEvent? And if so, is there a way to create custom Events? A: It's a very open ended question - for example, you could create your own static event Handler class which widgets subscribe themselves to. e.g: Class newMessageHandler { void update(Widget caller, Widget subscriber) { ... } } customEventHandler.addEventType("New Message", newMessageHandler); Widget w; customEventHandler.subscribe(w, "New Message"); ... Widget caller; // Fire "New Message" event for all widgets which have // subscribed customEventHandler.fireEvent(caller, "New Message"); Where customEventHandler keeps track of all widgets subscribing to each named event, and calls the update method on the named class, which could then call any additional methods you want. You might want to call unsubscribe in the destructor - but you could make it as fancy as you want. A: I have solved this problem using the Observer Pattern and a central Controller. The central controller is the only class that has knowledge of all widgets in the application and determines the way they fit together. If someone changes something on widget A, widget A fires an event. In the event handler you call the central controller through the 'notifyObservers()' call, which informs the central controller (and optionally others, but for simplicity I'm not going into that) that a certain action (passing a 'MyEvent' enum instance) has occurred. This way, application flow logic is contained in a single central class and widgets don't need a spaghetti of references to each other. A: So here is my (sample) implementation, first let's create a new event: import java.util.EventObject; import com.google.gwt.user.client.ui.Widget; public class NotificationEvent extends EventObject { public NotificationEvent(String data) { super(data); } } Then we create an event handler interface: import com.google.gwt.user.client.EventListener; public interface NotificationHandler extends EventListener { void onNotification(NotificationEvent event); } If we now have a widget implementing the NotificationHandler, we can trigger the event by calling: ((NotificationHandler)widget).onNotification(event);
Widget notifying other widget(s)
How should widgets in GWT inform other widgets to refresh themselves or perform some other action? Should I use sinkEvent / onBrowserEvent? And if so, is there a way to create custom Events?
[ "It's a very open ended question - for example, you could create your own static event Handler class which widgets subscribe themselves to. e.g:\nClass newMessageHandler {\n void update(Widget caller, Widget subscriber) {\n ...\n }\n}\n\ncustomEventHandler.addEventType(\"New Message\", newMessageHandler);\n\nWidget w;\ncustomEventHandler.subscribe(w, \"New Message\");\n\n...\nWidget caller;\n\n// Fire \"New Message\" event for all widgets which have\n// subscribed\ncustomEventHandler.fireEvent(caller, \"New Message\");\n\nWhere customEventHandler keeps track of all widgets subscribing to each named event, and calls the update method on the named class, which could then call any additional methods you want. You might want to call unsubscribe in the destructor - but you could make it as fancy as you want.\n", "I have solved this problem using the Observer Pattern and a central Controller. The central controller is the only class that has knowledge of all widgets in the application and determines the way they fit together. If someone changes something on widget A, widget A fires an event. In the eventhandler you call the central controller through the 'notifyObservers()' call, which informes the central controller (and optionally others, but for simplicity I'm not going into that) that a certain action (passing a 'MyEvent' enum instance) has occurred.\nThis way, application flow logic is contained in a single central class and widgets don't need a spaghetti of references to eachother. \n", "So here is my (sample) implementation,\nfirst let's create a new event:\nimport java.util.EventObject;\nimport com.google.gwt.user.client.ui.Widget;\n\npublic class NotificationEvent extends EventObject {\n public NotificationEvent(String data) {\n super(data);\n }\n}\n\nThen we create an event handler interface:\nimport com.google.gwt.user.client.EventListener;\n\npublic interface NotificationHandler extends EventListener {\n void onNotification(NotificationEvent event);\n}\n\nIf we now have a widget implementing the NotificationHanlder, we can\ntrigger the event by calling:\n((NotificationHandler)widget).onNotification(event);\n\n" ]
[ 3, 3, 1 ]
[]
[]
[ "gwt" ]
stackoverflow_0000083106_gwt.txt
Q: Many to Many Relationship in MS Dynamics CRM 4.0 - How to? I'm working on a MS CRM server for a project at my university. What I'm trying to do is to let the user of the CRM tag some contacts. I thought of creating an entity to archive the tags and to create an N:N relationship between the tag entity and the contact one. I've created and published the new entity and the relationship, but I don't know how to add a lookup field to the contact form, so that the user can see the tags related to one contact and add a new one. Can anyone help me? If you couldn't understand what I'm trying to do, tell me and I'll reformulate. Thanks A: I started to write up an answer, but realized you were talking about 4.0 and I've only done it in 3.0. I was able to find a screen cast about the new many to many feature in 4.0 however http://www.philiprichardson.org/blog/post/Titan-Many-to-Many-Relationships.aspx
Many to Many Relationship in MS Dynamics CRM 4.0 - How to?
I'm working on a MS CRM server for a project at my university. What I'm trying to do is to let the user of the CRM tag some contacts. I thought of creating an entity to archive the tags and to create an N:N relationship between the tag entity and the contact one. I've created and published the new entity and the relationship, but I don't know how to add a lookup field to the contact form, so that the user can see the tags related to one contact and add a new one. Can anyone help me? If you couldn't understand what I'm trying to do, tell me and I'll reformulate. Thanks
[ "I started to write up an answer, but realized you were talking about 4.0 and I've only done it in 3.0.\nI was able to find a screen cast about the new many to many feature in 4.0 however\nhttp://www.philiprichardson.org/blog/post/Titan-Many-to-Many-Relationships.aspx\n" ]
[ 0 ]
[]
[]
[ "crm", "dynamics_crm_4", "entity_relationship", "many_to_many" ]
stackoverflow_0000092617_crm_dynamics_crm_4_entity_relationship_many_to_many.txt
Q: How to use oAuth tokens I'm using a library to get an 'oAuth_token' and 'oAuth_token_secret'. If I make a request to a REST web service, how are those two keys leveraged to verify authentication? A: Providing a C# example is a little difficult because there are a number of variables, i.e. the signature method being used, additional parameters the service might be expecting, etc., which would affect the complexity of the example. I've developed an open source OAuth library for .Net and posted an article on beginning to use OAuth that might help to get you started - I tried to find a developers page / API specification to brightkite - but because it's a beta service I don't have access - so perhaps post me an invite to this service via my blog and I can have a go at developing an example brightkite client at which point this answer can be revisited with some concrete example code useful to others.
How to use oAuth tokens
I'm using a library to get an 'oAuth_token' and 'oAuth_token_secret'. If I make a request to a REST web service, how are those two keys leveraged to verify authentication?
[ "Providing a C# example is a little difficult because there are a number of variables i.e. the signature method being used, additional parameters the service might be expecting etc. which would affect the complexity of the example.\nI've developed an open source OAuth library for .Net and posted an article on beginning to use OAuth that might help to get you started - I tried to find a developers page / API specification to brightkite - but because it's a beta service I don't have access - so perhaps post me a invite to this service via my blog and I can have a go at developing an example brightkite client at which point this answer can be revisited with some concrete example code useful to others.\n" ]
[ 7 ]
[]
[]
[ "c#", "oauth", "rest" ]
stackoverflow_0000095651_c#_oauth_rest.txt
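The answer in the record above stays high-level, so here is a rough sketch of the core mechanic in OAuth 1.0 that the question asks about: the consumer secret and the token secret are joined with "&" to form the HMAC-SHA1 key that signs the request's signature base string. Percent-encoding of the secrets and construction of the base string are deliberately omitted here.

using System;
using System.Security.Cryptography;
using System.Text;

static string SignRequest(string signatureBaseString, string consumerSecret, string tokenSecret)
{
    // Signing key: consumer secret and token secret joined by an ampersand.
    string key = consumerSecret + "&" + tokenSecret;
    using (HMACSHA1 hmac = new HMACSHA1(Encoding.ASCII.GetBytes(key)))
    {
        byte[] hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(signatureBaseString));
        return Convert.ToBase64String(hash); // sent as the oauth_signature parameter
    }
}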
Q: What is the most efficient way to handle the lifecycle of an object with COM interop? I have a Windows Workflow application that uses classes I've written for COM automation. I'm opening Word and Excel from my classes using COM. I'm currently implementing IDisposable in my COM helper and using Marshal.ReleaseComObject(). However, if my Workflow fails, the Dispose() method isn't being called and the Word or Excel handles stay open and my application hangs. The solution to this problem is pretty straightforward, but rather than just solve it, I'd like to learn something and gain insight into the right way to work with COM. I'm looking for the "best" or most efficient and safest way to handle the lifecycle of the classes that own the COM handles. Patterns, best practices, or sample code would be helpful. A: I cannot see what failure you have that does not call the Dispose() method. I made a test with a sequential workflow that contains only a code activity which just throws an exception and the Dispose() method of my workflow is called twice (this is because of the standard WorkflowTerminated event handler). Check the following code: Program.cs class Program { static void Main(string[] args) { using(WorkflowRuntime workflowRuntime = new WorkflowRuntime()) { AutoResetEvent waitHandle = new AutoResetEvent(false); workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e) { waitHandle.Set(); }; workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e) { Console.WriteLine(e.Exception.Message); waitHandle.Set(); }; WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(WorkflowConsoleApplication1.Workflow1)); instance.Start(); waitHandle.WaitOne(); } Console.ReadKey(); } } Workflow1.cs public sealed partial class Workflow1: SequentialWorkflowActivity { public Workflow1() { InitializeComponent(); this.codeActivity1.ExecuteCode += new System.EventHandler(this.codeActivity1_ExecuteCode); } [DebuggerStepThrough()] private void codeActivity1_ExecuteCode(object sender, EventArgs e) { Console.WriteLine("Throw ApplicationException."); throw new ApplicationException(); } protected override void Dispose(bool disposing) { if (disposing) { // Here you must free your resources // by calling your COM helper Dispose() method Console.WriteLine("Object disposed."); } } } Am I missing something? Concerning the lifecycle-related methods of an Activity (and consequently of a Workflow) object, please check this post: Activity "Lifetime" Methods. If you just want a generic article about disposing, check this. A: Basically, you should not rely on hand code to call Dispose() on your object at the end of the work. You probably have something like this right now: MyComHelper helper = new MyComHelper(); helper.DoStuffWithExcel(); helper.Dispose(); ... Instead, you need to use try blocks to catch any exception that might be triggered and call dispose at that point. This is the canonical way: MyComHelper helper = new MyComHelper(); try { helper.DoStuffWithExcel(); } finally { helper.Dispose(); } This is so common that C# has a special construct that generates the same exact code [see note] as shown above; this is what you should be doing most of the time (unless you have some special object construction semantics that make a manual pattern like the above easier to work with): using(MyComHelper helper = new MyComHelper()) { helper.DoStuffWithExcel(); } EDIT: NOTE: The actual code generated is a tiny bit more complicated than the second example above, because it also introduces a new local scope that makes the helper object unavailable after the using block. It's like if the second code block was surrounded by { }'s. That was omitted for clarity of the explanation.
What is the most efficient way to handle the lifecycle of an object with COM interop?
I have a Windows Workflow application that uses classes I've written for COM automation. I'm opening Word and Excel from my classes using COM. I'm currently implementing IDisposable in my COM helper and using Marshal.ReleaseComObject(). However, if my Workflow fails, the Dispose() method isn't being called and the Word or Excel handles stay open and my application hangs. The solution to this problem is pretty straightforward, but rather than just solve it, I'd like to learn something and gain insight into the right way to work with COM. I'm looking for the "best" or most efficient and safest way to handle the lifecycle of the classes that own the COM handles. Patterns, best practices, or sample code would be helpful.
[ "I can not see what failure you have that does not calls the Dispose() method. I made a test with a sequential workflow that contains only a code activity which just throws an exception and the Dispose() method of my workflow is called twice (this is because of the standard WorkflowTerminated event handler). Check the following code:\nProgram.cs\n class Program\n {\n static void Main(string[] args)\n {\n using(WorkflowRuntime workflowRuntime = new WorkflowRuntime())\n {\n AutoResetEvent waitHandle = new AutoResetEvent(false);\n workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e) \n {\n waitHandle.Set();\n };\n workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)\n {\n Console.WriteLine(e.Exception.Message);\n waitHandle.Set();\n };\n\n WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(WorkflowConsoleApplication1.Workflow1));\n instance.Start();\n\n waitHandle.WaitOne();\n }\n Console.ReadKey();\n }\n }\n\nWorkflow1.cs\n public sealed partial class Workflow1: SequentialWorkflowActivity\n {\n public Workflow1()\n {\n InitializeComponent();\n this.codeActivity1.ExecuteCode += new System.EventHandler(this.codeActivity1_ExecuteCode);\n }\n\n [DebuggerStepThrough()]\n private void codeActivity1_ExecuteCode(object sender, EventArgs e)\n {\n Console.WriteLine(\"Throw ApplicationException.\");\n throw new ApplicationException();\n }\n\n protected override void Dispose(bool disposing)\n {\n if (disposing)\n {\n // Here you must free your resources \n // by calling your COM helper Dispose() method\n Console.WriteLine(\"Object disposed.\");\n }\n }\n }\n\nAm I missing something? Concerning the lifecycle-related methods of an Activity (and consequently of a Workflow) object, please check this post: Activity \"Lifetime\" Methods. If you just want a generic article about disposing, check this.\n", "Basically, you should not rely on hand code to call Dispose() on your object at the end of the work. You probably have something like this right now:\nMyComHelper helper = new MyComHelper();\nhelper.DoStuffWithExcel();\nhelper.Dispose();\n...\n\nInstead, you need to use try blocks to catch any exception that might be triggered and call dispose at that point. This is the canonical way:\nMyComHelper helper = new MyComHelper();\ntry\n{\n helper.DoStuffWithExcel();\n}\nfinally()\n{\n helper.Dispose();\n}\n\nThis is so common that C# has a special construct that generates the same exact code [see note] as shown above; this is what you should be doing most of the time (unless you have some special object construction semantics that make a manual pattern like the above easier to work with):\nusing(MyComHelper helper = new MyComHelper())\n{\n helper.DoStuffWithExcel();\n}\n\nEDIT:\nNOTE: The actual code generated is a tiny bit more complicated than the second example above, because it also introduces a new local scope that makes the helper object unavailable after the using block. It's like if the second code block was surrounded by { }'s. That was omitted for clarify of the explanation.\n" ]
[ 1, 0 ]
[]
[]
[ ".net_3.0", "com", "interop", "marshalling", "workflow_foundation" ]
stackoverflow_0000095834_.net_3.0_com_interop_marshalling_workflow_foundation.txt
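To make the try/finally and using patterns from the answers above concrete, here is a hedged sketch of what a disposable COM helper might look like; the ProgID and member names are illustrative, not from the original question.

using System;
using System.Runtime.InteropServices;

public sealed class MyComHelper : IDisposable
{
    private object _excelApp; // late-bound to avoid a hard interop reference

    public MyComHelper()
    {
        Type excelType = Type.GetTypeFromProgID("Excel.Application");
        _excelApp = Activator.CreateInstance(excelType);
    }

    public void DoStuffWithExcel() { /* automation calls go here */ }

    public void Dispose()
    {
        if (_excelApp != null)
        {
            // Release the COM handle even when the caller's workflow fails,
            // as long as Dispose runs via using/finally.
            Marshal.ReleaseComObject(_excelApp);
            _excelApp = null;
        }
    }
}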
Q: How to publish wmi classes in .net? I've created a seperate assembly with a class that is intended to be published through wmi. Then I've created a windows forms app that references that assembly and attempts to publish the class. When I try to publish the class, I get an exception of type System.Management.Instrumentation.WmiProviderInstallationException. The message of the exception says "Exception of type 'System.Management.Instrumentation.WMIInfraException' was thrown.". I have no idea what this means. I've tried .Net2.0 and .Net3.5 (sp1 too) and get the same results. Below is my wmi class, followed by the code I used to publish it. //Interface.cs in assembly WMI.Interface.dll using System; using System.Collections.Generic; using System.Text; [assembly: System.Management.Instrumentation.WmiConfiguration(@"root\Test", HostingModel = System.Management.Instrumentation.ManagementHostingModel.Decoupled)] namespace WMI { [System.ComponentModel.RunInstaller(true)] public class MyApplicationManagementInstaller : System.Management.Instrumentation.DefaultManagementInstaller { } [System.Management.Instrumentation.ManagementEntity(Singleton = true)] [System.Management.Instrumentation.ManagementQualifier("Description", Value = "Obtain processor information.")] public class Interface { [System.Management.Instrumentation.ManagementBind] public Interface() { } [System.Management.Instrumentation.ManagementProbe] [System.Management.Instrumentation.ManagementQualifier("Descriiption", Value="The number of processors.")] public int ProcessorCount { get { return Environment.ProcessorCount; } } } } //Button click in windows forms application to publish class try { System.Management.Instrumentation.InstrumentationManager.Publish(new WMI.Interface()); } catch (System.Management.Instrumentation.InstrumentationException exInstrumentation) { MessageBox.Show(exInstrumentation.ToString()); } catch (System.Management.Instrumentation.WmiProviderInstallationException exProvider) { MessageBox.Show(exProvider.ToString()); } catch (Exception exPublish) { MessageBox.Show(exPublish.ToString()); } A: To summarize, this is the final code that works: Provider class, in it's own assembly: // the namespace used for publishing the WMI classes and object instances [assembly: Instrumented("root/mytest")] using System; using System.Collections.Generic; using System.Text; using System.Management; using System.Management.Instrumentation; using System.Configuration.Install; using System.ComponentModel; namespace WMITest { [InstrumentationClass(System.Management.Instrumentation.InstrumentationType.Instance)] //[ManagementEntity()] //[ManagementQualifier("Description",Value = "Obtain processor information.")] public class MyWMIInterface { //[System.Management.Instrumentation.ManagementBind] public MyWMIInterface() { } //[ManagementProbe] //[ManagementQualifier("Descriiption", Value="The number of processors.")] public int ProcessorCount { get { return Environment.ProcessorCount; } } } /// <summary> /// This class provides static methods to publish messages to WMI /// </summary> public static class InstrumentationProvider { /// <summary> /// publishes a message to the WMI repository /// </summary> /// <param name="MessageText">the message text</param> /// <param name="Type">the message type</param> public static MyWMIInterface Publish() { // create a new message MyWMIInterface pInterface = new MyWMIInterface(); Instrumentation.Publish(pInterface); return pInterface; } /// <summary> /// revoke a previously published message from the 
WMI repository /// </summary> /// <param name="Message">the message to revoke</param> public static void Revoke(MyWMIInterface pInterface) { Instrumentation.Revoke(pInterface); } } /// <summary> /// Installer class which will publish the InfoMessage to the WMI schema /// (the assembly attribute Instrumented defines the namespace this /// class gets published to) /// </summary> [RunInstaller(true)] public class WMITestManagementInstaller : DefaultManagementProjectInstaller { } } Windows Forms application main form, publishes provider class: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Management; using System.Management.Instrumentation; namespace WMI { public partial class Form1 : Form { public Form1() { InitializeComponent(); } WMITest.MyWMIInterface pIntf_m; private void btnPublish_Click(object sender, EventArgs e) { try { pIntf_m = WMITest.InstrumentationProvider.Publish(); } catch (ManagementException exManagement) { MessageBox.Show(exManagement.ToString()); } catch (Exception exPublish) { MessageBox.Show(exPublish.ToString()); } } } } Test web application, consumer: using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Management.Instrumentation; using System.Management; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { ManagementClass pWMIClass = null; pWMIClass = new ManagementClass(@"root\mytest:MyWMIInterface"); lblOutput.Text = "ClassName: " + pWMIClass.ClassPath.ClassName + "<BR/>" + "IsClass: " + pWMIClass.ClassPath.IsClass + "<BR/>" + "IsInstance: " + pWMIClass.ClassPath.IsInstance + "<BR/>" + "IsSingleton: " + pWMIClass.ClassPath.IsSingleton + "<BR/>" + "Namespace Path: " + pWMIClass.ClassPath.NamespacePath + "<BR/>" + "Path: " + pWMIClass.ClassPath.Path + "<BR/>" + "Relative Path: " + pWMIClass.ClassPath.RelativePath + "<BR/>" + "Server: " + pWMIClass.ClassPath.Server + "<BR/>"; //GridView control this.gvProperties.DataSource = pWMIClass.Properties; this.gvProperties.DataBind(); //GridView control this.gvSystemProperties.DataSource = pWMIClass.SystemProperties; this.gvSystemProperties.DataBind(); //GridView control this.gvDerivation.DataSource = pWMIClass.Derivation; this.gvDerivation.DataBind(); //GridView control this.gvMethods.DataSource = pWMIClass.Methods; this.gvMethods.DataBind(); //GridView control this.gvQualifiers.DataSource = pWMIClass.Qualifiers; this.gvQualifiers.DataBind(); } } } A: I used gacutil and installutil to test your class (as a DLL). The gacutil part worked, but installutil (actually mofcomp) complained about a syntax error: ... error SYNTAX 0X80044014: Unexpected character in class name (must be an identifier) Compiler returned error 0x80044014 ... After I changed the class name to 'MyInterface', the installutil part worked, but the class didn't return any instances. Finally I changed the hosting model to Network Service and got it to work.
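For reference, a minimal sketch of a corrected provider, combining the two fixes the second answer reports (renaming the class, since mofcomp rejects "Interface" as a class name, and switching the hosting model to NetworkService) with the question's original attribute style. The class name MyInterface and the root\Test namespace are carried over from the code above for illustration, not requirements:

//Interface.cs - hedged sketch, not the poster's verbatim final code
using System;

[assembly: System.Management.Instrumentation.WmiConfiguration(@"root\Test",
    HostingModel = System.Management.Instrumentation.ManagementHostingModel.NetworkService)]

namespace WMI
{
    [System.ComponentModel.RunInstaller(true)]
    public class MyApplicationManagementInstaller :
        System.Management.Instrumentation.DefaultManagementInstaller { }

    [System.Management.Instrumentation.ManagementEntity(Singleton = true)]
    public class MyInterface // renamed: mofcomp chokes on 'Interface'
    {
        [System.Management.Instrumentation.ManagementBind]
        public MyInterface() { }

        [System.Management.Instrumentation.ManagementProbe]
        public int ProcessorCount
        {
            get { return Environment.ProcessorCount; }
        }
    }
}

The schema still has to be registered once before publishing, for example from a Visual Studio command prompt (paths are illustrative):

gacutil /i WMI.Interface.dll
installutil WMI.Interface.dll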
How to publish WMI classes in .NET?
I've created a separate assembly with a class that is intended to be published through WMI. Then I've created a Windows Forms app that references that assembly and attempts to publish the class. When I try to publish the class, I get an exception of type System.Management.Instrumentation.WmiProviderInstallationException. The exception message says "Exception of type 'System.Management.Instrumentation.WMIInfraException' was thrown." I have no idea what this means. I've tried .NET 2.0 and .NET 3.5 (SP1 too) and get the same results. Below is my WMI class, followed by the code I used to publish it. //Interface.cs in assembly WMI.Interface.dll using System; using System.Collections.Generic; using System.Text; [assembly: System.Management.Instrumentation.WmiConfiguration(@"root\Test", HostingModel = System.Management.Instrumentation.ManagementHostingModel.Decoupled)] namespace WMI { [System.ComponentModel.RunInstaller(true)] public class MyApplicationManagementInstaller : System.Management.Instrumentation.DefaultManagementInstaller { } [System.Management.Instrumentation.ManagementEntity(Singleton = true)] [System.Management.Instrumentation.ManagementQualifier("Description", Value = "Obtain processor information.")] public class Interface { [System.Management.Instrumentation.ManagementBind] public Interface() { } [System.Management.Instrumentation.ManagementProbe] [System.Management.Instrumentation.ManagementQualifier("Description", Value="The number of processors.")] public int ProcessorCount { get { return Environment.ProcessorCount; } } } } //Button click in Windows Forms application to publish class try { System.Management.Instrumentation.InstrumentationManager.Publish(new WMI.Interface()); } catch (System.Management.Instrumentation.InstrumentationException exInstrumentation) { MessageBox.Show(exInstrumentation.ToString()); } catch (System.Management.Instrumentation.WmiProviderInstallationException exProvider) { MessageBox.Show(exProvider.ToString()); } catch (Exception exPublish) { MessageBox.Show(exPublish.ToString()); }
[ "To summarize, this is the final code that works:\nProvider class, in it's own assembly:\n// the namespace used for publishing the WMI classes and object instances \n[assembly: Instrumented(\"root/mytest\")]\n\nusing System;\nusing System.Collections.Generic;\nusing System.Text;\nusing System.Management;\nusing System.Management.Instrumentation;\nusing System.Configuration.Install;\nusing System.ComponentModel;\n\nnamespace WMITest\n{\n\n [InstrumentationClass(System.Management.Instrumentation.InstrumentationType.Instance)] \n //[ManagementEntity()]\n //[ManagementQualifier(\"Description\",Value = \"Obtain processor information.\")]\n public class MyWMIInterface\n {\n //[System.Management.Instrumentation.ManagementBind]\n public MyWMIInterface()\n {\n }\n\n //[ManagementProbe]\n //[ManagementQualifier(\"Descriiption\", Value=\"The number of processors.\")]\n public int ProcessorCount\n {\n get { return Environment.ProcessorCount; }\n }\n }\n\n /// <summary>\n /// This class provides static methods to publish messages to WMI\n /// </summary>\n public static class InstrumentationProvider\n {\n /// <summary>\n /// publishes a message to the WMI repository\n /// </summary>\n /// <param name=\"MessageText\">the message text</param>\n /// <param name=\"Type\">the message type</param>\n public static MyWMIInterface Publish()\n {\n // create a new message\n MyWMIInterface pInterface = new MyWMIInterface();\n\n Instrumentation.Publish(pInterface);\n\n return pInterface;\n }\n\n /// <summary>\n /// revoke a previously published message from the WMI repository\n /// </summary>\n /// <param name=\"Message\">the message to revoke</param>\n public static void Revoke(MyWMIInterface pInterface)\n {\n Instrumentation.Revoke(pInterface);\n } \n }\n\n /// <summary>\n /// Installer class which will publish the InfoMessage to the WMI schema\n /// (the assembly attribute Instrumented defines the namespace this\n /// class gets published too\n /// </summary>\n [RunInstaller(true)]\n public class WMITestManagementInstaller :\n DefaultManagementProjectInstaller\n {\n }\n}\n\nWindows forms application main form, publishes provider class:\nusing System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Text;\nusing System.Windows.Forms;\nusing System.Management;\nusing System.Management.Instrumentation;\n\nnamespace WMI\n{\n public partial class Form1 : Form\n {\n public Form1()\n {\n InitializeComponent();\n }\n\n WMITest.MyWMIInterface pIntf_m;\n\n private void btnPublish_Click(object sender, EventArgs e)\n {\n try\n {\n pIntf_m = WMITest.InstrumentationProvider.Publish();\n }\n catch (ManagementException exManagement)\n {\n MessageBox.Show(exManagement.ToString());\n }\n catch (Exception exPublish)\n {\n MessageBox.Show(exPublish.ToString());\n }\n }\n }\n}\n\nTest web application, consumer:\nusing System;\nusing System.Data;\nusing System.Configuration;\nusing System.Web;\nusing System.Web.Security;\nusing System.Web.UI;\nusing System.Web.UI.WebControls;\nusing System.Web.UI.WebControls.WebParts;\nusing System.Web.UI.HtmlControls;\nusing System.Management.Instrumentation;\nusing System.Management;\n\npublic partial class _Default : System.Web.UI.Page \n{\n protected void Page_Load(object sender, EventArgs e)\n {\n if (!IsPostBack)\n {\n ManagementClass pWMIClass = null;\n\n pWMIClass = new ManagementClass(@\"root\\interiorhealth:MyWMIInterface\");\n\n lblOutput.Text = \"ClassName: \" + pWMIClass.ClassPath.ClassName + \"<BR/>\" +\n 
\"IsClass: \" + pWMIClass.ClassPath.IsClass + \"<BR/>\" +\n \"IsInstance: \" + pWMIClass.ClassPath.IsInstance + \"<BR/>\" +\n \"IsSingleton: \" + pWMIClass.ClassPath.IsSingleton + \"<BR/>\" +\n \"Namespace Path: \" + pWMIClass.ClassPath.NamespacePath + \"<BR/>\" +\n \"Path: \" + pWMIClass.ClassPath.Path + \"<BR/>\" +\n \"Relative Path: \" + pWMIClass.ClassPath.RelativePath + \"<BR/>\" +\n \"Server: \" + pWMIClass.ClassPath.Server + \"<BR/>\";\n\n //GridView control\n this.gvProperties.DataSource = pWMIClass.Properties;\n this.gvProperties.DataBind();\n\n //GridView control\n this.gvSystemProperties.DataSource = pWMIClass.SystemProperties;\n this.gvSystemProperties.DataBind();\n\n //GridView control\n this.gvDerivation.DataSource = pWMIClass.Derivation;\n this.gvDerivation.DataBind();\n\n //GridView control\n this.gvMethods.DataSource = pWMIClass.Methods;\n this.gvMethods.DataBind();\n\n //GridView control\n this.gvQualifiers.DataSource = pWMIClass.Qualifiers;\n this.gvQualifiers.DataBind();\n }\n }\n}\n\n", "I used gacutil - installutil to to test your class (as a dll). The gacutil part worked, but installutil (actually mofcomp) complained about a syntax error:\n...\nerror SYNTAX 0X80044014:\nUnexpected character in class name (must be an identifier)\nCompiler returned error 0x80044014\n...\nSo I changed the class name to 'MyInterface' the installutil part worked, but the class didn't return any instances. Finally I changed the hosting model to Network Service and got it to work.\n" ]
[ 3, 0 ]
[]
[]
[ ".net", ".net_2.0", ".net_3.5", "c#", "wmi" ]
stackoverflow_0000065364_.net_.net_2.0_.net_3.5_c#_wmi.txt