Q: Combining values from different files into one CSV file I have a couple of files containing a value in each line. EDIT: I figured out the answer to this question while in the midst of writing the post and didn't realize I had posted it by mistake in its incomplete state. I was trying to do: paste -d ',' file1 file2 file3 file4 > file5.csv and was getting weird output. I later realized that was happening because some files had both a carriage return and a newline character at the end of the line while others had only the newline character. I've got to remember to always pay attention to those things. A: file1: 1 2 3 file2: 2 4 6 paste --delimiters=\; file1 file2 Will yield: 1;2 2;4 3;6 A: I have a feeling you haven't finished typing your question yet, but I'll give it a shot still. ;) file1: file2: file3: 1 a A 2 b B 3 c C ~$ paste file{1,2,3} |sed 's/^\|$/"/g; s/\t/","/g' "1","a","A" "2","b","B" "3","c","C" Or, ~$ paste --delimiter , file{1,2,3} 1,a,A 2,b,B 3,c,C A: You probably need to clarify or retag your question, but as it stands the answer is below. Joining two files under Linux: cat filetwo >> fileone A: Also don't forget about the ever-versatile LogParser if you're on Windows. It can run SQL-like queries against flat text files to perform all sorts of merge operations. A: The previous answers using LogParser or the command-line tools should work. If you want to do some more complicated operations on the records, like filtering or joins, you could consider using an ETL tool (Pentaho, Mapforce and Talend come to mind). These tools generally give you a graphical palette to define the relationships between data sources and any operations you want to perform on the rows.
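The mixed CRLF/LF problem the asker hit can also be handled explicitly instead of fixing the files by hand. Here is a small Python sketch (the file names and function name are illustrative, not from any answer above) that strips each line's trailing \r\n or \n before pasting the files together into one CSV:

```python
# Combine one-value-per-line files into a CSV, tolerating mixed
# CRLF/LF line endings (the problem described in the question).
import csv

def combine(paths, out_path):
    columns = []
    for path in paths:
        # newline="" gives us the raw line endings; splitlines()
        # then drops "\n" and "\r\n" alike.
        with open(path, newline="") as f:
            columns.append(f.read().splitlines())
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        # Row i is the i-th value of each input file, like paste(1).
        writer.writerows(zip(*columns))
```

Unlike a plain paste, this normalizes the line endings first, so a file saved on Windows and a file saved on Unix combine cleanly.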
{ "language": "en", "url": "https://stackoverflow.com/questions/25225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: 100% Min Height CSS layout What's the best way to make an element of 100% minimum height across a wide range of browsers? In particular, if you have a layout with a header and footer of fixed height, how do you make the middle content part fill 100% of the space in between, with the footer fixed to the bottom? A: A pure CSS solution (#content { min-height: 100%; }) will work in a lot of cases, but not in all of them - especially IE6 and IE7. Unfortunately, you will need to resort to a JavaScript solution in order to get the desired behavior. This can be done by calculating the desired height for your content <div> and setting it as a CSS property in a function: function resizeContent() { var contentDiv = document.getElementById('content'); var headerDiv = document.getElementById('header'); // This may need to be done differently on IE than FF, but you get the idea. var viewportHeight = window.innerHeight - headerDiv.clientHeight; contentDiv.style.height = Math.max(viewportHeight, contentDiv.clientHeight) + 'px'; } You can then set this function as a handler for onLoad and onResize events: <body onload="resizeContent()" onresize="resizeContent()"> . . . </body> A: For min-height to work correctly with percentages, while inheriting its parent node's min-height, the trick would be to set the parent node height to 1px; then the child's min-height will work correctly. 
Demo page A: To set a custom height locked to somewhere: body, html { height: 100%; } #outerbox { width: 100%; position: absolute; /* to place it somewhere on the screen */ top: 130px; /* free space at top */ bottom: 0; /* makes it lock to the bottom */ } #innerbox { width: 100%; position: absolute; min-height: 100% !important; /* browser fill */ height: auto; /* content fill */ } <div id="outerbox"> <div id="innerbox"></div> </div> A: I agree with Levik. As the parent container is set to 100%, if you have sidebars and want them to fill the space to meet up with the footer, you cannot set them to 100%, because they will be 100 percent of the parent height as well, which means that the footer ends up getting pushed down when using the clear function. Think of it this way: if your header is, say, 50px high and your footer is 50px high, and the content is just auto-fitted to the remaining space, say 100px, and the page container is 100% of this value, its height will be 200px. Then when you set the sidebar height to 100%, it is 200px even though it is supposed to fit snugly between the header and footer. Instead it ends up being 50px + 200px + 50px, so the page is now 300px because the sidebars are set to the same height as the page container. There will be a big white space in the contents of the page. I am using Internet Explorer 9 and this is the effect I am getting when using this 100% method. I haven't tried it in other browsers, and I assume it may work in some of the others, but it will not be universal. A: First you should create a div with id='footer' after your content div and then simply do this. Your HTML should look like this: <html> <body> <div id="content"> ... </div> <div id="footer"></div> </body> </html> And the CSS: html, body { height: 100%; } #content { height: 100%; } #footer { clear: both; } A: Here is another solution based on vh, or viewport height; for details, visit CSS units. 
It is based on this solution, which uses flex instead. * { /* personal preference */ margin: 0; padding: 0; } html { /* make sure we use up the whole viewport */ width: 100%; min-height: 100vh; /* for debugging, a red background lets us see any seams */ background-color: red; } body { /* make sure we use the full width but allow for more height */ width: 100%; min-height: 100vh; /* this helps with the sticky footer */ } main { /* for debugging, a blue background lets us see the content */ background-color: skyblue; min-height: calc(100vh - 2.5em); /* this leaves space for the sticky footer */ } footer { /* for debugging, a gray background lets us see the footer */ background-color: gray; min-height:2.5em; } <main> <p>This is the content. Resize the viewport vertically to see how the footer behaves.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> <p>This is the content.</p> </main> <footer> <p>This is the footer. Resize the viewport horizontally to see how the height behaves when text wraps.</p> <p>This is the footer.</p> </footer> The units are vw, vh, vmax, vmin. Basically, each unit is equal to 1% of viewport size. So, as the viewport changes, the browser computes that value and adjusts accordingly. You may find more information here: Specifically: 1vw (viewport width) = 1% of viewport width 1vh (viewport height) = 1% of viewport height 1vmin (viewport minimum) = 1vw or 1vh, whichever is smallest 1vmax (viewport maximum) = 1vw or 1vh, whichever is largest A: Try this: body { height: 100%; } #content { min-height: 500px; height: 100%; } #footer { height: 100px; clear: both !important; } The div element below the content div must have clear:both. 
A: Probably the shortest solution (works only in modern browsers) This small piece of CSS makes "the middle content part fill 100% of the space in between with the footer fixed to the bottom": html, body { height: 100%; } your_container { min-height: calc(100% - height_of_your_footer); } The only requirement is a fixed-height footer. For example, for this layout: <html><head></head><body> <main> your main content </main> <footer> your footer content </footer> </body></html> you need this CSS: html, body { height: 100%; } main { min-height: calc(100% - 2em); } footer { height: 2em; } A: kleolb02's answer looks pretty good. Another way would be a combination of the sticky footer and the min-height hack. A: I am using the following one: CSS Layout - 100 % height Min-height The #container element of this page has a min-height of 100%. That way, if the content requires more height than the viewport provides, the height of #content forces #container to become longer as well. Possible columns in #content can then be visualised with a background image on #container; divs are not table cells, and you don't need (or want) the physical elements to create such a visual effect. If you're not yet convinced; think wobbly lines and gradients instead of straight lines and simple color schemes. Relative positioning Because #container has a relative position, #footer will always remain at its bottom; since the min-height mentioned above does not prevent #container from scaling, this will work even if (or rather especially when) #content forces #container to become longer. Padding-bottom Since it is no longer in the normal flow, padding-bottom of #content now provides the space for the absolute #footer. This padding is included in the scrolled height by default, so that the footer will never overlap the above content. Scale the text size a bit or resize your browser window to test this layout. 
html,body { margin:0; padding:0; height:100%; /* needed for container min-height */ background:gray; font-family:arial,sans-serif; font-size:small; color:#666; } h1 { font:1.5em georgia,serif; margin:0.5em 0; } h2 { font:1.25em georgia,serif; margin:0 0 0.5em; } h1, h2, a { color:orange; } p { line-height:1.5; margin:0 0 1em; } div#container { position:relative; /* needed for footer positioning */ margin:0 auto; /* center, not in IE5 */ width:750px; background:#f0f0f0; height:auto !important; /* real browsers */ height:100%; /* IE6: treated as min-height */ min-height:100%; /* real browsers */ } div#header { padding:1em; background:#ddd url("../csslayout.gif") 98% 10px no-repeat; border-bottom:6px double gray; } div#header p { font-style:italic; font-size:1.1em; margin:0; } div#content { padding:1em 1em 5em; /* bottom padding for footer */ } div#content p { text-align:justify; padding:0 1em; } div#footer { position:absolute; width:100%; bottom:0; /* stick to bottom */ background:#ddd; border-top:6px double gray; } div#footer p { padding:1em; margin:0; } Works fine for me. A: You can try this: http://www.monkey-business.biz/88/horizontal-zentriertes-100-hohe-css-layout/ That's 100% height and horizontally centered. A: Just sharing what I've been using; it works nicely: #content{ height: auto; min-height:350px; } A: As mentioned in Afshin Mehrabani's answer, you should set body and html's height to 100%, but to get the footer there, calculate the height of the wrapper: #pagewrapper{ /* Firefox */ height: -moz-calc(100% - 100px); /* assuming, e.g., your header above the wrapper is 80 and the footer below is 20 */ /* WebKit */ height: -webkit-calc(100% - 100px); /* Opera */ height: -o-calc(100% - 100px); /* Standard */ height: calc(100% - 100px); } A: As specified in the MDN docs, the height property is not inherited, so you need to set it to 100% explicitly. Since any web page starts with html, with body as its child, you need to set height: 100% on both of them. 
html, body { height: 100%; } <html> <body> </body> </html> Any child marked with height: 100% will then take 100% of its parent's height. In the example style above, the html tag is set to 100%, so it takes the full height of the available screen. The body is then also 100%, so it too will be full height, since its parent is html.
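For modern browsers there is an even shorter route: the vh-based answer above mentions a variant "which uses flex instead", and with flexbox the whole sticky-footer layout collapses to a few lines. This is a minimal sketch (the element names are illustrative; header and footer keep their natural height and the middle stretches):

```css
/* Sticky footer with flexbox: body becomes a vertical flex column,
   and main grows to fill the space between header and footer. */
html, body { height: 100%; margin: 0; }
body {
  display: flex;
  flex-direction: column;
  min-height: 100vh;
}
main { flex: 1; } /* takes all remaining vertical space */
```

No height calculations, no absolute positioning, and no IE6-era hacks are needed, at the cost of dropping support for very old browsers.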
{ "language": "en", "url": "https://stackoverflow.com/questions/25238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "184" }
Q: Inserting at the very end in FCKeditor FCKeditor has InsertHtml API (JavaScript API document) that inserts HTML in the current cursor position. How do I insert at the very end of the document? Do I need to start browser sniffing with something like this if ( element.insertAdjacentHTML ) // IE element.insertAdjacentHTML( 'beforeBegin', html ) ; else // Gecko { var oRange = document.createRange() ; oRange.setStartBefore( element ) ; var oFragment = oRange.createContextualFragment( html ); element.parentNode.insertBefore( oFragment, element ) ; } or is there a blessed way that I missed? Edit: Of course, I can rewrite the whole HTML, as answers suggest, but I cannot believe that is the "blessed" way. That means that the browser should destroy whatever it has and re-parse the document from scratch. That cannot be good. For example, I expect that to break the undo stack. A: It looks like you could use a combination of GetHTML and SetHTML to get the current contents, append your html and reinsert everything into the editor. Although it does say: "Note that when using this method, you will lose any listener that you may have previously registered on the editor.EditorDocument." Hope that helps! A: var oEditor = FCKeditorAPI.GetInstance('Editor_instance') ; OldText=oEditor.GetXHTML( true ); oEditor.SetData( OldText+"Your text"); A: Replace the buggy line element.insertAdjacentHTML('beforeBegin', html); with this jQuery code: try { $(html).insertBefore($(element)); // element.insertAdjacentHTML('beforeBegin', html); } catch (err) { }
{ "language": "en", "url": "https://stackoverflow.com/questions/25240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unit testing IHttpModule How do you unit test an HttpModule in ASP.NET, given that HttpApplication and HttpContext do not implement an interface? A: In the past, before moving to ASP.NET MVC, I used this library Phil Haack created for unit testing anything that used the HttpApplication and HttpContext. It in turn used a Duck Typing library. Unfortunately, this was the best way to do it. ASP.NET was not made to be easily testable. When they worked on ASP.NET MVC, one of the goals was to get rid of these headaches by making the framework more testable. A: Essentially you need to remove the HttpModule's reliance on HttpApplication and HttpContext, replacing them with an interface. You could create your own IHttpApplication and IHttpContext (along with IHttpResponse, IHttpRequest, etc.) or use the ones mentioned by @Dale Ragan or use the shiny new ones in System.Web.Abstractions that are bundled with the ASP.NET MVC previews. A: You can use an isolation (mocking) framework. I know of two tools that enable you to fake/mock any .NET objects - Typemock Isolator and Telerik JustMock. I think you can also use Moles. All of the above will enable you to fake any .NET object even if it does not implement an interface or even have a public c'tor.
{ "language": "en", "url": "https://stackoverflow.com/questions/25241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Remote Debugging PHP Command Line Scripts with Zend? I'm using Zend Studio to do remote debugging of my PHP scripts on a dev server. It works great for web code, but can I make it work with command line scripts? I have several helper apps to make my application run. It would be really useful to fire up the remote debugger through the command line instead of a web browser so I can test these out. I assume it's possible, since I think Zend is using xdebug to talk to Eclipse. Apparently, it adds some parameters to the request to wake the Zend code up on a request. I'm guessing I'd need to tap into that? UPDATE I ended up using xdebug with protoeditor over X to do my debugging. A: I was able to get remote CLI debugging working in Eclipse, using xdebug, though I've not tried it with the Zend debugger. I would assume this should work the same with ZSfE, if that's the "Zend Studio" you're using. A: Since this is more along the lines of product support, your best bet is probably emailing the support people. We bought Zend Studio at my last job and they were always able to help us in a matter of hours. Feel free to post the answer though, I am sure there are more people looking for it. :) A: There's an option to debug a PHP script: run -> run as -> PHP script. I believe it also has to be in your project root though. Just for clarification, Zend Studio uses its own debugger, while with the Eclipse PDT project you have the option of Xdebug or Zend's debugger. A: Haven't tried, but you can set the QUERY_STRING environment variable to the one that toggles the Zend debugger on. Per this article. export QUERY_STRING="start_debug=1&debug_host=<host name or IP of the local machine>&debug_port=<the port that is configured in your ZDE settings>&debug_stop=1" (note the quotes: the & characters would otherwise be interpreted by the shell) And then run the CLI script. A: Remote command-line debugging is possible; I just tried it. In my case I used Zend Studio + Zend Debugger. This official article by the Zend people will help you out; it's what I used. 
It explains all the parameters that must go into the shell command. Make sure that you have the php.ini properly set on the remote server, and that it allows your IP address and it will work. Also, you don't need to export the QUERY_STRING variable. You can just do: QUERY_STRING="start_debug=1&debug_host=[127.0.0.1]&no_remote=0&debug_port=10137&debug_stop=0" /path/to/php/binary /your/php/script.php Running that on an SSH shell will light up your Zend Studio. Sweet!
{ "language": "en", "url": "https://stackoverflow.com/questions/25252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How does Stack Overflow generate its SEO-friendly URLs? What is a good complete regular expression or some other process that would take the title: How do you change a title to be part of the URL like Stack Overflow? and turn it into how-do-you-change-a-title-to-be-part-of-the-url-like-stack-overflow that is used in the SEO-friendly URLs on Stack Overflow? The development environment I am using is Ruby on Rails, but if there are some other platform-specific solutions (.NET, PHP, Django), I would love to see those too. I am sure I (or another reader) will come across the same problem on a different platform down the line. I am using custom routes, and I mainly want to know how to alter the string to all special characters are removed, it's all lowercase, and all whitespace is replaced. A: For good measure, here's the PHP function in WordPress that does it... I'd think that WordPress is one of the more popular platforms that uses fancy links. function sanitize_title_with_dashes($title) { $title = strip_tags($title); // Preserve escaped octets. $title = preg_replace('|%([a-fA-F0-9][a-fA-F0-9])|', '---$1---', $title); // Remove percent signs that are not part of an octet. $title = str_replace('%', '', $title); // Restore octets. $title = preg_replace('|---([a-fA-F0-9][a-fA-F0-9])---|', '%$1', $title); $title = remove_accents($title); if (seems_utf8($title)) { if (function_exists('mb_strtolower')) { $title = mb_strtolower($title, 'UTF-8'); } $title = utf8_uri_encode($title, 200); } $title = strtolower($title); $title = preg_replace('/&.+?;/', '', $title); // kill entities $title = preg_replace('/[^%a-z0-9 _-]/', '', $title); $title = preg_replace('/\s+/', '-', $title); $title = preg_replace('|-+|', '-', $title); $title = trim($title, '-'); return $title; } This function as well as some of the supporting functions can be found in wp-includes/formatting.php. A: I am not familiar with Ruby on Rails, but the following is (untested) PHP code. 
You can probably translate this very quickly to Ruby on Rails if you find it useful. $sURL = "This is a title to convert to URL-format. It has 1 number in it!"; // To lower-case $sURL = strtolower($sURL); // Replace all non-word characters with spaces $sURL = preg_replace("/\W+/", " ", $sURL); // Trim leading and trailing spaces (so we won't end with a separator) $sURL = trim($sURL); // Replace spaces with separators (hyphens) $sURL = str_replace(" ", "-", $sURL); echo $sURL; // outputs: this-is-a-title-to-convert-to-url-format-it-has-1-number-in-it I hope this helps. A: If you are using Rails edge, you can rely on Inflector.parametrize - here's the example from the documentation: class Person def to_param "#{id}-#{name.parameterize}" end end @person = Person.find(1) # => #<Person id: 1, name: "Donald E. Knuth"> <%= link_to(@person.name, person_path(@person)) %> # => <a href="/person/1-donald-e-knuth">Donald E. Knuth</a> Also if you need to handle more exotic characters such as accents (éphémère) in previous versions of Rails, you can use a mixture of PermalinkFu and DiacriticsFu: DiacriticsFu::escape("éphémère") => "ephemere" DiacriticsFu::escape("räksmörgås") => "raksmorgas" A: I don't know much about Ruby or Rails, but in Perl, this is what I would do: my $title = "How do you change a title to be part of the url like Stackoverflow?"; my $url = lc $title; # Change to lower case and copy to URL. $url =~ s/^\s+//g; # Remove leading spaces. $url =~ s/\s+$//g; # Remove trailing spaces. $url =~ s/\s+/\-/g; # Change one or more spaces to single hyphen. $url =~ s/[^\w\-]//g; # Remove any non-word characters. print "$title\n$url\n"; I just did a quick test and it seems to work. Hopefully this is relatively easy to translate to Ruby. 
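For completeness, here is the same idea as a Python sketch. It combines the regex approach shown above with the Unicode-normalisation trick several answers rely on (decompose accented characters, drop the combining marks, then collapse runs of non-alphanumerics into single hyphens); the function name is just illustrative:

```python
import re
import unicodedata

def slugify(title: str) -> str:
    # Decompose accented characters and drop the combining marks,
    # so e.g. accented Latin letters fall back to plain ASCII.
    ascii_title = (
        unicodedata.normalize("NFKD", title)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Lowercase, turn every run of non-alphanumerics into one hyphen,
    # then trim any leading/trailing hyphens.
    ascii_title = re.sub(r"[^a-z0-9]+", "-", ascii_title.lower())
    return ascii_title.strip("-")
```

Note that, like the simple lookup-free approaches above, this silently drops characters (such as non-Latin scripts) that do not decompose to ASCII; the C# answers further down handle more of those edge cases explicitly.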
A: T-SQL implementation, adapted from dbo.UrlEncode: CREATE FUNCTION dbo.Slug(@string varchar(1024)) RETURNS varchar(3072) AS BEGIN DECLARE @count int, @c char(1), @i int, @slug varchar(3072) SET @string = replace(lower(ltrim(rtrim(@string))),' ','-') SET @count = Len(@string) SET @i = 1 SET @slug = '' WHILE (@i <= @count) BEGIN SET @c = substring(@string, @i, 1) IF @c LIKE '[a-z0-9--]' SET @slug = @slug + @c SET @i = @i +1 END RETURN @slug END A: I know it's a very old question, but since most of the browsers now support unicode urls I found a great solution in XRegex that converts everything except letters (in all languages) to '-'. That can be done in several programming languages. The pattern is \\p{^L}+ and then you just need to use it to replace all non-letters with '-'. Working example in Node.js with the xregex module: var text = 'This ! can @ have # several $ letters % from different languages such as עברית or Español'; var slugRegEx = XRegExp('((?!\\d)\\p{^L})+', 'g'); var slug = XRegExp.replace(text, slugRegEx, '-').toLowerCase(); console.log(slug) ==> "this-can-have-several-letters-from-different-languages-such-as-עברית-or-español" A: Here is my version of Jeff's code. I've made the following changes: * *The hyphens were appended in such a way that one could be added, and then needed removing as it was the last character in the string. That is, we never want “my-slug-”. This means an extra string allocation to remove it on this edge case. I’ve worked around this by delay-hyphening. If you compare my code to Jeff’s the logic for this is easy to follow. *His approach is purely lookup based and missed a lot of characters I found in examples while researching on Stack Overflow. To counter this, I first perform a normalisation pass (AKA collation mentioned in Meta Stack Overflow question Non US-ASCII characters dropped from full (profile) URL), and then ignore any characters outside the acceptable ranges. This works most of the time... 
For when it doesn’t I’ve also had to add a lookup table. As mentioned above, some characters don’t map to a low ASCII value when normalised. Rather than drop these I’ve got a manual list of exceptions that is doubtless full of holes, but it is better than nothing. The normalisation code was inspired by Jon Hanna’s great post in Stack Overflow question How can I remove accents on a string?. *The case conversion is now also optional. public static class Slug { public static string Create(bool toLower, params string[] values) { return Create(toLower, String.Join("-", values)); } /// <summary> /// Creates a slug. /// References: /// http://www.unicode.org/reports/tr15/tr15-34.html /// https://meta.stackexchange.com/questions/7435/non-us-ascii-characters-dropped-from-full-profile-url/7696#7696 /// https://stackoverflow.com/questions/25259/how-do-you-include-a-webpage-title-as-part-of-a-webpage-url/25486#25486 /// https://stackoverflow.com/questions/3769457/how-can-i-remove-accents-on-a-string /// </summary> /// <param name="toLower"></param> /// <param name="normalised"></param> /// <returns></returns> public static string Create(bool toLower, string value) { if (value == null) return ""; var normalised = value.Normalize(NormalizationForm.FormKD); const int maxlen = 80; int len = normalised.Length; bool prevDash = false; var sb = new StringBuilder(len); char c; for (int i = 0; i < len; i++) { c = normalised[i]; if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')) { if (prevDash) { sb.Append('-'); prevDash = false; } sb.Append(c); } else if (c >= 'A' && c <= 'Z') { if (prevDash) { sb.Append('-'); prevDash = false; } // Tricky way to convert to lowercase if (toLower) sb.Append((char)(c | 32)); else sb.Append(c); } else if (c == ' ' || c == ',' || c == '.' 
|| c == '/' || c == '\\' || c == '-' || c == '_' || c == '=') { if (!prevDash && sb.Length > 0) { prevDash = true; } } else { string swap = ConvertEdgeCases(c, toLower); if (swap != null) { if (prevDash) { sb.Append('-'); prevDash = false; } sb.Append(swap); } } if (sb.Length == maxlen) break; } return sb.ToString(); } static string ConvertEdgeCases(char c, bool toLower) { string swap = null; switch (c) { case 'ı': swap = "i"; break; case 'ł': swap = "l"; break; case 'Ł': swap = toLower ? "l" : "L"; break; case 'đ': swap = "d"; break; case 'ß': swap = "ss"; break; case 'ø': swap = "o"; break; case 'Þ': swap = "th"; break; } return swap; } } For more details, the unit tests, and an explanation of why Facebook's URL scheme is a little smarter than Stack Overflows, I've got an expanded version of this on my blog. A: Here's how we do it. Note that there are probably more edge conditions than you realize at first glance. This is the second version, unrolled for 5x more performance (and yes, I benchmarked it). I figured I'd optimize it because this function can be called hundreds of times per page. /// <summary> /// Produces optional, URL-friendly version of a title, "like-this-one". /// hand-tuned for speed, reflects performance refactoring contributed /// by John Gietzen (user otac0n) /// </summary> public static string URLFriendly(string title) { if (title == null) return ""; const int maxlen = 80; int len = title.Length; bool prevdash = false; var sb = new StringBuilder(len); char c; for (int i = 0; i < len; i++) { c = title[i]; if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')) { sb.Append(c); prevdash = false; } else if (c >= 'A' && c <= 'Z') { // tricky way to convert to lowercase sb.Append((char)(c | 32)); prevdash = false; } else if (c == ' ' || c == ',' || c == '.' 
|| c == '/' || c == '\\' || c == '-' || c == '_' || c == '=') { if (!prevdash && sb.Length > 0) { sb.Append('-'); prevdash = true; } } else if ((int)c >= 128) { int prevlen = sb.Length; sb.Append(RemapInternationalCharToAscii(c)); if (prevlen != sb.Length) prevdash = false; } if (i == maxlen) break; } if (prevdash) return sb.ToString().Substring(0, sb.Length - 1); else return sb.ToString(); } To see the previous version of the code this replaced (but is functionally equivalent to, and 5x faster), view revision history of this post (click the date link). Also, the RemapInternationalCharToAscii method source code can be found here. A: Assuming that your model class has a title attribute, you can simply override the to_param method within the model, like this: def to_param title.downcase.gsub(/ /, '-') end This Railscast episode has all the details. You can also ensure that the title only contains valid characters using this: validates_format_of :title, :with => /^[a-z0-9-]+$/, :message => 'can only contain letters, numbers and hyphens' A: Brian's code, in Ruby: title.downcase.strip.gsub(/\ /, '-').gsub(/[^\w\-]/, '') downcase turns the string to lowercase, strip removes leading and trailing whitespace, the first gsub call globally substitutes spaces with dashes, and the second removes everything that isn't a letter or a dash. A: There is a small Ruby on Rails plugin called PermalinkFu, that does this. The escape method does the transformation into a string that is suitable for a URL. Have a look at the code; that method is quite simple. To remove non-ASCII characters it uses the iconv lib to translate to 'ascii//ignore//translit' from 'utf-8'. Spaces are then turned into dashes, everything is downcased, etc. A: You can use the following helper method. It can convert the Unicode characters. 
public static string ConvertTextToSlug(string s) { StringBuilder sb = new StringBuilder(); bool wasHyphen = true; foreach (char c in s) { if (char.IsLetterOrDigit(c)) { sb.Append(char.ToLower(c)); wasHyphen = false; } else if (char.IsWhiteSpace(c) && !wasHyphen) { sb.Append('-'); wasHyphen = true; } } // Avoid trailing hyphens if (wasHyphen && sb.Length > 0) sb.Length--; return sb.ToString().Replace("--","-"); } A: The Stack Overflow solution is great, but modern browsers (excluding IE, as usual) now handle UTF-8 encoding nicely, so I upgraded the proposed solution: public static string ToFriendlyUrl(string title, bool useUTF8Encoding = false) { ... else if (c >= 128) { int prevlen = sb.Length; if (useUTF8Encoding) { sb.Append(HttpUtility.UrlEncode(c.ToString(CultureInfo.InvariantCulture),Encoding.UTF8)); } else { sb.Append(RemapInternationalCharToAscii(c)); } ... } Full Code on Pastebin Edit: Here's the code for the RemapInternationalCharToAscii method (that's missing in the pastebin). A: Here's my (slower, but fun to write) version of Jeff's code: public static string URLFriendly(string title) { char? prevRead = null, prevWritten = null; var seq = from c in title let norm = RemapInternationalCharToAscii(char.ToLowerInvariant(c).ToString())[0] let keep = char.IsLetterOrDigit(norm) where prevRead.HasValue || keep let replaced = keep ? norm : prevWritten != '-' ? '-' : (char?)null where replaced != null let s = replaced + (prevRead == null ? "" : norm == '#' && "cf".Contains(prevRead.Value) ? "sharp" : norm == '+' ? 
"plus" : "") let _ = prevRead = norm from written in s let __ = prevWritten = written select written; const int maxlen = 80; return string.Concat(seq.Take(maxlen)).TrimEnd('-'); } public static string RemapInternationalCharToAscii(string text) { var seq = text.Normalize(NormalizationForm.FormD) .Where(c => CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark); return string.Concat(seq).Normalize(NormalizationForm.FormC); } My test string: " I love C#, F#, C++, and... Crème brûlée!!! They see me codin'... they hatin'... tryin' to catch me codin' dirty... " A: You will want to setup a custom route to point the URL to the controller that will handle it. Since you are using Ruby on Rails, here is an introduction in using their routing engine. In Ruby, you will need a regular expression like you already know and here is the regular expression to use: def permalink_for(str) str.gsub(/[^\w\/]|[!\(\)\.]+/, ' ').strip.downcase.gsub(/\ +/, '-') end A: You can also use this JavaScript function for in-form generation of the slug's (this one is based on/copied from Django): function makeSlug(urlString, filter) { // Changes, e.g., "Petty theft" to "petty_theft". // Remove all these words from the string before URLifying if(filter) { removelist = ["a", "an", "as", "at", "before", "but", "by", "for", "from", "is", "in", "into", "like", "of", "off", "on", "onto", "per", "since", "than", "the", "this", "that", "to", "up", "via", "het", "de", "een", "en", "with"]; } else { removelist = []; } s = urlString; r = new RegExp('\\b(' + removelist.join('|') + ')\\b', 'gi'); s = s.replace(r, ''); s = s.replace(/[^-\w\s]/g, ''); // Remove unneeded characters s = s.replace(/^\s+|\s+$/g, ''); // Trim leading/trailing spaces s = s.replace(/[-\s]+/g, '-'); // Convert spaces to hyphens s = s.toLowerCase(); // Convert to lowercase return s; // Trim to first num_chars characters } A: I liked the way this is done without using regular expressions, so I ported it to PHP. 
I just added a function called is_between to check characters: function is_between($val, $min, $max) { $val = (int) $val; $min = (int) $min; $max = (int) $max; return ($val >= $min && $val <= $max); } function international_char_to_ascii($char) { if (mb_strpos('àåáâäãåą', $char) !== false) { return 'a'; } if (mb_strpos('èéêëę', $char) !== false) { return 'e'; } if (mb_strpos('ìíîïı', $char) !== false) { return 'i'; } if (mb_strpos('òóôõöøőð', $char) !== false) { return 'o'; } if (mb_strpos('ùúûüŭů', $char) !== false) { return 'u'; } if (mb_strpos('çćčĉ', $char) !== false) { return 'c'; } if (mb_strpos('żźž', $char) !== false) { return 'z'; } if (mb_strpos('śşšŝ', $char) !== false) { return 's'; } if (mb_strpos('ñń', $char) !== false) { return 'n'; } if (mb_strpos('ýÿ', $char) !== false) { return 'y'; } if (mb_strpos('ğĝ', $char) !== false) { return 'g'; } if (mb_strpos('ř', $char) !== false) { return 'r'; } if (mb_strpos('ł', $char) !== false) { return 'l'; } if (mb_strpos('đ', $char) !== false) { return 'd'; } if (mb_strpos('ß', $char) !== false) { return 'ss'; } if (mb_strpos('Þ', $char) !== false) { return 'th'; } if (mb_strpos('ĥ', $char) !== false) { return 'h'; } if (mb_strpos('ĵ', $char) !== false) { return 'j'; } return ''; } function url_friendly_title($url_title) { if (empty($url_title)) { return ''; } $url_title = mb_strtolower($url_title); $url_title_max_length = 80; $url_title_length = mb_strlen($url_title); $url_title_friendly = ''; $url_title_dash_added = false; $url_title_char = ''; for ($i = 0; $i < $url_title_length; $i++) { $url_title_char = mb_substr($url_title, $i, 1); if (strlen($url_title_char) == 2) { $url_title_ascii = ord($url_title_char[0]) * 256 + ord($url_title_char[1]); } else { $url_title_ascii = ord($url_title_char); } if (is_between($url_title_ascii, 97, 122) || is_between($url_title_ascii, 48, 57)) { $url_title_friendly .= $url_title_char; $url_title_dash_added = false; } elseif(is_between($url_title_ascii, 65, 90)) { $url_title_friendly .= chr(($url_title_ascii | 32)); $url_title_dash_added = false; } elseif($url_title_ascii == 32 || $url_title_ascii == 44 || $url_title_ascii == 46 || $url_title_ascii == 47 || $url_title_ascii == 92 || $url_title_ascii == 45 || $url_title_ascii == 47 || $url_title_ascii == 95 || $url_title_ascii == 61) { if (!$url_title_dash_added && mb_strlen($url_title_friendly) > 0) { $url_title_friendly .= chr(45); $url_title_dash_added = true; } } else if ($url_title_ascii >= 128) { $url_title_previous_length = mb_strlen($url_title_friendly); $url_title_friendly .= international_char_to_ascii($url_title_char); if ($url_title_previous_length != mb_strlen($url_title_friendly)) { $url_title_dash_added = false; } } if ($i == $url_title_max_length) { break; } } if ($url_title_dash_added) { return mb_substr($url_title_friendly, 0, -1); } else { return $url_title_friendly; } } A: Now all browsers handle UTF-8 encoding nicely, so you can use the WebUtility.UrlEncode method; it's like the HttpUtility.UrlEncode used by @giamin, but it works outside of a web application. A: I ported the code to TypeScript. It can easily be adapted to JavaScript. I am adding a .contains method to the String prototype; if you're targeting the latest browsers or ES6 you can use .includes instead. 
if (!String.prototype.contains) { String.prototype.contains = function (check) { return this.indexOf(check, 0) !== -1; }; } declare interface String { contains(check: string): boolean; } export function MakeUrlFriendly(title: string) { if (title == null || title == '') return ''; const maxlen = 80; let len = title.length; let prevdash = false; let result = ''; let c: string; let cc: number; let remapInternationalCharToAscii = function (c: string) { let s = c.toLowerCase(); if ("àåáâäãåą".contains(s)) { return "a"; } else if ("èéêëę".contains(s)) { return "e"; } else if ("ìíîïı".contains(s)) { return "i"; } else if ("òóôõöøőð".contains(s)) { return "o"; } else if ("ùúûüŭů".contains(s)) { return "u"; } else if ("çćčĉ".contains(s)) { return "c"; } else if ("żźž".contains(s)) { return "z"; } else if ("śşšŝ".contains(s)) { return "s"; } else if ("ñń".contains(s)) { return "n"; } else if ("ýÿ".contains(s)) { return "y"; } else if ("ğĝ".contains(s)) { return "g"; } else if (c == 'ř') { return "r"; } else if (c == 'ł') { return "l"; } else if (c == 'đ') { return "d"; } else if (c == 'ß') { return "ss"; } else if (c == 'Þ') { return "th"; } else if (c == 'ĥ') { return "h"; } else if (c == 'ĵ') { return "j"; } else { return ""; } }; for (let i = 0; i < len; i++) { c = title[i]; cc = c.charCodeAt(0); if ((cc >= 97 /* a */ && cc <= 122 /* z */) || (cc >= 48 /* 0 */ && cc <= 57 /* 9 */)) { result += c; prevdash = false; } else if ((cc >= 65 && cc <= 90 /* A - Z */)) { result += c.toLowerCase(); prevdash = false; } else if (c == ' ' || c == ',' || c == '.' || c == '/' || c == '\\' || c == '-' || c == '_' || c == '=') { if (!prevdash && result.length > 0) { result += '-'; prevdash = true; } } else if (cc >= 128) { let prevlen = result.length; result += remapInternationalCharToAscii(c); if (prevlen != result.length) prevdash = false; } if (i == maxlen) break; } if (prevdash) return result.substring(0, result.length - 1); else return result; } A: No, no, no. 
You are all so very wrong. Except for the diacritics-fu stuff, you're getting there, but what about Asian characters (shame on Ruby developers for not considering their nihonjin brethren). Firefox and Safari both display non-ASCII characters in the URL, and frankly they look great. It is nice to support links like 'http://somewhere.com/news/read/お前たちはアホじゃないかい'. So here's some PHP code that'll do it, but I just wrote it and haven't stress tested it. <?php function slug($str) { $args = func_get_args(); array_filter($args); //remove blanks $slug = mb_strtolower(implode('-', $args)); $real_slug = ''; $hyphen = ''; foreach(SU::mb_str_split($slug) as $c) { if (strlen($c) > 1 && mb_strlen($c)===1) { $real_slug .= $hyphen . $c; $hyphen = ''; } else { switch($c) { case '&': $hyphen = $real_slug ? '-and-' : ''; break; case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': case 'g': case 'h': case 'i': case 'j': case 'k': case 'l': case 'm': case 'n': case 'o': case 'p': case 'q': case 'r': case 's': case 't': case 'u': case 'v': case 'w': case 'x': case 'y': case 'z': case 'A': case 'B': case 'C': case 'D': case 'E': case 'F': case 'G': case 'H': case 'I': case 'J': case 'K': case 'L': case 'M': case 'N': case 'O': case 'P': case 'Q': case 'R': case 'S': case 'T': case 'U': case 'V': case 'W': case 'X': case 'Y': case 'Z': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': $real_slug .= $hyphen . $c; $hyphen = ''; break; default: $hyphen = $hyphen ? $hyphen : ($real_slug ? '-' : ''); } } } return $real_slug; } Example: $str = "~!@#$%^&*()_+-=[]\{}|;':\",./<>?\n\r\t\x07\x00\x04 コリン ~!@#$%^&*()_+-=[]\{}|;':\",./<>?\n\r\t\x07\x00\x04 トーマス ~!@#$%^&*()_+-=[]\{}|;':\",./<>?\n\r\t\x07\x00\x04 アーノルド ~!@#$%^&*()_+-=[]\{}|;':\",./<>?\n\r\t\x07\x00\x04"; echo slug($str); Outputs: コリン-and-トーマス-and-アーノルド The '-and-' is because &'s get changed to '-and-'. 
A: Rewrite of Jeff's code to be more concise public static string RemapInternationalCharToAscii(char c) { var s = c.ToString().ToLowerInvariant(); var mappings = new Dictionary<string, string> { { "a", "àåáâäãåą" }, { "c", "çćčĉ" }, { "d", "đ" }, { "e", "èéêëę" }, { "g", "ğĝ" }, { "h", "ĥ" }, { "i", "ìíîïı" }, { "j", "ĵ" }, { "l", "ł" }, { "n", "ñń" }, { "o", "òóôõöøőð" }, { "r", "ř" }, { "s", "śşšŝ" }, { "ss", "ß" }, { "th", "Þ" }, { "u", "ùúûüŭů" }, { "y", "ýÿ" }, { "z", "żźž" } }; foreach(var mapping in mappings) { if (mapping.Value.Contains(s)) return mapping.Key; } return string.Empty; }
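A: For contrast with the character-walking ports above, here is how the same idea can be sketched in Python (a hypothetical helper, not from any of the answers): Unicode normalization stands in for the hand-rolled diacritics table, and two regular expressions handle separators and leftovers. Note that characters with no ASCII decomposition (ł, ß, ı, and the Japanese examples above) are simply dropped by this approach, unlike the explicit mappings in the other answers.

```python
import re
import unicodedata

def slugify(text, maxlen=80):
    """Rough equivalent of the slug algorithm above: strip diacritics,
    lowercase, collapse separator characters to single dashes."""
    # Decompose accented characters, then drop the combining marks
    # (and anything else with no ASCII representation).
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    text = text.lower()
    # Treat space , . / \ _ = - as separators, then drop everything
    # that is not a-z, 0-9, or a dash.
    text = re.sub(r"[ ,./\\_=-]+", "-", text)
    text = re.sub(r"[^a-z0-9-]", "", text)
    text = re.sub(r"-{2,}", "-", text)   # collapse any leftover runs
    return text.strip("-")[:maxlen].rstrip("-")

print(slugify("Crème brûlée!!! They see me codin'..."))
# creme-brulee-they-see-me-codin
```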
{ "language": "en", "url": "https://stackoverflow.com/questions/25259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "266" }
Q: Set and Oldset in sigprocmask() I haven't completely understood how to use sigprocmask(). In particular, how do the set and oldset arguments work, and what is their syntax? int sigprocmask(int how, const sigset_t *set, sigset_t *oldset); Please explain with an example: block, say, SIGUSR1 for a few seconds, then unblock and handle it. A: The idea is that you provide a mask in set, effectively a list of signals. The how argument says what you should do with the mask in set. You can either use SIG_BLOCK to block the signals in the set list, or SIG_UNBLOCK to unblock them. Neither of these changes the signals that aren't set in the list. SIG_SETMASK blocks the signals in the list, and unblocks the ones that aren't set in the list. For instance, assume that the old blocking list was {SIGSEGV, SIGTSTP} and you call sigprocmask with these arguments: sigset_t x; sigemptyset (&x); sigaddset(&x, SIGUSR1); sigprocmask(SIG_BLOCK, &x, NULL) The new blocking list will now be {SIGSEGV, SIGTSTP, SIGUSR1}. If you call sigprocmask with these arguments now: sigprocmask(SIG_UNBLOCK, &x, NULL) The new blocking list will go back to being {SIGSEGV, SIGTSTP}. If you call sigprocmask with these arguments now: sigprocmask(SIG_SETMASK, &x, NULL) The new blocking list will now be set to {SIGUSR1}. The oldset argument tells you what the previous blocking list was. If we have this declaration: sigset_t y; and we call the code in the previous examples like this: sigprocmask(SIG_BLOCK, &x, &y) now we have: y == {SIGSEGV, SIGTSTP} If we now do: sigprocmask(SIG_UNBLOCK, &x, &y) we'll get y == {SIGSEGV, SIGTSTP, SIGUSR1} and if we do: sigprocmask(SIG_SETMASK, &x, &y) we'll get this: y == {SIGSEGV, SIGTSTP} because this is the previous value of the blocking set.
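A: To see the blocking behaviour end-to-end, here is a small demonstration in Python (POSIX only), whose signal.pthread_sigmask is a thin wrapper over the same masking machinery: a SIGUSR1 raised while blocked sits in the pending set, and is only delivered to the handler once the old mask is restored.

```python
import os
import signal

delivered = []
signal.signal(signal.SIGUSR1, lambda signum, frame: delivered.append(signum))

# Equivalent of sigprocmask(SIG_BLOCK, {SIGUSR1}, &oldset)
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})

os.kill(os.getpid(), signal.SIGUSR1)          # raise SIGUSR1 against ourselves
assert signal.SIGUSR1 in signal.sigpending()  # blocked, so it is pending
assert delivered == []                        # handler has not run yet

# Equivalent of sigprocmask(SIG_SETMASK, &oldset, NULL): restore the old mask
signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
while not delivered:   # CPython runs Python-level handlers between bytecodes
    pass
print("pending SIGUSR1 delivered after unblock")
```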
{ "language": "en", "url": "https://stackoverflow.com/questions/25261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: What are the best practices when using SWIG with C#? Has anybody out there used the SWIG library with C#? If you have, what pitfalls did you find and what is the best way to use the library? I am thinking about using it as a wrapper for a program that was written in C and I want to wrap the header files where I can use them in my .NET application. Edit: Some clarification on target OS's. I plan on running the application on Linux and Windows, which is the reason I am looking into SWIG. P/Invoke is not an option. A: I did attempt to use SWIG to wrap a C++ project for use in .NET a few years ago. I didn't get very far, as it was a massive pain to produce the configuration that SWIG required. At the time I just wanted a solution, not to learn another language/API/etc. SWIG may be easier to use these days, I couldn't tell you. We ended up using Managed C++ to wrap the C++ project. It worked really well. If you're just invoking functions straight out of a DLL, I'd suggest not worrying about either of the above, and just using P/Invoke
I looked in my Swig dir (I already had it for other work), found the directory Examples/csharp/class, browsed the code, loaded the solution, grokked it, copied it, put in my code, it worked, my job was done. With that said, generated P/Invoke code isn't a solution for all needs. Depending on your project, it may be just as simple to write some simple API wrappers yourself or write managed C++ (Look up SlimDX for a superb example of this). For my needs, it was simple and easy - I had mystuff.dll, and now in addition I can ship mystuffnet.dll. I'll agree that the doc is difficult to get into. Edit: I noticed the OP only mentioned C. For that, you don't really need Swig, just use the usual C#/C DLLImport interop syntax. Swig becomes useful when you want to let C++ classes be invoked from C#.
{ "language": "en", "url": "https://stackoverflow.com/questions/25268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Remotely starting and stopping a service on a W2008 server I'm having an amazing amount of trouble starting and stopping a service on my remote server from my msbuild script. SC.EXE and the ServiceController MSBuild task don't provide switches for a username/password, so they won't authenticate; instead I'm using RemoteService.exe from www.intelliadmin.com -Authenticating with \xx.xx.xx.xxx -Authentication complete -Stopping service -Error: Access Denied The user account details I'm specifying are for a local admin on the server, so what's up?! I'm tearing my hair out! Update: OK, here's a bit more background. I have an XP machine in the office running the CI server. The build script connects a VPN to the datacentre, where I have a Server 2008 machine. Neither of them is on a domain.
{ "language": "en", "url": "https://stackoverflow.com/questions/25269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can you perform an AND search of keywords using FREETEXT() on SQL Server 2005? There is a request to make the SO search default to an AND style functionality over the current OR when multiple terms are used. The official response was: not as simple as it sounds; we use SQL Server 2005's FREETEXT() function, and I can't find a way to specify AND vs. OR -- can you? So, is there a way? There are a number of resources on it I can find, but I am not an expert. A: As far as I've seen, it is not possible to do AND when using FREETEXT() under SQL 2005 (nor 2008, afaik). A FREETEXT query ignores Boolean, proximity, and wildcard operators by design. However you could do this: WHERE FREETEXT('You gotta love MS-SQL') > 0 AND FREETEXT('You gotta love MySQL too...') > 0 Or that's what I think :) -- The idea is make it evaluate to Boolean, so you can use boolean operators. Don't know if this would give an error or not. I think it should work. But reference material is pointing to the fact that this is not possible by design. The use of CONTAINS() instead of FREETEXT() could help. A: I just started reading about freetext so bear with me. If what you are trying to do is allow searches for a tag, say VB, also find things tagged as VB6, Visual Basic, VisualBasic and VB.Net, wouldn't those values be set as synonyms in the DB's Thesaurus rather than query parameters? If that is indeed the case, this link on MSDN explains how to add items to the Thesaurus. A: OK, this change is in -- we now use CONTAINS() with implicit AND instead of FREETEXT() and its implicit OR.
{ "language": "en", "url": "https://stackoverflow.com/questions/25277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How would you implement a hashtable in language x? The point of this question is to collect a list of examples of hashtable implementations using arrays in different languages. It would also be nice if someone could throw in a pretty detailed overview of how they work, and what is happening with each example. Edit: Why not just use the built in hash functions in your specific language? Because we should know how hash tables work and be able to implement them. This may not seem like a super important topic, but knowing how one of the most used data structures works seems pretty important to me. If this is to become the wikipedia of programming, then these are some of the types of questions that I will come here for. I'm not looking for a CS book to be written here. I could go pull Intro to Algorithms off the shelf and read up on the chapter on hash tables and get that type of info. More specifically what I am looking for are code examples. Not only for me in particular, but also for others who would maybe one day be searching for similar info and stumble across this page. To be more specific: If you had to implement them, and could not use built-in functions, how would you do it? You don't need to put the code here. Put it in pastebin and just link it. 
A: A Java implementation in < 60 LoC import java.util.ArrayList; import java.util.List; import java.util.Random; public class HashTable { class KeyValuePair { Object key; Object value; public KeyValuePair(Object key, Object value) { this.key = key; this.value = value; } } private Object[] values; private int capacity; public HashTable(int capacity) { values = new Object[capacity]; this.capacity = capacity; } private int hash(Object key) { return Math.abs(key.hashCode()) % capacity; } public void add(Object key, Object value) throws IllegalArgumentException { if (key == null || value == null) throw new IllegalArgumentException("key or value is null"); int index = hash(key); List<KeyValuePair> list; if (values[index] == null) { list = new ArrayList<KeyValuePair>(); values[index] = list; } else { // collision list = (List<KeyValuePair>) values[index]; } list.add(new KeyValuePair(key, value)); } public Object get(Object key) { List<KeyValuePair> list = (List<KeyValuePair>) values[hash(key)]; if (list == null) { return null; } for (KeyValuePair kvp : list) { if (kvp.key.equals(key)) { return kvp.value; } } return null; } /** * Test */ public static void main(String[] args) { HashTable ht = new HashTable(100); for (int i = 1; i <= 1000; i++) { ht.add("key" + i, "value" + i); } Random random = new Random(); for (int i = 1; i <= 10; i++) { String key = "key" + random.nextInt(1000); System.out.println("ht.get(\"" + key + "\") = " + ht.get(key)); } } } A: A HashTable is a really simple concept: it is an array in which key and value pairs are placed into, (like an associative array) by the following scheme: A hash function hashes the key to a (hopefully) unused index into the array. the value is then placed into the array at that particular index. Data retrieval is easy, as the index into the array can be calculated via the hash function, thus look up is ~ O(1). 
A problem arises when a hash function maps 2 different keys to the same index...there are many ways of handling this which I will not detail here... Hash tables are a fundamental way of storing and retrieving data quickly, and are "built in" in nearly all programming language libraries. A: I was looking for a completely portable C hash table implementation and became interested in how to implement one myself. After searching around a bit I found: Julienne Walker's The Art of Hashing which has some great tutorials on hashing and hash tables. Implementing them is a bit more complex than I thought but it was a great read. A: A hash table a data structure that allows lookup of items in constant time. It works by hashing a value and converting that value to an offset in an array. The concept of a hash table is fairly easy to understand, but implementing is obviously harder. I'm not pasting the whole hash table here, but here are some snippets of a hash table I made in C a few weeks ago... One of the basics of creating a hash table is having a good hash function. 
I used the djb2 hash function in my hash table: int ComputeHash(char* key) { int hash = 5381; while (*key) hash = ((hash << 5) + hash) + *(key++); return hash % hashTable.totalBuckets; } Then comes the actual code itself for creating and managing the buckets in the table typedef struct HashTable{ HashTable* nextEntry; char* key; char* value; }HashBucket; typedef struct HashTableEntry{ int totalBuckets; // Total number of buckets allocated for the hash table HashTable** hashBucketArray; // Pointer to array of buckets }HashTableEntry; HashTableEntry hashTable; bool InitHashTable(int totalBuckets) { if(totalBuckets > 0) { hashTable.totalBuckets = totalBuckets; hashTable.hashBucketArray = (HashTable**)malloc(totalBuckets * sizeof(HashTable)); if(hashTable.hashBucketArray != NULL) { memset(hashTable.hashBucketArray, 0, sizeof(HashTable) * totalBuckets); return true; } } return false; } bool AddNode(char* key, char* value) { int offset = ComputeHash(key); if(hashTable.hashBucketArray[offset] == NULL) { hashTable.hashBucketArray[offset] = NewNode(key, value); if(hashTable.hashBucketArray[offset] != NULL) return true; } else { if(AppendLinkedNode(hashTable.hashBucketArray[offset], key, value) != NULL) return true; } return false; } HashTable* NewNode(char* key, char* value) { HashTable* tmpNode = (HashTable*)malloc(sizeof(HashTable)); if(tmpNode != NULL) { tmpNode->nextEntry = NULL; tmpNode->key = (char*)malloc(strlen(key)); tmpNode->value = (char*)malloc(strlen(value)); strcpy(tmpNode->key, key); strcpy(tmpNode->value, value); } return tmpNode; } AppendLinkedNode finds the last node in the linked list and appends a new node to it. 
The code would be used like this: if(InitHashTable(100) == false) return -1; AddNode("10", "TEN"); Finding a node is a simple as: HashTable* FindNode(char* key) { int offset = ComputeHash(key); HashTable* tmpNode = hashTable.hashBucketArray[offset]; while(tmpNode != NULL) { if(strcmp(tmpNode->key, key) == 0) return tmpNode; tmpNode = tmpNode->nextEntry; } return NULL; } And is used as follows: char* value = FindNode("10"); A: Here is my code for a Hash Table, implemented in Java. Suffers from a minor glitch- the key and value fields are not the same. Might edit that in the future. public class HashTable { private LinkedList[] hashArr=new LinkedList[128]; public static int HFunc(int key) { return key%128; } public boolean Put(Val V) { int hashval = HFunc(V.getKey()); LinkedNode ln = new LinkedNode(V,null); hashArr[hashval].Insert(ln); System.out.println("Inserted!"); return true; } public boolean Find(Val V) { int hashval = HFunc(V.getKey()); if (hashArr[hashval].getInitial()!=null && hashArr[hashval].search(V)==true) { System.out.println("Found!!"); return true; } else { System.out.println("Not Found!!"); return false; } } public boolean delete(Val v) { int hashval = HFunc(v.getKey()); if (hashArr[hashval].getInitial()!=null && hashArr[hashval].delete(v)==true) { System.out.println("Deleted!!"); return true; } else { System.out.println("Could not be found. 
How can it be deleted?"); return false; } } public HashTable() { for(int i=0; i<hashArr.length;i++) hashArr[i]=new LinkedList(); } } class Val { private int key; private int val; public int getKey() { return key; } public void setKey(int k) { this.key=k; } public int getVal() { return val; } public void setVal(int v) { this.val=v; } public Val(int key,int value) { this.key=key; this.val=value; } public boolean equals(Val v1) { if (v1.getVal()==this.val) { //System.out.println("hello im here"); return true; } else return false; } } class LinkedNode { private LinkedNode next; private Val obj; public LinkedNode(Val v,LinkedNode next) { this.obj=v; this.next=next; } public LinkedNode() { this.obj=null; this.next=null; } public Val getObj() { return this.obj; } public void setNext(LinkedNode ln) { this.next = ln; } public LinkedNode getNext() { return this.next; } public boolean equals(LinkedNode ln1, LinkedNode ln2) { if (ln1.getObj().equals(ln2.getObj())) { return true; } else return false; } } class LinkedList { private LinkedNode initial; public LinkedList() { this.initial=null; } public LinkedList(LinkedNode initial) { this.initial = initial; } public LinkedNode getInitial() { return this.initial; } public void Insert(LinkedNode ln) { LinkedNode temp = this.initial; this.initial = ln; ln.setNext(temp); } public boolean search(Val v) { if (this.initial==null) return false; else { LinkedNode temp = this.initial; while (temp!=null) { //System.out.println("encountered one!"); if (temp.getObj().equals(v)) return true; else temp=temp.getNext(); } return false; } } public boolean delete(Val v) { if (this.initial==null) return false; else { LinkedNode prev = this.initial; if (prev.getObj().equals(v)) { this.initial = null; return true; } else { LinkedNode temp = this.initial.getNext(); while (temp!=null) { if (temp.getObj().equals(v)) { prev.setNext(temp.getNext()); return true; } else { prev=temp; temp=temp.getNext(); } } return false; } } } } A: I think you need to be a 
little more specific. There are several variations on hashtables with regard to the following options * *Is the hashtable fixed-size or dynamic? *What type of hash function is used? *Are there any performance constraints when the hashtable is resized? The list can go on and on. Each of these constraints could lead to multiple implementations in any language. Personally, I would just use the built-in hashtable that is available in my language of choice. The only reason I would even consider implementing my own would be due to performance issues, and even then it is difficult to beat most existing implementations. A: For those who may use the code above, note the buffer overflow in the original allocations (strlen doesn't count the terminating '\0', so each buffer needs one extra byte): tmpNode->key = (char*)malloc(strlen(key)+1); //must have +1 for '\0' tmpNode->value = (char*)malloc(strlen(value)+1); //must have +1 strcpy(tmpNode->key, key); strcpy(tmpNode->value, value); And to complete the code: HashNode* AppendLinkedNode(HashNode* ptr, char* key, char* value) { //follow pointer till end while (ptr->nextEntry!=NULL) { if (strcmp(ptr->value,value)==0) return NULL; ptr=ptr->nextEntry; } ptr->nextEntry=newNode(key,value); return ptr->nextEntry; } A: Minimal implementation in F# as a function that builds a hash table from a given sequence of key-value pairs and returns a function that searches the hash table for the value associated with the given key: > let hashtbl xs = let spine = [|for _ in xs -> ResizeArray()|] let f k = spine.[abs(k.GetHashCode()) % spine.Length] Seq.iter (fun (k, v) -> (f k).Add (k, v)) xs fun k -> Seq.pick (fun (k', v) -> if k=k' then Some v else None) (f k);; val hashtbl : seq<'a * 'b> -> ('a -> 'b) when 'a : equality A: I went and read some of the Wikipedia page on hashing: http://en.wikipedia.org/wiki/Hash_table. It seems like a lot of work to put up code for a hashtable here, especially since most languages I use already have them built in. Why would you want implementations here? This stuff really belongs in a language's library. 
Please elaborate on what your expected solutions should include: * *hash function *variable bucket count *collision behavior Also state what the purpose of collecting them here is. Any serious implementation will easily be quite a mouthful, which will lead to very long answers (possibly a few pages long each). You might also be enticing people to copy code from a library...
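A: Since the question explicitly invites more languages, here is the same separate-chaining design as the Java and C answers above, sketched in Python without using the built-in dict for storage: a fixed bucket count, Python's own hash() as the hash function, and a per-bucket list to resolve collisions.

```python
class HashTable:
    """Minimal separate-chaining hash table: fixed bucket count, no resizing."""

    def __init__(self, buckets=128):
        self._buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        # Map the key's hash to a bucket index, like hash % totalBuckets above
        return hash(key) % len(self._buckets)

    def put(self, key, value):
        bucket = self._buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collision: chain in the same bucket

    def get(self, key, default=None):
        for k, v in self._buckets[self._index(key)]:
            if k == key:
                return v
        return default

ht = HashTable(8)                        # tiny table to force collisions
for i in range(100):
    ht.put("key%d" % i, i)
print(ht.get("key42"))                   # 42
print(ht.get("missing"))                 # None
```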
{ "language": "en", "url": "https://stackoverflow.com/questions/25282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Reserved Keyword in Enumeration in C# I would like to use as and is as members of an enumeration. I know that this is possible in VB.NET to write it like this: Public Enum Test [as] = 1 [is] = 2 End Enum How do I write the equivalent statement in C#? The following code does not compile: public enum Test { as = 1, is = 2 } A: You will need to prefix them with the @ symbol to use them. Here is the msdn page that explains it. A: Prefixing reserved words in C# is done with @. public enum Test { @as = 1, @is = 2 } A: It does seem like a bad idea though - like setting FIVE to equal 6. Why not just use a predetermined prefix so that te names are unique and future maintainers of your code understand what you are doing?
{ "language": "en", "url": "https://stackoverflow.com/questions/25297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: WCF - Domain Objects and IExtensibleDataObject Typical scenario. We use old-school XML Web Services internally for communicating between a server farm and several distributed and local clients. No third parties involved, only our applications used by ourselves and our customers. We're currently pondering moving from XML WS to a WCF/object-based model and have been experimenting with various approaches. One of them involves transferring the domain objects/aggregates directly over the wire, possibly invoking DataContract attributes on them. By using IExtensibleDataObject and a DataContract using the Order property on the DataMembers, we should be able to cope with simple property versioning issues (remember, we control all clients and can easily force-update them). I keep hearing that we should use dedicated, transfer-only Data Transfer Objects (DTOs) over the wire. Why? Is there still a reason to do so? We use the same domain model on the server side and client side, of course, prefilling collections, etc. only when deemed right and "necessary." Collection properties utilize the service locator principle and IoC to invoke either an NHibernate-based "service" to fetch data directly (on the server side), or a WCF "service" client on the client side to talk to the WCF server farm. So - why do we need to use DTOs? A: Having worked with both approaches (shared domain objects and DTOs), I'd say the big problem with shared domain objects is when you don't control all clients, but from my past experience I'd usually use DTOs unless development speed was of the essence. If there's any chance that you won't always be in control of the clients, then I'd definitely recommend DTOs, because as soon as you share your domain objects with someone else's client application you start tying your internals to someone else's dev cycle. 
I've also found DTOs useful when working in a versioned service environment, which allowed us to radically change the internals of our app but still accept calls to the old versions of our service interfaces. Finally, if you have a lot of client applications it might also be beneficial to use DTOs as you're then protected with an easily versionable service. A: In my experience DTOs are most useful for: * *Strictly defining what will be sent over the wire and having a type specifically devoted to that definition. *Isolating the rest of your application, client and server, from future changes. *Interoperability with non-.Net systems. DTOs certainly aren't a requirement, but they make it easier to design "safe" types. In your scenario these design features may not matter that much. I've used WCF with both strict DTOs and shared Domain Objects and in both scenarios it worked great. The only thing I noticed when sending Domain Objects over the wire was that I tended to send more data (and in unexpected ways) then I needed to. This was likely more due to my lack of experience with WCF than anything else; but it's something you should definitely be wary of should you choose to go that route.
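A: The decoupling both answers describe is language-agnostic. As an illustration only (Python, with invented Customer/CustomerDto names rather than anything from WCF), the DTO pins down the wire contract so the domain internals can change freely without breaking clients:

```python
from dataclasses import dataclass, asdict

class Customer:
    """Domain object: internals are free to change between releases."""
    def __init__(self, name, credit_limit, audit_log):
        self.name = name
        self.credit_limit = credit_limit
        self._audit_log = audit_log   # internal detail, must not go on the wire

@dataclass
class CustomerDto:
    """Wire contract: versioned independently of the domain model."""
    name: str
    credit_limit: float

def to_dto(customer):
    # Explicit mapping: only the fields in the contract cross the boundary
    return CustomerDto(name=customer.name, credit_limit=customer.credit_limit)

c = Customer("Acme", 5000.0, ["created"])
print(asdict(to_dto(c)))   # {'name': 'Acme', 'credit_limit': 5000.0}
```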
{ "language": "en", "url": "https://stackoverflow.com/questions/25323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's a good natural language library to use for paraphrasing? I'm looking for an existing library to summarize or paraphrase content (I'm aiming at blog posts) - any experience with existing natural language processing libraries? I'm open to a variety of languages, so I'm more interested in the abilities & accuracy. A: I think he wants to generate blog posts by automatically paraphrasing whatever was it the blogs this system is monitoring. This would be really interesting if you could combine 2 to 10 blog posts that are similar, but from different sources and then do a paraphrased "real" summary automatically (the size of 1 blog post). It could also be great for Homeworks. Unfortunately it's not that easy to do. The only way I could see is to be able to decompose every sentence into "meaning", and then randomly change the sentence structure and some words retaining the meaning. These sentences mean the same: * *I hate this guy, he is so dumb. *This guy is stupid, I hate him. *I despise this dumb guy. *He is dumb, I hate him. It would be nontrivial to write a program to transform one of these sentences to the others, and these are simple sentences, real sentences from blogs are much more complicated. A: There was some discussion of Grok. This is now supported as OpenCCG, and will be reimplemented in OpenNLP as well. You can find OpenCCG at http://openccg.sourceforge.net/. I would also suggest the Curran and Clark CCG parser available here: http://svn.ask.it.usyd.edu.au/trac/candc/wiki Basically, for paraphrase, what you're going to need to do is write up something that first parses sentences of blog posts, extracts the semantic meaning of these posts, and then searches through the space of vocab words which will compositionally create the same semantic meaning, and then pick one that doesn't match the current sentence. This will take a long time and it might not make a lot of sense. 
Don't forget that in order to do this, you're going to need near-perfect anaphora resolution and the ability to pick up discourse-level inferences. If you're just looking to make blog posts that don't have machine-identifiable duplicate content, you can always just use topic and focus transformations and WordNet synonyms. There have definitely been sites which have made money off of AdWords that have done this before.

A: You're getting into a really far-out, AI-type domain. I have done extensive work in text transformation into machine knowledge, mainly using Attempto Controlled English (see: http://attempto.ifi.uzh.ch/site/); it is a natural language (English) that is completely computer-processable into several different ontologies, such as OWL DL. Seems like that would be way overkill though... Is there a reason for not just taking the first few sentences of your blog post and then appending an ellipsis for your summary?

A: Thanks for those links. Looks like GROK is dead - but it may still work for my purposes. 2 more links:

* http://classifier4j.sourceforge.net/
* http://www.corporasoftware.com/products/summarize.aspx

The Attempto Controlled English is an interesting concept, as it's a completely reverse way of looking at the problem. Not really practical for what I am trying to do.
@mmattax As for the suggestion of taking a few sentences - I'm not trying to present a summary: otherwise that would be a nice judo solution. I'm looking to actually summarize the content to use for other evaluation purposes.
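Since the asker's real goal is to "actually summarize the content", it may help to see what the simplest possible extractive approach looks like, which is roughly the kind of thing a simple summarizer library like Classifier4J does. The sketch below is a naive frequency-based extractive summarizer written for illustration only: the `NaiveSummarizer` name, the regex sentence splitter, and the "skip tokens shorter than four letters" stop-word shortcut are all assumptions, not taken from any of the libraries above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

public class NaiveSummarizer {
    // Crude, dependency-free sentence splitter: break after ., ! or ?.
    static List<String> sentences(String text) {
        List<String> out = new ArrayList<>();
        for (String s : text.split("(?<=[.!?])\\s+")) {
            String t = s.trim();
            if (!t.isEmpty()) out.add(t);
        }
        return out;
    }

    // Word frequencies over the whole text; very short tokens are skipped
    // as a poor man's stop-word filter.
    static Map<String, Integer> wordFreq(String text) {
        Map<String, Integer> freq = new HashMap<>();
        for (String w : text.toLowerCase().split("[^a-z]+")) {
            if (w.length() > 3) freq.merge(w, 1, Integer::sum);
        }
        return freq;
    }

    // Keep the n sentences whose words are most frequent overall,
    // emitted in their original order.
    public static String summarize(String text, int n) {
        List<String> sents = sentences(text);
        Map<String, Integer> freq = wordFreq(text);
        int[] score = new int[sents.size()];
        Integer[] idx = new Integer[sents.size()];
        for (int i = 0; i < sents.size(); i++) {
            idx[i] = i;
            for (String w : sents.get(i).toLowerCase().split("[^a-z]+")) {
                score[i] += freq.getOrDefault(w, 0);
            }
        }
        Arrays.sort(idx, (a, b) -> score[b] - score[a]);
        TreeSet<Integer> keep = new TreeSet<>();
        for (int i = 0; i < Math.min(n, sents.size()); i++) keep.add(idx[i]);
        StringBuilder sb = new StringBuilder();
        for (int i : keep) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(sents.get(i));
        }
        return sb.toString();
    }
}
```

This only selects existing sentences; it does none of the paraphrasing discussed above, which is exactly why the answers point toward CCG parsing and WordNet for anything beyond extraction.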
{ "language": "en", "url": "https://stackoverflow.com/questions/25332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Is there any way to automate windows forms testing?
I am familiar with NUnit for unit testing of the business layer; however, I am now looking to automate testing of the WinForms GUI layer. I have seen WatiN and the WatiN recorder for automating tests on web applications by accessing the controls and automating them. However, I am struggling to find a WatiN equivalent for Windows Forms (written in C# or VB.NET), preferably one that is open source. Does one exist, or are all products based on recording mouse and keyboard presses?
Update: I have looked at this blog post on White and it seems the sort of thing I am looking for. The blog post raises some issues, but as White is only in version 0.6 these may be resolved. I'd be interested if others have used White or any others for comparison.

A: AutomatedQA's TestComplete is a good testing application to automate GUI testing. It supports more than just Windows Forms, so you can reuse it for other applications. It is not open source, and this is the best that I have found. I haven't seen an open source equivalent to WatiN. It does have a free trial, so you can decide if you like it or not. The main reason I went with it is that it really is cost-effective compared to other testing applications.

A: As a new alternative, I can give you FlaUI (https://github.com/Roemer/FlaUI). It is basically a complete re-write of White with more features and a clean code-base.

A: Check out http://www.codeplex.com/white and http://nunitforms.sourceforge.net/. We've used the White project with success. Same answer to a previous question.
Edit: The White project has moved, and is now located on GitHub as part of TestStack.
A: The list
Please check the updated list at: List of Automated Testing (TDD/BDD/ATDD/SBE) Tools and Frameworks for .NET: User Interface Testing. The frameworks listed there:

* Coded UI (discontinued)
* FlaUI
* NUnitForms (discontinued)
* Squish GUI Tester
* TestComplete
* TestStack.White
* WinAppDriver

What Microsoft recommends
Microsoft recommends using WinAppDriver:

"Windows Application Driver (WinAppDriver) is a service to support Selenium-like UI Test Automation on Windows Applications. This service supports testing Universal Windows Platform (UWP), Windows Forms (WinForms), Windows Presentation Foundation (WPF), and Classic Windows (Win32) apps on Windows 10 PCs."

Coded UI (Visual Studio ≤ 2019)
Previously, Coded UI, a Visual Studio built-in feature and part of UI Automation, was recommended for application UI testing (it's deprecated now):

"Coded UI Test for automated UI-driven functional testing is deprecated. Visual Studio 2019 is the last version where Coded UI Test will be available. We recommend using Selenium for testing web apps and Appium with WinAppDriver for testing desktop and UWP apps. Consider Xamarin.UITest for testing iOS and Android apps using the NUnit test framework."

"Automated tests that drive your application through its user interface (UI) are known as coded UI tests (CUITs). These tests include functional testing of the UI controls. They let you verify that the whole application, including its user interface, is functioning correctly. Coded UI Tests are particularly useful where there is validation or other logic in the user interface, for example in a web page. They are also frequently used to automate an existing manual test."

Read more at: https://learn.microsoft.com/en-us/visualstudio/test/use-ui-automation-to-test-your-code

A: WinAppDriver is a Selenium-like UI test automation service for testing Windows applications, including Windows Forms applications. It can be used with Appium, a test automation framework.
A: As far as I know, White is an abstraction layer over the top of Microsoft's UI Automation framework. I have written a similar layer that we use internally on our projects and it works great, so White would definitely be worth a look. Microsoft has released the source to UI Automation, so you should be able to debug right down the whole stack if necessary. The really cool thing is that with no licence cost, you can scale up and run as many machines as you like for execution. We run inside VSTS and link our results to requirements, but you can use C# Express and NUnit and get first-class tools and languages for little to no cost.

A: Here are some links from MSDN Magazine on automating testing code:

* Using UIAutomation, Bugslayer, March 2007
* Using PowerShell, Test Run, December 2007
* Tester, a utility for recording mouse clicks and keystrokes, then playing them back and checking program behaviour. Excellent for unmanaged code. Uses Windows handles, so may not work well for managed code. Bugslayer, March 2002.

A: You could check out the Microsoft UI Automation framework. This has been included in .NET since version 3.0. This is actually what the White framework uses anyway.

A: You may also use Winium (https://github.com/2gis/Winium), which works on multiple Windows platforms besides Windows 10 and is similar to Selenium, with extra features that support controlling the application remotely.
{ "language": "en", "url": "https://stackoverflow.com/questions/25343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: What would be the fastest way to remove Newlines from a String in C#?
I have a string that has some Environment.NewLine in it. I'd like to strip those from the string and instead replace the newline with something like a comma. What would be, in your opinion, the best way to do this using C#.NET 2.0?

A: Like this:

string s = "hello" + Environment.NewLine + "world";
s = s.Replace(Environment.NewLine, ",");

A: Why not:

string s = "foobar" + Environment.NewLine + "gork";
string v = s.Replace(Environment.NewLine, ",");
System.Console.WriteLine(v);

A: string sample = "abc" + Environment.NewLine + "def";
string replaced = sample.Replace(Environment.NewLine, ",");

A: Don't reinvent the wheel — just use: myString.Replace(Environment.NewLine, ",")

A: The best way is the builtin way: use string.Replace. Why do you need alternatives?
{ "language": "en", "url": "https://stackoverflow.com/questions/25349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Custom Attribute Binding in Silverlight
I've got two Silverlight controls in my project; both have a TeamId property. I would like to bind these together in XAML in the control hosting both user controls, similar to:

<agChat:UserTeams x:Name="oUserTeams" />
<agChat:OnlineUser x:Name="oOnlineUsers" TeamId="{Binding ElementName=oUserTeams, Path=TeamId}" />

In the first control, I'm implementing System.ComponentModel.INotifyPropertyChanged and raising the PropertyChanged event upon the TeamId property changing. In the second control, I've used the propdp snippet to identify TeamId as a dependency property:

// Using a DependencyProperty as the backing store for TeamId. This enables animation, styling, binding, etc...
public static readonly DependencyProperty TeamIdProperty =
    DependencyProperty.Register(
        "TeamId",
        typeof(string),
        typeof(OnlineUsers),
        new System.Windows.PropertyMetadata(new System.Windows.PropertyChangedCallback(TeamChanged)));

However, when the Silverlight controls first get created, I get the following exception from Silverlight:

Unhandled Error in Silverlight 2 Application
Invalid attribute value {Binding ElementName=oUserTeams, Path=TeamId} for property TeamId. [Line: 21 Position: 146]
   at System.Windows.Application.LoadComponent(Object component, Uri xamlUri)
   at agChat.Page.InitializeComponent()
   at agChat.Page..ctor()
   at agChat.App.Application_Startup(Object sender, StartupEventArgs e)
   at System.Windows.CoreInvokeHandler.InvokeEventHandler(Int32 typeIndex, Delegate handlerDelegate, Object sender, Object args)
   at MS.Internal.JoltHelper.FireEvent(IntPtr unmanagedObj, IntPtr unmanagedObjArgs, Int32 argsTypeIndex, String eventName)

Any ideas what I'm doing wrong? Obviously this could all be done in code-behind, but this seems like the correct approach.

A: That is the correct approach in WPF, but not in Silverlight. You cannot bind to elements using XAML in Silverlight.
This is the offending line:

TeamId="{Binding ElementName=oUserTeams, Path=TeamId}"

Specifically, the ElementName part. If you can, place the data object into Resources and declare it there; then you can do this:

<agChat:UserTeams x:Name="oUserTeams" DataContext="{StaticResource myDataObject}" />
<agChat:OnlineUser x:Name="oOnlineUsers" DataContext="{StaticResource myDataObject}" TeamId="{Binding TeamId}" />
{ "language": "en", "url": "https://stackoverflow.com/questions/25355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Flash designer/coder collaboration best practices
I've done several Flash projects working as the ActionScripter with a designer doing all the pretty things and animation. When starting out I found quite a lot of information about ActionScript coding and Flash design. Most of the information available seems to focus on one or the other. I didn't find any information about building Flash projects in a way that lets the coder do their thing AND gives the designer freedom as well. Hopefully more experienced people can share; these are some of the things I discovered after a few projects:

* Version control is a must (as always) but can be difficult to explain to designers
* No ActionScript in the Flash .fla files; they are binary, and as a coder you want to keep away from them as much as possible
* Model View Controller is the best way I've found to isolate visual design changes
* Try to build the views so that they use frame labels; this allows the designer to decide what actually happens

What are your experiences?

A: I've been doing Flash for 9 years and I still find this a difficult thing to get right. There is a balance of power between designers and developers, which will inevitably tip one way or the other. If you work for a developer-led studio, then you are lucky, as the designers will be instructed to make a design that fits your functionality. In Flex / MXML this is the only way to work. If, on the other hand, you work in a graphic design/creative/advertising studio, you will be instructed to build whatever the designer puts together in Photoshop, whether or not it is feasible to build within the time. The key to getting around this is communication and education. Designers and design-focussed managers may not know what is involved in creating a particular piece of functionality, and if you explain to them why a particular thing is hard to do, they might be persuaded to go and rethink their design.
On the other hand, they may well think you're just a whiner! It never feels good when you have to tell someone "sorry, I can't really do that" when you know that you could make it work, given a few late nights! As well as the things you and others have already noted, like using FlashDevelop and external AS classes, here are some other things I recommend:

* Start with a site map / wireframe that both the developers and designers agree to.
* Load all your text from XML into dynamic text fields, and make sure your buttons etc. are designed to expand to fit content.
* Make sure your designers have some idea how to correctly cut up graphics and lay them out in Flash. A developer shouldn't be messing about in Photoshop when you're up against a deadline.
* Make sure you get all your graphics assets well before the deadline - inevitably there'll be things they've missed and things that need changing.
* Be firm and don't let your design team try to sneak in extra features at the last minute.
* Let the designers use the timeline for character animation etc., but for simple tweens use an ActionScript tweening engine.

Hope these tips are some use!

A: The way I currently work is that I (the developer) build the functionality using a dummy FLA file, using only external class files. When the designer(s) finish up the layout, they send me a FLA with all the imported assets, and linked buttons and MovieClips. I then attach my document class to the new FLA and make sure all the objects match my code. Overall, it's a pretty simple transition. If an asset needs to be updated for whatever reason, the designers just send me the asset, and I update the FLA manually.
That .fla can be passed off to a designer, who can add whatever they want, as long as they keep the names and nesting order - essentially like skinning an app. A: On our team everyone uses TortoiseSVN and a Trac instance per project. Designers are using the standard Flash designer to edit .FLAs and developers are using FlashDevelop to manage ActionScript files and debug the project. The tool-chain works like this: * *Developers program the behavior of each window by hand-editing MXML files (it's not as hard as it sounds) and developing the corresponding .AS files at the same time. *Designers make graphics for skins and other UI elements that get (link) exported and store them in .FLAs alongside the code. *Developers [Import()] the resources in the .AS files. This way everything gets into source control and designers don't even look at a line of ActionScript. Off course I'm oversimplifying the process but I hope you'll get the idea.
{ "language": "en", "url": "https://stackoverflow.com/questions/25356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the best calendar pop-up to populate a web form?
I want to be able to make an HTTP call updating some select boxes after a date is selected. I would like to be in control of updating the textbox so I know when there has been a "true" change (in the event the same date was selected). Ideally, I would call a function to pop up the calendar and be able to evaluate the date before populating the text box... so I can do my validation before making a server call.

A: jQuery's datepicker is an extremely flexible tool. With the ability to attach handlers prior to opening or after date selection, themes, range selection and a variety of other incredibly useful options, I've found that it meets all my needs. The fact that I sit next to one of its maintainers here at work is also fairly useful...

A: I've been playing with the jQuery datePicker script - you should be able to do everything you need to with this.

A: YUI and ExtJS both have very nice-looking and flexible calendars.

A: If you ever end up considering a JavaScript library/toolkit, Dijit, a widget system which layers on top of Dojo, has a calendar (Dijit calendar test page). I found it relatively simple to implement. Disclaimer: I'm in the middle of a love-hate relationship with Dojo at the moment, as I am in the process of learning and using it better.

A: I don't like the MS ASP.NET AJAX, but their datepicker is superb. Otherwise, jQuery datepicker.

A: Check out the ASP.NET AJAX Calendar Extender or Steve Orr's drop-down Calendar control.

A: I'm using the JSCalendar from Dynarch for a project I'm currently working on. It is LGPL-licensed and really flexible (easy to customize to your needs). It has lots of features and looks good, too. http://www.dynarch.com/projects/calendar/
{ "language": "en", "url": "https://stackoverflow.com/questions/25367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I represent a very large integer in .NET?
Does .NET come with a class capable of representing extremely large integers, such as 100 factorial? If not, what are some good third-party libraries to accomplish this?

A: Here is a link to the documentation of BigInteger in Framework 4.0: http://msdn.microsoft.com/en-us/library/system.numerics.biginteger(VS.100).aspx

A: Mono has a public BigInteger implementation already: http://www.go-mono.com/docs/index.aspx?link=T:Mono.Math.BigInteger
You can just grab the Mono.Security assembly to use it; since it's a Mono class library it should be MIT-licensed too.

A: .NET 4 has a BigInteger class:

"Represents an arbitrarily large signed integer. The BigInteger type is an immutable type that represents an arbitrarily large integer whose value in theory has no upper or lower bounds. This type differs from the other integral types in the .NET Framework, which have a range indicated by their MinValue and MaxValue properties."

A: .NET has a BigInteger class, but it is internal, unfortunately. However, several places have their own. You can grab an implementation from IronPython, or the one from CodeProject, or from Visual J#. I have to say, I've not tried these myself, so I don't know which one is the best.
http://www.codeplex.com/IronPython
http://www.codeproject.com/KB/cs/biginteger.aspx
http://msdn.microsoft.com/en-us/magazine/cc163696.aspx

A: Microsoft.FSharp.Math.Types.BigInt can represent any integer.
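To make the "100 factorial" motivation concrete: with any arbitrary-precision type the computation is a trivial loop. The sketch below uses Java's java.math.BigInteger purely for illustration of the shape of such an API (the `Factorial` class name is made up); .NET 4's System.Numerics.BigInteger works the same way with `Multiply`. For reference, 100! is a 158-digit number ending in 24 zeros, far beyond any fixed-width integer.

```java
import java.math.BigInteger;

public class Factorial {
    // n! via arbitrary-precision multiplication; no overflow is possible.
    public static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger f = factorial(100);
        // 100! has 158 decimal digits; a long tops out at about 9.2e18 (20!).
        System.out.println("100! has " + f.toString().length() + " digits:");
        System.out.println(f);
    }
}
```

The immutable-value style (each `multiply` returns a new object) is the same trade-off the quoted .NET documentation describes for its BigInteger type.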
{ "language": "en", "url": "https://stackoverflow.com/questions/25375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What is a lambda and how to use lambda expressions?
I've read that lambda expressions are an incredibly powerful addition to C#, yet I find myself mystified by them. How can they improve my life or make my code better? Can anyone point to a good resource for learning such expressions? They seem cool as hell, but how do they relate to my day-to-day life as an ASP.NET developer?
Edit: Thanks for the examples, and thanks for the link to Eric White's articles. I'm still digesting those now. One quick question: are lambda expressions useful for anything other than querying? Every example I've seen has been a query construct.

A: Lambdas bring functional programming to C#. They are anonymous functions that can be passed as values to certain other functions. They are used most in LINQ. Here is a contrived example:

List<int> myInts = GetAll();
IEnumerable<int> evenNumbers = myInts.Where(x => x % 2 == 0);

Now when you foreach through evenNumbers, the lambda x => x % 2 == 0 is applied as a filter to myInts. Lambdas become really useful in increasing the readability of complicated algorithms that would have many nested IF conditionals and loops.

A: Here's a simple example of something cool you can do with lambdas:

List<int> myList = new List<int>{ 1, 2, 3, 4, 5, 6, 7, 8, 9 };
myList.RemoveAll(x => x > 5);
// myList now == {1,2,3,4,5}

The RemoveAll method takes a predicate (a delegate that takes arguments and returns a bool); any elements that match it get removed. Using a lambda expression makes it simpler than actually declaring the predicate.

A: "Are lambda expressions useful for anything other than querying?" Lambda expressions are nothing much other than a convenient way of writing a function 'in-line'. So they're useful any place you wanted a bit of code which can be called as though it's a separate function but which is actually written inside its caller.
(In addition to keeping related code in the same location in a file, this also allows you to play fun games with variable scoping - see 'closures' for a reference.) An example of a non-query-related use of a lambda might be a bit of code which does something asynchronously that you start with ThreadPool.QueueUserWorkItem. The important point is that you could also write this using anonymous delegates (which were a C# 2 introduction), or just a plain separate class member function. This http://blogs.msdn.com/jomo_fisher/archive/2005/09/13/464884.aspx is a superb step-by-step introduction into all this stuff, which might help you.
{ "language": "en", "url": "https://stackoverflow.com/questions/25376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Family Website CMS
I am looking for a CMS that would be incredibly user-friendly and would have the following features:

* really simple message board (no login required)
* family tree
* storytelling area
* photo section
* news section

Is there anything out there like this that is really easily configurable? I've already messed around with Mambo and Family Connects, but I didn't like either of those. In the past I've just programmed my own websites, for lack of easily implementable features. However, I'm assuming there's something out there just like this that I can't find. Thanks.
I don't want anyone to have to log in, for one. This is for a family website, and much of my family really don't know what a website is, let alone how to use one. I want a super simple website with huge buttons and not a whole lot of distractions. Family Connects is a good example of what I want, except the photo album is horrible. I want people to post messages without logging in or signing up, and I haven't seen that ability in Mambo sites I've looked at.

A: I can understand your stipulation that your users (family) shouldn't have to sign up - but without a sign-in, your site will be a free-for-all for spammers, hackers and other bored Internet denizens. That said, my suggestion is to use WordPress for a front end - register your family members yourself, and use a very basic template - or better yet, create one.

A: I have created a CMS for exactly what you are looking for. My family uses it all the time and the majority of them are not computer savvy. The only downside is that it requires a login, but like other people have said, there really isn't a way around that if you want your information to be private. Anyway, if you are still looking, try http://www.familycms.com/

A: I've been using http://www.myfamily.com/ and it fits all my needs.
It includes:

* Pictures (with option to order prints)
* Discussion
* Family Trees (free from ancestry.com)
* Videos
* Files
* Events

A: I've set up CMS Made Simple a couple of times now. It's all PHP and you can edit it to your heart's content. Give it a try.

A: CMS Made Simple seems to be dying, according to this study about content management systems found on MytestBox.com. But if it's just for a family website... maybe you can try other CMSs which any web hosting company provides (like Joomla or WordPress). These can be installed in several clicks (especially WordPress - you can build a good site in WordPress and it's very easy to maintain). For a family website I think WordPress is the best and is enough (lots of plugins and skins can be found for it on the web).

A: If you're going for a family website, you do have the option of removing the usernames/passwords/accounts by setting it up as an intranet site. Then you can browse it at home or from selected addresses.

A: I recommend geni.com. It's much better than Myfamily.com.
{ "language": "en", "url": "https://stackoverflow.com/questions/25379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: ASP.NET Forms Authorization
I'm working on a website built with pure HTML and CSS, and I need a way to restrict access to pages located within particular directories within the site. The solution I came up with was, of course, ASP.NET Forms Authentication. I created the default Visual Studio login form and set up the users, roles, and access restrictions with Visual Studio's wizard. The problem is, I can't log in to the website with the credentials that I have set. I'm using IIS 7.

A: I'd guess (since I don't have IIS7 handy ATM) that you'd need to turn off Anonymous Auth, and enable Forms Auth in the IIS7 sections.

A: At what point did you insert your login/password? Did you have a look at the tables that were created? Although your password must be encrypted, maybe it's worth just checking if your user was actually created.

A: "At what point did you insert your login/password? Did you have a look at the tables that were created? Although your password must be encrypted, maybe it's worth just checking if your user was actually created."
Forms Authentication does not require any form of user database. Steve, can you please paste in your forms authentication web.config section, and also any relevant code for the ASP.NET Login control you were using. There is not enough information to troubleshoot here yet :)

A: The web.config section is pretty useless as far as I can tell:

<authentication mode="Forms" />

I looked in IIS 7, and in the Authentication section it says: Anonymous Authentication = Enabled, ASP.NET Impersonation = Disabled, Basic Authentication = Disabled, Forms Authentication = Disabled. Also, I have made no changes to the code other than dragging a Login object onto the designer and changing the page it points at to index.html. Currently, the log in fails by displaying the log in failed text.
EDIT: Earlier, when I would try to navigate directly to a page that is restricted, I would receive a blue page saying that I had insufficient permissions. Now I can see the pages that are restricted without logging in, even though I have anonymous access denied.

A: Steve, I don't think the issue is with your IIS settings. Because forms authentication does not rely on IIS authentication, you should configure anonymous access for your application in IIS if you intend to use forms authentication in your ASP.NET application. Try this in your web.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <authentication mode="Forms" >
      <forms loginUrl="~/login.aspx" defaultUrl="~/">
        <credentials passwordFormat="Clear">
          <user name="YourUsername" password="superSecret" />
        </credentials>
      </forms>
    </authentication>
    <authorization>
      <deny users="?"/>
    </authorization>
  </system.web>
</configuration>

There are better ways to implement forms authentication than hardcoding a username and password into your web.config, but this should work for getting you started.
{ "language": "en", "url": "https://stackoverflow.com/questions/25396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Linq 2 SQL on shared host
I recently ran into an issue with LINQ on a shared host. The host is Shared Intellect and they support v3.5 of the framework. However, I am uncertain as to whether they have SP1 installed. My suspicion is that they do not.
I have a simple News table that has the following structure:

NewsID uniqueidentifier
Title nvarchar(250)
Introduction nvarchar(1000)
Article ntext
DateEntered datetime (default getdate())
IsPublic bit (default true)

My goal is to display the 3 most recent records from this table. I initially went the D&D method (I know, I know) and created a LINQ data source, and was unable to find a way to limit the results the way I desired, so I removed that and wrote the following:

var dc = new NewsDataContext();
var news = from a in dc.News
           where a.IsPublic == true
           orderby a.DateEntered descending
           select new { a.NewsID, a.Introduction };
lstNews.DataSource = news.Take(3);
lstNews.DataBind();

This worked perfectly on my local machine. However, when I uploaded everything to the shared host, I received the following error:

.Read_<>f__AnonymousType0`2 (System.Data.Linq.SqlClient.Implementation.ObjectMaterializer`1<System.Data.SqlClient.SqlDataReader>) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.MethodAccessException: .Read_<>f__AnonymousType0`2 (System.Data.Linq.SqlClient.Implementation.ObjectMaterializer`1<System.Data.SqlClient.SqlDataReader>)

I tried to search the error on Google, but met with no success. I then tried to modify my query in every way I could imagine, removing various combinations of the where/orderby parameters, as well as limiting my query to a single column and even removing the Take command.
My question therefore comes in 3 parts:

* Has anyone else encountered this and if so, is there a "quick" fix?
* Is there a way to use the datasource to limit the rows?
* Is there some way to determine what version of the framework the shared host is running, short of emailing them directly (which I have done and am awaiting an answer)?

A: System.MethodAccessException is thrown by the framework when it is missing an assembly, or when one of the references is of the wrong version. The first thing I would do is try uploading and referencing your code against the LINQ assemblies in your BIN, instead of the shared hosting provider's GAC.
{ "language": "en", "url": "https://stackoverflow.com/questions/25432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to create a pluginable Java program?
I want to create a Java program that can be extended with plugins. How can I do that, and where should I look? I have a set of interfaces that the plugin must implement, and it should be in a jar. The program should watch for new jars in a relative (to the program) folder and register them somehow.
Although I do like Eclipse RCP, I think it's too much for my simple needs. Same thing goes for Spring, but since I was going to look at it anyway, I might as well try it. But still, I'd prefer to find a way to create my own plugin "framework" that is as simple as possible.

A: Although I'll second the accepted solution, if basic plugin support is needed (which is the case most of the time), there is also the Java Plugin Framework (JPF) which, though lacking proper documentation, is a very neat plugin framework implementation. It's easily deployable and - when you get through the classloading idiosyncrasies - very easy to develop with. A comment to the above: be aware that plugin load paths below the plugin directory must be named after the full classpath, in addition to having their class files deployed in a normal package-named path. E.g.

plugins
`-com.my.package.plugins
  `-com
    `-my
      `-package
        `-plugins
          |- Class1.class
          `- Class2.class
One way you can go about it is this: File dir = new File("put path to classes you want to load here"); URL loadPath = dir.toURI().toURL(); URL[] classUrl = new URL[]{loadPath}; ClassLoader cl = new URLClassLoader(classUrl); Class loadedClass = cl.loadClass("classname"); // must be in package.class name format That has loaded the class, now you need to create an instance of it, assuming the interface name is MyModule: MyModule modInstance = (MyModule)loadedClass.newInstance(); A: At the home-grown classloader approach: While its definitely a good way to learn about classloaders there is something called "classloader hell", mostly known by people who wrestled with it when it comes to use in bigger projects. Conflicting classes are easy to introduce and hard to solve. And there is a good reason why eclipse made the move to OSGi years ago. So, if its more then a pet project, take a serious look into OSGi. Its worth looking at. You'll learn about classloaders PLUS an emerging technolgy standard. A: Look into OSGi. On one hand, OSGi provides all sorts of infrastructure for managing, starting, and doing lots of other things with modular software components. On the other hand, it could be too heavy-weight for your needs. Incidentally, Eclipse uses OSGi to manage its plugins. A: I recommend that you take a close look at the Java Service Provider (SPI) API. It provides a simple system for finding all of the classes in all Jars on the classpath that expose themselves as implementing a particular service. I've used it in the past with plugin systems with great success. A: Have you considered building on top of Eclipse's Rich Client Platform, and then exposing the Eclipse extension framework? Also, depending on your needs, the Spring Framework might help with that and other things you might want to do: http://www.springframework.org/
{ "language": "en", "url": "https://stackoverflow.com/questions/25449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Abstraction away from CSS Many frameworks seek to abstract away from HTML (custom tags, JSF's component system) in an effort to make dealing with that particular kettle of fish easier. Is there anything you folks have used that applies a similar concept to CSS? Something that does a bunch of cross-browser magic for you, supports things like variables (why do I have to type #3c5c8d every time I want that colour), supports calculated fields (which are 'compiled' into CSS and JS), etc.

Alternatively, am I even thinking about this correctly? Am I trying to push a very square block through a very round hole?

A: You can always use a template engine to add variables and calculated fields to your CSS files.

A: This elaborates on my previous answer. When I first started using CSS I also thought it was a pain that it didn't support variables, expressions, etc. But as I started to use it more and more, I developed a different style to overcome these issues.

For example, instead of this:

    a { color: red }
    .entry { color: red }
    h1 { color: red }

You can do:

    a, .entry, h1 { color: red }

You can keep the color declared in one spot by doing this. Once you use CSS enough you should be able to overcome most browser inconsistencies easily. If you find that you need to use a CSS hack, there is probably a better way to do it.

A: Sorry to say that, guys, but all of you missed the point. The word abstraction is the key. Say you and Sally are making a website. You are styling forms while she makes the corners round. Both you and she have defined a handful of selectors. What if, unknowingly, you picked class names that clash with Sally's? You see, you can't "hide" (abstract out) the details when you work in CSS. That's why you can't fix a bug in IE and then create a self-contained solution that others can use as-is, much like you call procedures in a programming language caring only about pre- and postconditions and not thinking of how it works on the inside.
You just think of what you want to accomplish. This is the biggest problem with the web: it completely lacks abstraction mechanisms! Most of you will exclaim, "It's unnecessary; you stop smoking crack!" You will instead do the job of, say, fixing layout bugs or making round corners or debating on the "best" markup for this or that case over and over again. You will find a site that explains the solution, then copy-paste the answer, then adapt it to your specific case without even thinking about what the hell you are doing! Yes, that's what you will do. End of the rant.

A: Then comes the multiple browser issue. There is this that helps remove some inconsistencies from IE. You can also use jQuery to add some selectors via JavaScript. I agree with Dan, learn it and it's not so much of a problem, even fun.

A: See, this is the problem with SO - every answer so far has made a valid point and should be considered the final answer. Let me try to sum up:

* CSS is good! To expand further, there is a learning curve but once you learn it many things will be much easier.
* (Some) Browser inconsistencies are solvable generically.
* (Some of your) Variable and calculated field functionality can be taken care of through whatever templating engine you use.

I think a combination of all these certainly solves a large sum of problems (although to be fair deeply learning CSS is not an option for everyone; some people just don't use it enough to justify the time). There are some problems none of the above points cover (certain types of calculated fields would require writing a JS library for, me thinks) but it's certainly a good start.

A: What I found works best is to really learn CSS. I mean really learn CSS. It can be a confusing language to learn, but if you read enough about it and practice, eventually you'll learn the best way to do things. The key is to do it enough that it comes naturally.
CSS can be very elegant if you know what you want to do before you start and you have enough experience to do it. Granted, it is also a major PITA to do sometimes, but even cross-browser issues aren't so bad if you really practice at it and learn what works and what doesn't, and how to get around problems. All it takes is practice, and in time you can become good at it.

A: For variable support, I have used PHP with CSS headers to great effect for that. I think you can do it in any language. Here is a PHP sample:

    <?
    header('content-type:text/css');
    header("Expires: ".gmdate("D, d M Y H:i:s", (time()+900))." GMT");
    $someColorVar = "#cc0000";
    ?>
    BODY {
        background-color: <?= $someColorVar ?>;
    }

A: "Solutions to problems seem to often involve jiggering numbers around like some chef trying to work out exactly how much nutmeg to put in his soon-to-be-famous rice pudding."

I only get this when trying to make stuff work in IE. If you learn CSS to the point where you can code most things without having to look up the reference (if you're still looking up the reference regularly you don't really know it and can't claim to complain, I think), and then develop for Firefox/Safari, it's a pretty nice place to be in. Leave the pain and suffering of IE compatibility to the end, after it works in FF/Safari, so your mind will attribute the blame to IE, where it damn well belongs, rather than to CSS in general.

A: For CSS frameworks, you could consider YUI Grids. It makes basic layout a lot quicker and simpler, although used in its raw form it does compromise on semantics.

A: CSS variables are coming (relatively) soon, but I agree they are long overdue. In the meantime, it is possible to use a CSS templating engine such as Sass, or even the dynamic web language of your choice, to generate your stylesheets programmatically.
A: The key to a real understanding of CSS (and the browser headaches) is a solid understanding of the box model used by the CSS standards, and the incorrect model used by some browsers. Once you have that down and start learning selectors, you will get away from browser-specific properties and CSS will become something you look forward to.

A: Also check out BlueprintCSS, a layout framework in CSS. It doesn't solve all your problems, but many, and you don't have to write the CSS yourself.

A: I believe the common errors beginners have with CSS are to do with specificity. If you're styling the a tag, are you sure you really want to be styling every single one in the document, or a certain "class" of a tags? I usually start out being very specific with my CSS selectors and generalize them when I see fit. Here's a humorous, but also informational, article on the subject: Specificity Wars

A: If by some chance you happen to be using Ruby, there's Sass. It supports hierarchical selectors (using indentation to establish hierarchies), among other things, which makes life easier to an extent from a syntactical perspective (you repeat yourself a lot less). I am certainly with you, though. While I would consider myself a small-time CSS expert, I think it would be nice if there were tools for CSS like there are for JavaScript (Prototype, jQuery, etc.). You tell the tool what you want, and it handles the browser inconsistencies behind the scenes. That would be ideal, methinks.

A: CSS takes a bit of time to learn, but the thing I initially found most discouraging was the fact that so many hacks were needed to get all browsers to behave the same way. Learning a system which doesn't adhere to logic seems dumb... but I've clung to the vague belief that there is logic behind each browser's idiosyncrasy, in the form of the W3 spec. It seems that the new generation of browsers is slowly coming into line - but IE6 still makes my life hell on a daily basis.
Maybe creating an abstraction layer between compliant/valid CSS code and the browsers' shoddy implementations wouldn't be a bad thing. But if such a thing was created - would it need to be powered by JS (or jQuery)? (And would that create an unreasonable burden, in terms of processing cost?)

I've found it useful to 'level the ground' when scripting with CSS. There are probably loads of different flavours of reset script out there - but using YUI resets has helped me to reduce the number of quirks I'd otherwise encounter - and YUI grids make life a little easier sometimes.

A: @SCdF: I think your summary here is fair. But the argument that some people don't have the time to learn CSS is bogus - just think about it for a second. Substitute a technology that you've mastered and you'll see why: I. Hate. Java. Is there something out there that will just write it for me? Not everyone has the time to master Java.

CSS is certainly an imperfect technology - I have high hopes that 5 years from now we won't be dealing with browser incompatibilities any more (we're almost there), and that we'll have better author-side tools (I've written a Visual Studio macro for my own use that provides the sort of variables and calculations that you describe, so it's not impossible) - but to insist that you should be able to use this technology effectively without really understanding it just isn't reasonable.

A: You are thinking about this correctly; though, you're probably still going to need to understand the different browser implementations of CSS. This is just understanding the environment your application lives in.

To clarify: this isn't about understanding CSS. If you know the language well, you've still got to handle the redundancy, duplication and lack of control structures in the language. I've been writing CSS solidly for more than 10 years and I've come to the conclusion that while the language is powerful and effective, implementing CSS sucks.
So I use an abstraction layer like Sass or Less or xCSS to interface to the language. These tools use a syntax similar to CSS, so you're solving the problem in the problem's domain. Using something like PHP to write CSS works but is not the best approach. By hiding the problems in the language through an abstraction layer, you can deliver a better product that will maintain its integrity throughout the full life cycle of your project. Writing CSS by hand accelerates software rot unless you're providing solid documentation, which most CSS coders aren't. If you're writing a well-documented CSS framework, you probably wouldn't write it by hand anyway. It's just not efficient.

Another problem with CSS is due to its lack of support for nesting block declarations. This encourages coders to build a flat, global set of classes and handle the name collisions with a naming convention. We all know globals are evil, but why do we write CSS in such a way? Wouldn't it be better to give your classes a context instead of exposing them to the whole document model? And your naming convention may work, but it's just another task you must master to get the language written.

I encourage those of you who pride yourselves on writing good CSS to start applying some of the best practices from programming to your markup. Using an abstraction layer doesn't mean you lack the skill to write good CSS; it means you've limited your exposure to the weaknesses of the language.

A: You don't need an abstraction away from CSS - you need to realize that CSS itself is an abstraction. CSS isn't about putting pixels just so on the screen. Instead, it's about writing a system of rules that help the browser make those decisions for you. This is necessary, because at the time you write CSS, you don't know the content the browser will be applying it to; neither do you know the environment where the browser will be doing it. Mastering this takes time. You can't pick up CSS in a weekend and be good to go.
It's a bit deceiving, because the language has such a low barrier of entry, but the waters run deep. Here are just a few of the topics you should seek to master to be proficient in CSS:

* The Cascade and Inheritance
* The Box Model
* Layout methods, including floats and the new flexbox
* Positioning
* Current best practices such as SMACSS or BEM to keep your styles modular and easy to maintain

You don't need to know this all up front, but you should continue pushing forward. Just as with other languages and programming in general, you need to continually seek to learn more and master the craft. CSS is a fundamental part of web development, and more developers need to treat it with the same respect they afford other languages.
{ "language": "en", "url": "https://stackoverflow.com/questions/25450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What is the most efficient way to populate a time (or time range)? While plenty of solutions exist for entering dates (such as calendars, drop-down menus, etc.), it doesn't seem like there are too many "standard" ways to ask for a time (or time range). I've personally tried drop-down menus for the hour, minute, and second fields (and sometimes an "AM/PM" field, as well). I've also tried several clock-like input devices, most of which are too hard to use for the typical end-user. I've even tried "pop-out" time selection menus (which allow you to, for example, hover over the hour "10" to receive a sub-menu that contains ":00", ":15", ":30", and ":45") -- but none of these methods seem natural.

So far, the best (and most universal) method I have found is just using simple text fields and forcing a user to manually populate the hour, minute, and second. Alternatively, I've had good experiences creating something similar to Outlook's "Day View" which allows you to drag and drop an event to set the start and end times.

Is there a "best way" to ask for this information? Is anybody using some type of time input widget that's really intuitive and easy to use? Or is there at least a way that's more efficient than using plain text boxes?

A: There is quite a useful time entry tool for jQuery. It provides a 'spinner' type approach, in addition to a standard text field. It also supports the use of the mouse scroll-wheel for adjustment (as well as the traditional 'just type it in' approach) and can be configured to restrict to n-minute steps too if you like. It's pretty customisable and supports localisation and a variety of other settings; I've used it successfully in a couple of projects/demo sites.

A: I am a huge fan of plain language input (there was a topic on it the other day). I like the way 37signals' Backpack calendar lets you type things in (08/12 3pm Meeting with tom).
I also like the way they handle times with their reminder system (they give you options like later today, tomorrow morning).

A: I find Google Calendar's approach to be the best. Use a text box, but use JavaScript to make it sort of a drop-down for picking your time. A good demo can be found for a jQuery implementation here. I haven't implemented this on my site yet so I'm not 100% sure, but I think you also need code from this jQuery plugin here: http://www.texotela.co.uk/code/jquery/timepicker/

Edit: The first link I posted does not require the second link's code. It is simply based off of it. To get the actual JavaScript file from the example, you can view the source of the page to find where the file is, or you can go to the URL directly: http://labs.perifer.se/timedatepicker/jquery.timePicker.js
{ "language": "en", "url": "https://stackoverflow.com/questions/25455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How costly is .NET reflection? I constantly hear how bad reflection is to use. While I generally avoid reflection and rarely find situations where it is impossible to solve my problem without it, I was wondering... For those who have used reflection in applications, have you measured performance hits and, is it really so bad?

A: It's bad enough that you have to be worried even about reflection done internally by the .NET libraries for performance-critical code.

The following example is obsolete - true at the time (2008), but long ago fixed in more recent CLR versions. Reflection in general is still a somewhat costly thing, though!

Case in point: you should never use a member declared as "Object" in a lock (C#) / SyncLock (VB.NET) statement in high-performance code. Why? Because the CLR can't lock on a value type, which means that it has to do a run-time reflection type check to see whether or not your Object is actually a value type instead of a reference type.

A: Reflection performance will depend on the implementation (repetitive calls should be cached, e.g. entity.GetType().GetProperty("PropName")). Since most of the reflection I see on a day-to-day basis is used to populate entities from data readers or other repository-type structures, I decided to benchmark performance specifically on reflection when it is used to get or set an object's properties.

I devised a test which I think is fair, since it caches all the repeating calls and only times the actual SetValue or GetValue call. All the source code for the performance test is in bitbucket at: https://bitbucket.org/grenade/accessortest. Scrutiny is welcome and encouraged.

The conclusion I have come to is that it isn't practical, and doesn't provide noticeable performance improvements, to remove reflection in a data access layer that is returning less than 100,000 rows at a time when the reflection implementation is done well.
The graph above demonstrates the output of my little benchmark and shows that mechanisms that outperform reflection only do so noticeably after the 100,000-cycle mark. Most DALs only return several hundred or perhaps thousands of rows at a time, and at these levels reflection performs just fine.

A: As with all things in programming, you have to balance performance cost with any benefit gained. Reflection is an invaluable tool when used with care. I created an O/R mapping library in C# which used reflection to do the bindings. This worked fantastically well. Most of the reflection code was only executed once, so any performance hit was quite small, but the benefits were great.

If I were writing a new fandangled sorting algorithm, I would probably not use reflection, since it would probably scale poorly.

I appreciate that I haven't exactly answered your question here. My point is that it doesn't really matter. Use reflection where appropriate. It's just another language feature that you need to learn how and when to use.

A: Reflection can have a noticeable impact on performance if you use it for frequent object creation. I've developed an application based on Composite UI Application Block which relies on reflection heavily. There was a noticeable performance degradation related to object creation via reflection. However, in most cases there are no problems with reflection usage. If your only need is to inspect some assembly, I would recommend Mono.Cecil, which is very lightweight and fast.

A: As with everything, it's all about assessing the situation. In DotNetNuke there's a fairly core component called FillObject that uses reflection to populate objects from datarows. This is a fairly common scenario and there's an article on MSDN, Using Reflection to Bind Business Objects to ASP.NET Form Controls, that covers the performance issues.
Performance aside, one thing I don't like about using reflection in that particular scenario is that it tends to reduce the ability to understand the code at a quick glance, which for me doesn't seem worth the effort when you consider you also lose compile-time safety, as opposed to strongly typed datasets or something like LINQ to SQL.

A: Reflection is costly because of the many checks the runtime must make whenever you make a request for a method that matches a list of parameters. Somewhere deep inside, code exists that loops over all methods for a type, verifies its visibility, checks the return type and also checks the type of each and every parameter. All of this stuff costs time.

When you execute that method, internally there's some code that does stuff like checking you passed a compatible list of parameters before executing the actual target method.

If possible, it is always recommended that one caches the method handle if one is going to continually reuse it in the future. Like all good programming tips, it often makes sense to avoid repeating oneself. In this case it would be wasteful to continually look up the method with certain parameters and then execute it each and every time. Poke around the source and take a look at what's being done.

A: Reflection does not drastically slow the performance of your app. You may be able to do certain things quicker by not using reflection, but if reflection is the easiest way to achieve some functionality, then use it. You can always refactor your code away from reflection if it becomes a perf problem.

A: In his talk The Performance of Everyday Things, Jeff Richter shows that calling a method by reflection is about 1000 times slower than calling it normally. Jeff's tip: if you need to call the method multiple times, use reflection once to find it, then assign it to a delegate, and then call the delegate.

A: It is. But that depends on what you're trying to do.
I use reflection to dynamically load assemblies (plugins) and its performance "penalty" is not a problem, since the operation is something I do during startup of the application. However, if you're reflecting inside a series of nested loops with reflection calls on each, I'd say you should revisit your code :)

For "a couple of times" operations, reflection is perfectly acceptable and you won't notice any delay or problem with it. It's a very powerful mechanism and it is even used by .NET, so I don't see why you shouldn't give it a try.

A: Not massively. I've never had an issue with it in desktop development unless, as Martin states, you're using it in a silly location. I've heard a lot of people have utterly irrational fears about its performance in desktop development.

In the Compact Framework (which I'm usually in), though, it's pretty much anathema and should be avoided like the plague in most cases. I can still get away with using it infrequently, but I have to be really careful with its application, which is way less fun. :(

A: If you're not in a loop, don't worry about it.

A: My most pertinent experience was writing code to compare any two data entities of the same type in a large object model property-wise. Got it working, tried it, ran like a dog, obviously. I was despondent, then overnight realised that without changing the logic, I could use the same algorithm to auto-generate methods for doing the comparison but statically accessing the properties. It took no time at all to adapt the code for this purpose, and I had the ability to do deep property-wise comparison of entities with static code that could be updated at the click of a button whenever the object model changed.

My point being: in conversations with colleagues since, I have several times pointed out that their use of reflection could be to auto-generate code to compile rather than perform runtime operations, and this is often worth considering.

A: I think you will find that the answer is, it depends.
It's not a big deal if you want to put it in your task-list application. It is a big deal if you want to put it in Facebook's persistence library.
{ "language": "en", "url": "https://stackoverflow.com/questions/25458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "227" }
Q: Asynchronous Stored Procedure Calls Is it possible to call a stored procedure from another stored procedure asynchronously?

Edit: Specifically I'm working with a DB2 database.

A: Executive summary: yes, if your database has a message queue service. You can push a message onto a queue and the queue processor will consume it asynchronously.

* Oracle: queues
* SQL Server: Service Broker
* DB2: event broker

For "pure" stored procedure languages (PL/SQL or T-SQL) the answer is no, since it works against the fundamental transaction model most databases have. However, if your database has a queuing mechanism, you can use that to get the same result.

A: With MS SQL Server 2005, try the Service Broker and/or CLR stored procedures. I don't think there's anything built directly into T-SQL.

A: It sounds like you need to put some scheduled jobs in place with cron (or the Windows equivalent). You could use the initial stored proc call to set some kind of flag in the DB, which is then checked periodically by a cron job. If you need to have a specific delay before the 2nd job executes, you should be able to do that by having the task scheduled by the cron job.
{ "language": "en", "url": "https://stackoverflow.com/questions/25460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: interfacing with stdbool.h C++ In a project I am interfacing between C++ and a C library that uses stdbool.h defined as such:

    #ifndef _STDBOOL_H
    #define _STDBOOL_H

    /* C99 Boolean types for compilers without C99 support */
    /* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
    #if !defined(__cplusplus)

    #if !defined(__GNUC__)
    /* _Bool builtin type is included in GCC */
    typedef enum { _Bool_must_promote_to_int = -1, false = 0, true = 1 } _Bool;
    #endif

    #define bool _Bool
    #define true 1
    #define false 0
    #define __bool_true_false_are_defined 1

    #endif

    #endif

Some structures have bool members. So if I have one of these structures defined as a local variable within a C++ function and pass it to a C function, the sizes are inconsistent between C++ and C, as bool is one byte in C++ and 4 in C. Does anyone have any advice on how to overcome this without resorting to my current solution, which is:

    //#define bool _Bool
    #define bool unsigned char

Which is against the C99 standard for stdbool.h.

A: Size is not the only thing that will be inconsistent here. In C++ bool is a keyword, and C++ guarantees that a bool can hold a value of either 1 or 0 and nothing else. C doesn't give you this guarantee.

That said, if interoperability between C and C++ is important you can emulate C's custom-made boolean by defining an identical one for C++ and using that instead of the builtin bool. That will be a tradeoff between a buggy boolean and identical behaviour between the C boolean and the C++ boolean.

A: I found the answer to my own question by finding a more compatible implementation of stdbool.h that is compliant with the C99 standard.
    #ifndef _STDBOOL_H
    #define _STDBOOL_H

    #include <stdint.h>

    /* C99 Boolean types for compilers without C99 support */
    /* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
    #if !defined(__cplusplus)

    #if !defined(__GNUC__)
    /* _Bool builtin type is included in GCC */
    /* ISO C Standard: 5.2.5
       An object declared as type _Bool is large enough
       to store the values 0 and 1. */
    /* We choose 8 bit to match C++ */
    /* It must also promote to integer */
    typedef int8_t _Bool;
    #endif

    /* ISO C Standard: 7.16 Boolean type */
    #define bool _Bool
    #define true 1
    #define false 0
    #define __bool_true_false_are_defined 1

    #endif

    #endif

This is taken from the Ada Class Library project.

A: Logically, you are not able to share source code between C and C++ with conflicting declarations for bool and have them link to each other. The only way you can share code and link is via an intermediary data structure. Unfortunately, from what I understand, you can't modify the code that defines the interface between your C++ program and the C library. If you could, I'd suggest using something like:

    union boolean {
        bool value_cpp;
        int value_c;
    }; // padding may be necessary depending on endianness

The effect of which will be to make the datatype the same width in both languages; conversion to the native data type will need to be performed at both ends. Swap the use of bool for boolean in the library function definition, fiddle the code in the library to convert, and you're done.

So, what you're going to have to do instead is create a shim between the C++ program and the C library. You have:

    extern "C" bool library_func_1(int i, char c, bool b);

And you need to create:

    bool library_func_1_cpp(int i, char c, bool b)
    {
        int result = library_func_1(i, c, static_cast<int>(b));
        return (result == true);
    }

And now call library_func_1_cpp instead.
{ "language": "en", "url": "https://stackoverflow.com/questions/25461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do I put unicode characters in my Antlr grammar? I'm trying to build a grammar with the following:

    NUMERIC: INTEGER | FLOAT | INFINITY | PI
    ...
    INFINITY: '∞'
    PI: 'π'

But Antlr refuses to load the grammar.

A: Use the Java escape sequence representing the Unicode character:

* 'π' = '\u03C0'
* '∞' = '\u221E'

That will work up to '\uFFFF'; a Java \u escape only takes four hex digits, so code points above U+FFFF can't be written as a single escape.
{ "language": "en", "url": "https://stackoverflow.com/questions/25475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Where do I get the Antlr Ant task? I'm trying to call an Antlr task in my Ant build.xml as follows:

    <path id="classpath.build">
        <fileset dir="${dir.lib.build}" includes="**/*.jar" />
    </path>
    ...
    <target name="generate-lexer" depends="init">
        <antlr target="${file.antlr.lexer}">
            <classpath refid="classpath.build"/>
        </antlr>
    </target>

But Ant can't find the task definition. I've put all of the following in that dir.lib.build:

* antlr-3.1.jar
* antlr-2.7.7.jar
* antlr-runtime-3.1.jar
* stringtemplate-3.2.jar

But none of those seems to have the task definition. (I've also tried putting those jars in my Ant classpath; same problem.)

A: The current Antlr-task jar is available at http://www.antlr.org/share/1169924912745/antlr3-task.zip It can be found on the antlr.org website under the "File Sharing" heading.

A: You should use the antlrall.jar jar. You can go ahead and just drop it into your Ant installation but that does mean that it will only work for that one install. We check the jar in and use taskdef to load the jar file so that it doesn't become another step for developers when they start on the team or move to a new computer.

* Antlr: http://ant.apache.org/manual/Tasks/antlr.html
* Using taskdef: http://www.onjava.com/pub/a/onjava/2004/06/02/anttask.html

A: I just got this working for myself. Took me an hour. ugh. anyway,

Step 1: download ant-antlr3 task from http://www.antlr.org/share/1169924912745/antlr3-task.zip

Step 2: copy to where ant can see it. My mac:

    sudo cp /usr/local/lib/ant-antlr3.jar /usr/share/ant/lib/

my linux box:

    sudo cp /tmp/ant-antlr3.jar /usr/local/apache-ant-1.8.1/lib/

Step 3: make sure antlr2, antlr3, ST are in classpath.
All in one is here: http://antlr.org/download/antlr-3.3-complete.jar

Step 4: use in build.xml

    <path id="classpath">
        <pathelement location="${antlr3.jar}"/>
        <pathelement location="${ant-antlr3.jar}"/>
    </path>

    <target name="antlr" depends="init">
        <antlr:ant-antlr3 xmlns:antlr="antlib:org/apache/tools/ant/antlr"
                          target="src/T.g" outputdirectory="build">
            <classpath refid="classpath"/>
        </antlr:ant-antlr3>
    </target>

Just added a faq entry: http://www.antlr.org/wiki/pages/viewpage.action?pageId=24805671

A: The most basic way to run Antlr is to execute the Antlr JAR:

    <project default="antlr">
        <target name="antlr">
            <java jar="antlr-4.1-complete.jar" fork="true">
                <arg value="grammar.g4"/>
            </java>
        </target>
    </project>

This is a bit slower, because it forks the JVM and it runs Antlr even if the grammar did not change. But it works in the same way with every Antlr version and does not need any special targets.

A: On Ubuntu this should make it available:

    sudo apt-get install ant-optional

A: Additional info on top of what everybody else contributed so far: the ant-optional package in Ubuntu includes the antlr task shipped with Ant 1.8.2, which is a task for ANTLR 2.7.2, so this will fail with an error as described in this post. The method described by Terence is the best way to use the ANTLR3 task.

If you do not have root access on a Linux machine, you can install the ant-antlr3.jar file in the Ant user directory: ~/.ant/lib. Check with ant -diagnostics whether ant-antlr3.jar is visible to Ant, as explained in this other post.

If you are using Eclipse, you will need to restart the IDE before it recognises the new task, and you will also need to include antlr3.jar and stringtemplate.jar in your classpath (but ant-antlr3.jar is not necessary).
{ "language": "en", "url": "https://stackoverflow.com/questions/25481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What's the simplest way to execute a query in Visual C++ I'm using Visual C++ 2005 and would like to know the simplest way to connect to a MS SQL Server and execute a query. I'm looking for something as simple as ADO.NET's SqlCommand class with its ExecuteNonQuery(), ExecuteScalar() and ExecuteReader(). Sigh offered an answer using CDatabase and ODBC. Can anybody demonstrate how it would be done using ATL consumer templates for OleDb? Also what about returning a scalar value from the query? A: With MFC use CDatabase and ExecuteSQL if going via an ODBC connection. Note that CDatabase's constructor takes no arguments; the connection string goes to OpenEx: CDatabase db; db.OpenEx(ODBCConnectionString); db.ExecuteSQL(blah); db.Close(); A: You should be able to use OTL for this. It's pretty much: #define OTL_ODBC_MSSQL_2008 // Compile OTL 4/ODBC, MS SQL 2008 //#define OTL_ODBC // Compile OTL 4/ODBC. Uncomment this when used with MS SQL 7.0/ 2000 #include <otlv4.h> // include the OTL 4.0 header file #include <iostream> // for std::cout and std::cerr int main() { otl_connect db; // connect object otl_connect::otl_initialize(); // initialize ODBC environment try { int myint; db.rlogon("scott/tiger@mssql2008"); // connect to the database otl_stream select(10, "select someint from test_tab", db); while (!select.eof()) { select >> myint; std::cout << "myint = " << myint << std::endl; } } catch(otl_exception& p) { std::cerr << p.code << std::endl; // print out error code std::cerr << p.sqlstate << std::endl; // print out error SQLSTATE std::cerr << p.msg << std::endl; // print out error message std::cerr << p.stm_text << std::endl; // print out SQL that caused the error std::cerr << p.var_info << std::endl; // print out the variable that caused the error } db.logoff(); // disconnect from the database return 0; } The nice thing about OTL, IMO, is that it's very fast, portable (I've used it on numerous platforms), and connects to a great many different databases. 
A: I used this recently: #include <ole2.h> #import "msado15.dll" no_namespace rename("EOF", "EndOfFile") #include <oledb.h> void CMyDlg::OnBnClickedButton1() { if ( FAILED(::CoInitialize(NULL)) ) return; _RecordsetPtr pRs = NULL; //use your connection string here _bstr_t strCnn(_T("Provider=SQLNCLI;Server=.\\SQLExpress;AttachDBFilename=C:\\Program Files\\Microsoft SQL Server\\MSSQL.1\\MSSQL\\Data\\db\\db.mdf;Database=mydb;Trusted_Connection=Yes;MARS Connection=true")); _bstr_t a_Select(_T("select * from Table")); try { pRs.CreateInstance(__uuidof(Recordset)); pRs->Open(a_Select.AllocSysString(), strCnn.AllocSysString(), adOpenStatic, adLockReadOnly, adCmdText); //obtain entire restult as comma separated text: CString text((LPCWSTR)pRs->GetString(adClipString, -1, _T(","), _T(""), _T("NULL"))); //iterate thru recordset: long count = pRs->GetRecordCount(); COleVariant var; CString strColumn1; CString column1(_T("column1_name")); for(int i = 1; i <= count; i++) { var = pRs->GetFields()->GetItem(column1.AllocSysString())->GetValue(); strColumn1 = (LPCTSTR)_bstr_t(var); } } catch(_com_error& e) { CString err((LPCTSTR)(e.Description())); MessageBox(err, _T("error"), MB_OK); _asm nop; // } // Clean up objects before exit. if (pRs) if (pRs->State == adStateOpen) pRs->Close(); ::CoUninitialize(); } A: Try the Microsoft Enterprise Library. A version should be available here for C++. The SQlHelper class impliments the methods you are looking for from the old ADO days. If you can get your hands on version 2 you can even use the same syntax.
{ "language": "en", "url": "https://stackoverflow.com/questions/25499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Filtering out anchor tags in a string I need to filter out anchor tags in a string. For instance, Check out this site: <a href="http://www.stackoverflow.com">stackoverflow</a> I need to be able to filter out the anchor tag to this: Check out this site: http://www.stackoverflow.com That format may not be constant, either. There could be other attributes to the anchor tag. Also, there could be more than 1 anchor tag in the string. I'm doing the filtering in vb.net before it goes to the database. A: Here's a simple regular expression that should work. Note that RegexOptions belongs in the Regex constructor; the instance Replace method does not take an options argument. Imports System.Text.RegularExpressions ' .... Dim reg As New Regex("<a.*?href=(?:'|"")(.+?)(?:'|"").*?>.+?</a>", RegexOptions.IgnoreCase) Dim input As String = "This is a link: <a href='http://www.stackoverflow.com'>Stackoverflow</a>" input = reg.Replace(input, "$1")
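For comparison, the same idea in Java (a sketch only — like any regex over HTML it handles the common single- or double-quoted href case, not arbitrary markup):

```java
import java.util.regex.Pattern;

public class AnchorStripper {
    // Replace each <a ... href='URL' ...>text</a> with just URL.
    // Group 1 captures the href value; the quote may be ' or ".
    private static final Pattern ANCHOR = Pattern.compile(
        "<a[^>]*?href=(?:'|\")(.+?)(?:'|\")[^>]*>.+?</a>",
        Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    public static String strip(String input) {
        return ANCHOR.matcher(input).replaceAll("$1");
    }

    public static void main(String[] args) {
        String in = "Check out this site: <a href=\"http://www.stackoverflow.com\">stackoverflow</a>";
        System.out.println(strip(in));
        // prints: Check out this site: http://www.stackoverflow.com
    }
}
```

replaceAll handles multiple anchors in one pass, matching the asker's "more than 1 anchor tag" requirement.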
{ "language": "en", "url": "https://stackoverflow.com/questions/25519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best Method to run a Java Application as a *nix Daemon or Windows Service? I am looking for the best method to run a Java Application as a *NIX daemon or a Windows Service. I've looked in to the Java Service Wrapper, the Apache Commons project 'jsvc', and the Apache Commons project 'procrun'. So far, the Java Service Wrapper looks like it's the best option... but, I'm wondering if there are any other "Open Source friendly" licensed products out there. A: Another option is WinRun4J. This is windows only but has some useful features: * *32 bit and 64 bit support *API to access the event log and registry *Can register service to be dependent on other services (i.e serviceA and serviceB must startup before serviceC) Its also open source friendly (CPL) so no restrictions on use. (full disclosure: I work on this project). A: I've had great success with Java Service Wrapper myself. I haven't looked at the others, but the major strengths of ServiceWrapper are: * *Great x-platform support - I've used it on Windows and Linux, and found it easy on both *Solid Documentation - The docs are clear and to the point, with great examples *Deep per-platform support - There are some unique features in the window service management system that are supported perfectly by service wrapper (w/o restarting). And on Windows, you will even see your app name in the process list instead of just "java.exe". *Standards Compliant - Unlike many ad-hoc Java init scripts, the scripts for service wrapper tend to be compliant with LSB standards. This can end up being very important if you ever want high availability management from something like Linux Heartbeat/HA. Anyway, just my 2 cents... :) A: Are there any special attributes that you need to apply (like OS guided resource management) that you need to support? Otherwise, for Unix you should be able to daemonize your application by writing an appropriate init.d script and setting your app to start automatically.
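Whichever wrapper you pick (jsvc, Java Service Wrapper, procrun, WinRun4J), the Java side still needs a clean stop path, since service managers stop you with a signal rather than by calling your code. A minimal sketch — the class and method names here are made up for illustration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class DaemonSkeleton {
    private final CountDownLatch stop = new CountDownLatch(1);

    // Called by the wrapper's stop method, or by the JVM shutdown hook on SIGTERM.
    public void requestStop() {
        stop.countDown();
    }

    public void run() throws InterruptedException {
        // Register a hook so 'kill <pid>' or a service stop triggers cleanup.
        Runtime.getRuntime().addShutdownHook(new Thread(this::requestStop));
        while (!stop.await(1, TimeUnit.SECONDS)) {
            // ... do one unit of background work per loop iteration ...
        }
        // ... release resources here before the JVM exits ...
    }

    public static void main(String[] args) throws InterruptedException {
        DaemonSkeleton d = new DaemonSkeleton();
        // For demonstration only: stop the daemon after a short delay.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            d.requestStop();
        }).start();
        d.run();
        System.out.println("stopped cleanly");
    }
}
```

The latch-based loop keeps the main thread alive (wrappers generally expect that) while still reacting to a stop request within a second.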
{ "language": "en", "url": "https://stackoverflow.com/questions/25530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: ASP.NET Master Pages equivalent in Java What would be the Master Pages equivalent in the Java web development world? I've heard of Tiles, Tapestry and Velocity but don't know anything about them. Are they as easy to use as Master Pages? I want something as easy as set up one template and subsequent pages derive from the template and override content regions, similar to Master Pages. Any examples would be great!! A: First, the equivalent of ASP.Net in Java is going to be a web framework, such as the ones you mention (Tiles, Tapestry and Velocity). Master pages give the ability to define pages in terms of content slotted into a master template. Master pages are a feature of ASP.Net (the .Net web framework), so you are looking for a feature similar to master pages in a Java web framework. http://tiles.apache.org/framework/tutorial/basic/pages.html gives some basic examples using Tiles and JSP to implement something similar with Struts, a Java web framework. In this case, the Master Pages functionality is a plugin on top of Struts. Velocity is a generic templating engine, not specialized for web pages and definitely more complicated than you need. (I've seen it used for code generation.) Tapestry is more of a full featured web stack than Tile, and is probably good for your purposes. Its templating functionality involves creating a component and putting all common markup in that. An example is at http://www.infoq.com/articles/tapestry5-intro. The specifics of this differ based on which Java web framework you choose. A: I've used sitemesh in previous projects and it's pretty easy to set up. Essentially, you create decorators which are equivalents of master pages. You then define which child pages use which decorators. See introduction to sitemesh for more information. A: You should also check out Facelets; there is a good introductory article on DeveloperWorks. 
The Facelets <ui:insert/> tag is comparable to the ASP.NET <asp:ContentPlaceHolder/> tag used in master pages; it lets you provide default content for that area of the page, but this can be overridden. To fill the Facelets template in another page, you start with a <ui:composition/> element that points to the template file. This is roughly equivalent to declaring the MasterPageFile attribute in an ASP.NET page. Inside the <ui:composition/> element, you use <ui:define/> elements to override the template defaults, similar to the way an <asp:Content/> tag is used. These elements can contain any kind of content - from simple strings to JSF elements. So, to bring it all together... In master.xhtml: <!-- HTML header content here --> <ui:insert name="AreaOne">Default content for AreaOne</ui:insert> <ui:insert name="AreaTwo">Default content for AreaTwo</ui:insert> <!-- HTML footer content here --> In page.xhtml: <ui:composition template="/WEB-INF/templates/master.xhtml"> <ui:define name="AreaOne">Here is some new content</ui:define> <ui:define name="AreaTwo"> <p>Some new content here too</p> </ui:define> </ui:composition> And this will render as: <!-- HTML header content here --> Here is some new content <p>Some new content here too</p> <!-- HTML footer content here --> You also get some other benefits with Facelets, such as the ability to reuse page components with different data. (Edited to provide more information.)
{ "language": "en", "url": "https://stackoverflow.com/questions/25532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Securely sync folders over a public network I need to keep the files & folders on two Windows-based, non-domain machines synchronized across a public network. I was thinking rsync over SSH - but I was wondering if there is a simpler solution? Any possibility of using the sync framework over SFTP/SCP/SSH? Or I'm open to better ideas? A: I don't think you could go past rsync. It's fast, reliable and when coupled with SSH (which is a requirement of yours), secure. It's also Free! If you want some integration with your versioning systems, check out Unison. There are also commercial alternatives such as InstantSync. A: Figured I'd post what I finally went with - WinSCP - http://winscp.net Connects via GUI to an SFTP server + supports Local/Remote/Both synchronization + scriptable with command-line/batch interface. A: Seems like a textbook case for using FolderShare. A: This answer is similar to the one the thread opener created but i cant comment so i add this here: I had the same problem one year ago at a customer. We needed a scriptable non gui solution for synchronizing a large folder with several subfolders over the internet! At that time the system used rsync but i didnt like the necessity of having to install a linux tool and needing a vpn between the two to secure the communication. So the first approach was scripting with Powershell and using WinSCP and IIS ftps (ftp over ssl). WinSCP and IIS ftps didnt work well together! Synchronizing often lead to strange exceptions we couldnt fix. Then we switched to CrushFTP and sftp (ssh ftp). This solution works very well! We had over 300 automatic nightly deployments and none failed. So i can recommend using powershell for scripting and the WinSCP library for synchronizing folders to an sftp (not ftps) server. Though it is definitely not as fast as rsync it is very stable and easily scriptable. A: You could set up shared folders over a secure VPN with Hamachi, then use a folder syncing app to sync them up. 
A: Have you tried the award-winning rsyncrypto? A: I use SVN. It works (I think) over SSH and SSL. Full versioning, file syncing, what's not to like? A: I'd recommend rdiff-backup. It supports incremental backups over SSH and is a free and proven solution. Being incremental, it also allows access to files that were deleted or to older versions of modified files. A: +1 for Chris's recommendation. This is exactly what I use FolderShare for, keeps folders in sync across 3 PCs running Windows and 2 Macs running OS X. A: you can use a vpn server on hamachi A: try this, SSHSync for windows http://code.google.com/p/sshsync/ A command line applications that allows intelligent Secure FTP transmissions. SshSync only support pull type transfers, but it allows use of a Private Key to ensure that authentication is secure. A text file that contains a list of files always processed is used to check that only 'new' files are retrieved.
{ "language": "en", "url": "https://stackoverflow.com/questions/25546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What's the best way to import/read data from pdf files? We get a large amount of data from our clients in pdf files in varying formats [layout-wise], these files are typically report output, and are typically properly annotated [they don't usually need OCR], but not formatted well enough that simply copying several hundred pages of text out of acrobat is not going to work. The best approach I've found so far is to write a script to parse the nearly-valid xml output (the comments are invalid and many characters are escaped in varying ways, é becomes [[[e9]]]é, $ becomes \$, % becomes \%...) of the command-line pdftoipe utility (to convert pdf files for a program called ipe), which gives me text elements with their positions on each page [see sample below], which works well enough for reports where the same values are on the same place on every page I care about, but would require extra scripting effort for importing matrix [cross-tab] pdf files. pdftoipe is not at all intended for this, and at best can be compiled manually using cygwin for windows. Are there libraries that make this easy from some scripting language I can tolerate? A graphical tool would be awesome too. And a pony. 
pdftoipe output of this sample looks like this: <ipe creator="pdftoipe 2006/10/09"><info media="0 0 612 792"/> <-- Page: 1 1 --> <page gridsize="8"> <path fill="1 1 1" fillrule="wind"> 64.8 144 m 486 144 l 486 727.2 l 64.8 727.2 l 64.8 144 l h </path> <path fill="1 1 1" fillrule="wind"> 64.8 144 m 486 144 l 486 727.2 l 64.8 727.2 l 64.8 144 l h </path> <path fill="1 1 1" fillrule="wind"> 64.8 144 m 486 144 l 486 727.2 l 64.8 727.2 l 64.8 144 l h </path> <text stroke="1 0 0" pos="0 0" size="18" transformable="yes" matrix="1 0 0 1 181.8 707.88">This is a sample PDF fil</text> <text stroke="1 0 0" pos="0 0" size="18" transformable="yes" matrix="1 0 0 1 356.28 707.88">e.</text> <text stroke="1 0 0" pos="0 0" size="18" transformable="yes" matrix="1 0 0 1 368.76 707.88"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 692.4"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 677.88"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 663.36"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 648.84"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 634.32"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 619.8"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 605.28"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 590.76"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 576.24"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 561.72"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 547.2"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 532.68"> 
</text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 518.16"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 503.64"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 489.12"> </text> <text stroke="0 0 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 67.32 474.6"> </text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 67.32 456.24">If you can read this</text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 214.92 456.24">,</text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 219.48 456.24"> you already have A</text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 370.8 456.24">dobe Acrobat </text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 67.32 437.64">Reader i</text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 131.28 437.64">n</text> <text stroke="0 0 1" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 141.12 437.64">stalled on your computer.</text> <text stroke="0 0 0" pos="0 0" size="16.2" transformable="yes" matrix="1 0 0 1 337.92 437.64"> </text> <text stroke="0 0.502 0" pos="0 0" size="12.6" transformable="yes" matrix="1 0 0 1 342.48 437.64"> </text> <image width="800" height="600" rect="-92.04 800.64 374.4 449.76" ColorSpace="DeviceRGB" BitsPerComponent="8" Filter="DCTDecode" length="369925"> feedcafebabe... </image> </page> </ipe> A: We use Xpdf in one of our applications. Its a c++ library which is primarily used for pdf rendering, although it does have a text extractor which could be useful for this project. A: If you're fine with calling something external, you can use ghostscript - look at the ps2ascii script included with the distribution. 
I'm not sure what you want from a graphical tool - a big button that you push to chose the input and output files? A preview? You might be able to use GSView, depending on what you want. A: pdftohtml -xml although pdftoipe seems more detailed!! A: Have you looked at Aspose? We're using it for an ASP.net app and I've seen some examples of vbscript using it as well. It's not particularly expensive either. http://www.aspose.com/
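Since pdftoipe output is nearly valid XML, one low-effort route is to repair the escapes first (undo the [[[e9]]]-style sequences, \$, \%, and strip the invalid comments) and then read the <text> elements with a stock XML parser rather than hand-parsing. A sketch assuming already-repaired input — it takes the last two numbers of each matrix attribute as the x/y position, which holds for the simple "1 0 0 1 x y" matrices in the sample above:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class IpeTextExtractor {
    // Returns one "x y text" line per non-blank <text> element.
    public static String extract(String ipeXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(ipeXml)));
        NodeList texts = doc.getElementsByTagName("text");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < texts.getLength(); i++) {
            Element t = (Element) texts.item(i);
            String content = t.getTextContent().trim();
            if (content.isEmpty()) continue;           // skip blank placeholders
            String[] m = t.getAttribute("matrix").split(" ");
            if (m.length < 2) continue;                // no position available
            out.append(m[m.length - 2]).append(' ')    // x translation
               .append(m[m.length - 1]).append(' ')    // y translation
               .append(content).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String sample = "<page>"
            + "<text matrix=\"1 0 0 1 181.8 707.88\">This is a sample</text>"
            + "<text matrix=\"1 0 0 1 67.32 692.4\"> </text>"
            + "</page>";
        System.out.print(extract(sample));
        // prints: 181.8 707.88 This is a sample
    }
}
```

With text and position in hand, grouping by y coordinate (rows) and sorting by x (columns) is usually enough to reconstruct simple cross-tab reports.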
{ "language": "en", "url": "https://stackoverflow.com/questions/25550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Get OS-level system information I'm currently building a Java app that could end up being run on many different platforms, but primarily variants of Solaris, Linux and Windows. Has anyone been able to successfully extract information such as the current disk space used, CPU utilisation and memory used in the underlying OS? What about just what the Java app itself is consuming? Preferrably I'd like to get this information without using JNI. A: The java.lang.management package does give you a whole lot more info than Runtime - for example it will give you heap memory (ManagementFactory.getMemoryMXBean().getHeapMemoryUsage()) separate from non-heap memory (ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage()). You can also get process CPU usage (without writing your own JNI code), but you need to cast the java.lang.management.OperatingSystemMXBean to a com.sun.management.OperatingSystemMXBean. This works on Windows and Linux, I haven't tested it elsewhere. For example ... call the get getCpuUsage() method more frequently to get more accurate readings. 
import static java.lang.management.ManagementFactory.getOperatingSystemMXBean; import com.sun.management.OperatingSystemMXBean; public class PerformanceMonitor { private int availableProcessors = getOperatingSystemMXBean().getAvailableProcessors(); private long lastSystemTime = 0; private long lastProcessCpuTime = 0; public synchronized double getCpuUsage() { if ( lastSystemTime == 0 ) { baselineCounters(); return 0.0; // no usable reading until a baseline exists } long systemTime = System.nanoTime(); long processCpuTime = 0; if ( getOperatingSystemMXBean() instanceof OperatingSystemMXBean ) { processCpuTime = ( (OperatingSystemMXBean) getOperatingSystemMXBean() ).getProcessCpuTime(); } double cpuUsage = (double) ( processCpuTime - lastProcessCpuTime ) / ( systemTime - lastSystemTime ); lastSystemTime = systemTime; lastProcessCpuTime = processCpuTime; return cpuUsage / availableProcessors; } private void baselineCounters() { lastSystemTime = System.nanoTime(); if ( getOperatingSystemMXBean() instanceof OperatingSystemMXBean ) { lastProcessCpuTime = ( (OperatingSystemMXBean) getOperatingSystemMXBean() ).getProcessCpuTime(); } } } A: Add OSHI dependency via maven: <dependency> <groupId>com.github.dblock</groupId> <artifactId>oshi-core</artifactId> <version>2.2</version> </dependency> Get the battery capacity left in percentage: SystemInfo si = new SystemInfo(); HardwareAbstractionLayer hal = si.getHardware(); for (PowerSource pSource : hal.getPowerSources()) { System.out.println(String.format("%n %s @ %.1f%%", pSource.getName(), pSource.getRemainingCapacity() * 100d)); } A: Have a look at the APIs available in the java.lang.management package. For example: * *OperatingSystemMXBean.getSystemLoadAverage() *ThreadMXBean.getCurrentThreadCpuTime() *ThreadMXBean.getCurrentThreadUserTime() There are loads of other useful things in there as well. A: Usually, to get low level OS information you can call OS specific commands which give you the information you want with Runtime.exec() or read files such as /proc/* in Linux. 
A: CPU usage isn't straightforward -- java.lang.management via com.sun.management.OperatingSystemMXBean.getProcessCpuTime comes close (see Patrick's excellent code snippet above) but note that it only gives access to time the CPU spent in your process. it won't tell you about CPU time spent in other processes, or even CPU time spent doing system activities related to your process. for instance i have a network-intensive java process -- it's the only thing running and the CPU is at 99% but only 55% of that is reported as "processor CPU". don't even get me started on "load average" as it's next to useless, despite being the only cpu-related item on the MX bean. if only sun in their occasional wisdom exposed something like "getTotalCpuTime"... for serious CPU monitoring SIGAR mentioned by Matt seems the best bet. A: I think the best method out there is to implement the SIGAR API by Hyperic. It works for most of the major operating systems ( darn near anything modern ) and is very easy to work with. The developer(s) are very responsive on their forum and mailing lists. I also like that it is GPL2 Apache licensed. They provide a ton of examples in Java too! SIGAR == System Information, Gathering And Reporting tool. A: On Windows, you can run the systeminfo command and retrieves its output for instance with the following code: private static class WindowsSystemInformation { static String get() throws IOException { Runtime runtime = Runtime.getRuntime(); Process process = runtime.exec("systeminfo"); BufferedReader systemInformationReader = new BufferedReader(new InputStreamReader(process.getInputStream())); StringBuilder stringBuilder = new StringBuilder(); String line; while ((line = systemInformationReader.readLine()) != null) { stringBuilder.append(line); stringBuilder.append(System.lineSeparator()); } return stringBuilder.toString().trim(); } } A: If you are using Jrockit VM then here is an other way of getting VM CPU usage. 
Runtime bean can also give you CPU load per processor. I have used this only on Red Hat Linux to observer Tomcat performance. You have to enable JMX remote in catalina.sh for this to work. JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://my.tomcat.host:8080/jmxrmi"); JMXConnector jmxc = JMXConnectorFactory.connect(url, null); MBeanServerConnection conn = jmxc.getMBeanServerConnection(); ObjectName name = new ObjectName("oracle.jrockit.management:type=Runtime"); Double jvmCpuLoad =(Double)conn.getAttribute(name, "VMGeneratedCPULoad"); A: It is still under development but you can already use jHardware It is a simple library that scraps system data using Java. It works in both Linux and Windows. ProcessorInfo info = HardwareInfo.getProcessorInfo(); //Get named info System.out.println("Cache size: " + info.getCacheSize()); System.out.println("Family: " + info.getFamily()); System.out.println("Speed (Mhz): " + info.getMhz()); //[...] A: There's a Java project that uses JNA (so no native libraries to install) and is in active development. It currently supports Linux, OSX, Windows, Solaris and FreeBSD and provides RAM, CPU, Battery and file system information. * *https://github.com/oshi/oshi A: You can get some limited memory information from the Runtime class. It really isn't exactly what you are looking for, but I thought I would provide it for the sake of completeness. Here is a small example. Edit: You can also get disk usage information from the java.io.File class. The disk space usage stuff requires Java 1.6 or higher. 
public class Main { public static void main(String[] args) { /* Total number of processors or cores available to the JVM */ System.out.println("Available processors (cores): " + Runtime.getRuntime().availableProcessors()); /* Total amount of free memory available to the JVM */ System.out.println("Free memory (bytes): " + Runtime.getRuntime().freeMemory()); /* This will return Long.MAX_VALUE if there is no preset limit */ long maxMemory = Runtime.getRuntime().maxMemory(); /* Maximum amount of memory the JVM will attempt to use */ System.out.println("Maximum memory (bytes): " + (maxMemory == Long.MAX_VALUE ? "no limit" : maxMemory)); /* Total memory currently available to the JVM */ System.out.println("Total memory available to JVM (bytes): " + Runtime.getRuntime().totalMemory()); /* Get a list of all filesystem roots on this system */ File[] roots = File.listRoots(); /* For each filesystem root, print some info */ for (File root : roots) { System.out.println("File system root: " + root.getAbsolutePath()); System.out.println("Total space (bytes): " + root.getTotalSpace()); System.out.println("Free space (bytes): " + root.getFreeSpace()); System.out.println("Usable space (bytes): " + root.getUsableSpace()); } } } A: One simple way which can be used to get the OS level information and I tested in my Mac which works well : OperatingSystemMXBean osBean = (OperatingSystemMXBean)ManagementFactory.getOperatingSystemMXBean(); return osBean.getProcessCpuLoad(); You can find many relevant metrics of the operating system here A: To get the System Load average of 1 minute, 5 minutes and 15 minutes inside the java code, you can do this by executing the command cat /proc/loadavg using and interpreting it as below: Runtime runtime = Runtime.getRuntime(); BufferedReader br = new BufferedReader( new InputStreamReader(runtime.exec("cat /proc/loadavg").getInputStream())); String avgLine = br.readLine(); System.out.println(avgLine); List<String> avgLineList = 
Arrays.asList(avgLine.split("\\s+")); System.out.println(avgLineList); System.out.println("Average load 1 minute : " + avgLineList.get(0)); System.out.println("Average load 5 minutes : " + avgLineList.get(1)); System.out.println("Average load 15 minutes : " + avgLineList.get(2)); And to get the physical system memory by executing the command free -m and then interpreting it as below: Runtime runtime = Runtime.getRuntime(); BufferedReader br = new BufferedReader( new InputStreamReader(runtime.exec("free -m").getInputStream())); String line; String memLine = ""; int index = 0; while ((line = br.readLine()) != null) { if (index == 1) { memLine = line; } index++; } // total used free shared buff/cache available // Mem: 15933 3153 9683 310 3097 12148 // Swap: 3814 0 3814 List<String> memInfoList = Arrays.asList(memLine.split("\\s+")); int totalSystemMemory = Integer.parseInt(memInfoList.get(1)); int totalSystemUsedMemory = Integer.parseInt(memInfoList.get(2)); int totalSystemFreeMemory = Integer.parseInt(memInfoList.get(3)); System.out.println("Total system memory in mb: " + totalSystemMemory); System.out.println("Total system used memory in mb: " + totalSystemUsedMemory); System.out.println("Total system free memory in mb: " + totalSystemFreeMemory); A: For windows I went this way. com.sun.management.OperatingSystemMXBean os = (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean(); long physicalMemorySize = os.getTotalPhysicalMemorySize(); long freePhysicalMemory = os.getFreePhysicalMemorySize(); long freeSwapSize = os.getFreeSwapSpaceSize(); long commitedVirtualMemorySize = os.getCommittedVirtualMemorySize(); Here is the link with details. A: You can get some system-level information by using System.getenv(), passing the relevant environment variable name as a parameter. 
For example, on Windows: System.getenv("PROCESSOR_IDENTIFIER") System.getenv("PROCESSOR_ARCHITECTURE") System.getenv("PROCESSOR_ARCHITEW6432") System.getenv("NUMBER_OF_PROCESSORS") For other operating systems the presence/absence and names of the relevant environment variables will differ. A: Hey you can do this with java/com integration. By accessing WMI features you can get all the information. A: Not exactly what you asked for, but I'd recommend checking out ArchUtils and SystemUtils from commons-lang3. These also contain some relevant helper facilities, e.g.: import static org.apache.commons.lang3.ArchUtils.*; import static org.apache.commons.lang3.SystemUtils.*; System.out.printf("OS architecture: %s\n", OS_ARCH); // OS architecture: amd64 System.out.printf("OS name: %s\n", OS_NAME); // OS name: Linux System.out.printf("OS version: %s\n", OS_VERSION); // OS version: 5.18.16-200.fc36.x86_64 System.out.printf("Is Linux? - %b\n", IS_OS_LINUX); // Is Linux? - true System.out.printf("Is Mac? - %b\n", IS_OS_MAC); // Is Mac? - false System.out.printf("Is Windows? - %b\n", IS_OS_WINDOWS); // Is Windows? - false System.out.printf("JVM name: %s\n", JAVA_VM_NAME); // JVM name: Java HotSpot(TM) 64-Bit Server VM System.out.printf("JVM vendor: %s\n", JAVA_VM_VENDOR); // JVM vendor: Oracle Corporation System.out.printf("JVM version: %s\n", JAVA_VM_VERSION); // JVM version: 11.0.12+8-LTS-237 System.out.printf("Username: %s\n", getUserName()); // Username: johndoe System.out.printf("Hostname: %s\n", getHostName()); // Hostname: garage-pc var processor = getProcessor(); System.out.printf("CPU arch: %s\n", processor.getArch()) // CPU arch: BIT_64 System.out.printf("CPU type: %s\n", processor.getType()); // CPU type: X86
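To round out the disk-space side of the question: on Java 7 and later, java.nio.file.FileStore reports the same figures as the File.listRoots approach shown earlier, plus the filesystem name and type. A sketch — some virtual filesystems refuse space queries, hence the per-store catch:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.FileSystems;

public class DiskReport {
    public static void main(String[] args) {
        for (FileStore store : FileSystems.getDefault().getFileStores()) {
            try {
                long totalMb = store.getTotalSpace() / (1024 * 1024);
                long usableMb = store.getUsableSpace() / (1024 * 1024);
                System.out.printf("%s (%s): %d MB total, %d MB usable%n",
                        store.name(), store.type(), totalMb, usableMb);
            } catch (IOException e) {
                // Some virtual/pseudo filesystems refuse space queries; skip them.
            }
        }
    }
}
```

For a single path rather than all mounts, java.nio.file.Files.getFileStore(path) returns the store holding that path.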
{ "language": "en", "url": "https://stackoverflow.com/questions/25552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "241" }
Q: Capturing a repeated group I am attempting to parse a string like the following using a .NET regular expression: H3Y5NC8E-TGA5B6SB-2NVAQ4E0 and return the following using Split: H3Y5NC8E TGA5B6SB 2NVAQ4E0 I validate each character against a specific character set (note that the letters 'I', 'O', 'U' & 'W' are absent), so using string.Split is not an option. The number of characters in each group can vary and the number of groups can also vary. I am using the following expression: ([ABCDEFGHJKLMNPQRSTVXYZ0123456789]{8}-?){3} This will match exactly 3 groups of 8 characters each. Any more or less will fail the match. This works insofar as it correctly matches the input. However, when I use the Split method to extract each character group, I just get the final group. RegexBuddy complains that I have repeated the capturing group itself and that I should put a capture group around the repeated group. However, none of my attempts to do this achieve the desired result. I have been trying expressions like this: (([ABCDEFGHJKLMNPQRSTVXYZ0123456789]{8})-?){4} But this does not work. Since I generate the regex in code, I could just expand it out by the number of groups, but I was hoping for a more elegant solution. Please note that the character set does not include the entire alphabet. It is part of a product activation system. As such, any characters that can be accidentally interpreted as numbers or other characters are removed. e.g. The letters 'I', 'O', 'U' & 'W' are not in the character set. The hyphens are optional since a user does not need top type them in, but they can be there if the user as done a copy & paste. A: BTW, you can replace [ABCDEFGHJKLMNPQRSTVXYZ0123456789] character class with a more readable subtracted character class. [[A-Z\d]-[IOUW]] If you just want to match 3 groups like that, why don't you use this pattern 3 times in your regex and just use captured 1, 2, 3 subgroups to form the new string? 
([A-Z\d-[IOUW]]{8})-([A-Z\d-[IOUW]]{8})-([A-Z\d-[IOUW]]{8}) In PHP I would (I don't know .NET) return "$1 $2 $3"; A: I have discovered the answer I was after. Here is my working code: static void Main(string[] args) { string pattern = @"^\s*((?<group>[ABCDEFGHJKLMNPQRSTVXYZ0123456789]{8})-?){3}\s*$"; string input = "H3Y5NC8E-TGA5B6SB-2NVAQ4E0"; Regex re = new Regex(pattern); Match m = re.Match(input); if (m.Success) foreach (Capture c in m.Groups["group"].Captures) Console.WriteLine(c.Value); } A: After reviewing your question and the answers given, I came up with this: RegexOptions options = RegexOptions.None; Regex regex = new Regex(@"([ABCDEFGHJKLMNPQRSTVXYZ0123456789]{8})", options); string input = @"H3Y5NC8E-TGA5B6SB-2NVAQ4E0"; MatchCollection matches = regex.Matches(input); for (int i = 0; i != matches.Count; ++i) { string match = matches[i].Value; } Since the "-" is optional, you don't need to include it. I am not sure what you were using the {4} at the end for? This will find the matches based on what you want, then using the MatchCollection you can access each match to rebuild the string. A: Why use Regex? If the groups are always split by a -, can't you use Split()? A: Sorry if this isn't what you intended, but if your string always has the hyphen separating the groups, then instead of using regex couldn't you use the String.Split() method? Dim stringArray As Array = someString.Split("-") A: You can use this pattern: Regex.Split("H3Y5NC8E-TGA5B6SB-2NVAQ4E0", "([ABCDEFGHJKLMNPQRSTVXYZ0123456789]{8})-?") But you will need to filter out empty strings from the resulting array. Citation from MSDN: If multiple matches are adjacent to one another, an empty string is inserted into the array. A: What are the defining characteristics of a valid block? We'd need to know that in order to really be helpful. My generic suggestion: validate the charset in a first step, then split and parse in a separate method based on what you expect. 
If this is in a web site/app then you can use the ASP Regex validation on the front end, then break it up on the back end. A: If you're just checking the value of the group, with group(i).value, then you will only get the last one. However, if you want to enumerate over all the times that group was captured, use group(2).captures(i).value, as shown below. system.text.RegularExpressions.Regex.Match("H3Y5NC8E-TGA5B6SB-2NVAQ4E0","(([ABCDEFGHJKLMNPQRSTVXYZ0123456789]+)-?)*").Groups(2).Captures(i).Value A: Mike, you can use a character set of your choice inside a character group. All you need to do is add the "+" modifier to capture all groups. See my previous answer, just change [A-Z0-9] to whatever you need (i.e. [ABCDEFGHJKLMNPQRSTVXYZ0123456789]).
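For comparison outside .NET, the same validate-then-extract idea can be sketched in Python, whose re module shows the identical pitfall (a repeated capture group only remembers its last capture); split_key and CHARSET below are names made up for this illustration:

```python
import re

# Character set from the question: capital letters and digits, minus
# the easily confused I, O, U and W.
CHARSET = "[ABCDEFGHJKLMNPQRSTVXYZ0123456789]"

def split_key(key):
    # Anchored validation: exactly three groups of eight valid
    # characters, each optionally followed by a hyphen.
    if re.fullmatch(r"(?:({0}{{8}})-?){{3}}".format(CHARSET), key) is None:
        return None
    # As in the question, a repeated capture group only remembers its
    # LAST capture, so group(1) of the match above would be just the
    # final block.  findall pulls out every block instead.
    return re.findall(r"{0}{{8}}".format(CHARSET), key)

print(split_key("H3Y5NC8E-TGA5B6SB-2NVAQ4E0"))  # ['H3Y5NC8E', 'TGA5B6SB', '2NVAQ4E0']
```

Note that Python's standard re has no equivalent of .NET's Captures collection, which is why this sketch falls back on findall after validating the whole key.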
{ "language": "en", "url": "https://stackoverflow.com/questions/25561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I write a sort worse than O(n!) I wrote an O(n!) sort for my amusement that can't be trivially optimized to run faster without replacing it entirely. [And no, I didn't just randomize the items until they were sorted]. How might I write an even worse Big-O sort, without just adding extraneous junk that could be pulled out to reduce the time complexity? http://en.wikipedia.org/wiki/Big_O_notation has various time complexities sorted in growing order. Edit: I found the code, here is my O(n!) deterministic sort with an amusing hack to generate a list of all permutations of a list. I have a slightly longer version of get_all_permutations that returns an iterable of permutations, but unfortunately I couldn't make it a single statement. [Hopefully I haven't introduced bugs by fixing typos and removing underscores in the below code] def mysort(somelist): for permutation in get_all_permutations(somelist): if is_sorted(permutation): return permutation def is_sorted(somelist): # note: this could be merged into return... something like return len(foo) <= 1 or reduce(barf) if (len(somelist) <= 1): return True return 1 > reduce(lambda x,y: max(x,y),map(cmp, somelist[:-1], somelist[1:])) def get_all_permutations(lst): return [[itm] + cbo for idx, itm in enumerate(lst) for cbo in get_all_permutations(lst[:idx] + lst[idx+1:])] or [lst] 
sort; I can't think of any that are O(n!), but have a finite upper bound in classical computation (aka are deterministic). PPS: If you're worried about the compiler wiping out that empty while block, you can force it not to by using a variable both in- and outside the block: def never_sort(array) i=0 while(true) { i += 1 } puts "done with loop after #{i} iterations!" return quicksort(array) end A: You could always do a Random sort. It works by rearranging all the elements randomly, then checking to see if it's sorted. If not, it randomly resorts them. I don't know how it would fit in big-O notation, but it will definitely be slow! A: Here is the slowest, finite sort you can get: Link each operation of Quicksort to the Busy Beaver function. By the time you get >4 operations, you'll need up-arrow notation :) A: One way that I can think of would be to calculate the final position of each element through a function that very gradually moves the large elements to the end and the small ones to the beginning. If you used a trig-based function, you could make the elements oscillate through the list instead of going directly toward their final position. After you've processed each element in the set, then do a full traversal to determine if the array is sorted or not. I'm not positive that this will give you O(n!) but it should still be pretty slow. A: I think that if you do lots of copying then you can get a "reasonable" brute force search (N!) to take N^2 time per case giving N!*N^2. A: How about looping over all arrays t of n integers (n-tuples of integers are countable, so this is doable though it's an infinite loop of course), and for each of these: * *if its elements are exactly those of the input array (see algo below!) and the array is sorted (linear algo for example, but I'm sure we can do worse), then return t; *otherwise continue looping. 
To check that two arrays a and b of length n contain the same elements, how about the following recursive algorithm: loop over all couples (i,j) of indices between 0 and n-1, and for each such couple * *test if a[i]==b[j]: *if so, return TRUE if and only if a recursive call on the lists obtained by removing a[i] from a and b[j] from b returns TRUE; *continue looping over couples, and if all couples are done, return FALSE. The time will depend a lot on the distribution of integers in the input array. Seriously, though, is there a point to such a question? Edit: @Jon, your random sort would be in O(n!) on average (since there are n! permutations, you have probability 1/n! of finding the right one). This holds for arrays of distinct integers, might be slightly different if some elements have multiple occurrences in the input array, and would then depend on the distribution of the elements of the input arrays (in the integers).
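For reference, the question's flattened code can be restated as a self-contained sketch (itertools.permutations stands in for the hand-rolled generator, and permutation_sort is a name chosen for this example); it walks the n! permutations in a fixed order and pays an O(n) sortedness check per candidate, so it is O(n * n!) overall:

```python
import itertools

def permutation_sort(items):
    # Walk every permutation in a fixed (deterministic) order and
    # return the first one that happens to be non-decreasing.  There
    # are n! candidates and each sortedness check costs O(n), so the
    # whole thing is O(n * n!).
    for permutation in itertools.permutations(items):
        if all(a <= b for a, b in zip(permutation, permutation[1:])):
            return list(permutation)
    return list(items)  # unreachable for comparable inputs

print(permutation_sort([3, 1, 2]))  # [1, 2, 3]
```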
{ "language": "en", "url": "https://stackoverflow.com/questions/25566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Which Java DOM Wrapper is the Best or Most Popular? I've used jdom in the past, and have looked briefly at XOM and DOM4j. Each seems to provide essentially the same thing, as they each provide a simplified wrapper over the (very obtuse) standard W3C DOM APIs. I know that JDOM went through some effort to develop a JSR and standardization process at one point, but as far as I know that effort never went anywhere. All of the projects appear to be in stasis with little new development in the past several years. Is there a consensus as to which is the best? Also what are the pros and cons of each? A: I like XOM, because I like the way Elliotte Rusty Harold thinks. Of the ones you mention I believe it's the one that strays away from the DOM standard APIs the most, but I consider that a benefit. I once implemented a DOM library for Cocoa, and XOM was my inspiration. I've worked with JDOM as well, and there's absolutely nothing wrong with it, although I do prefer XOM. A: While dom4j is an old player, we have been using it for a while and haven't regretted it yet. Strong features: simplicity, XPath support and others. Weak sides: yet to support Java 5.0, but version 2.0 has finally been announced. A: It all depends on the feature set. If you want to benefit from an XSL Transformation Engine (like Xalan) or an XPath Engine (like Jaxen or Saxon) I would recommend sticking to the more popular frameworks available, like Apache Xerces or JDOM. After that, it's all a matter of taste. I personally use a W3C-compliant implementation (org.w3c.*) like Apache Xerces because they are common enough, reasonably fast and well supported by the Java community. Of course, if you need blinding speed and do not care about XPath, XQuery or XSL, you can surely find yourself something that is much faster and/or less resource-hungry (i.e. a StAX implementation).
{ "language": "en", "url": "https://stackoverflow.com/questions/25614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Shutting down a computer Is there a way to shut down a computer using a built-in Java method? A: Create your own function to execute an OS command through the command line? The following is just for the sake of an example; know where and why you'd want to use this, as others note. public static void main(String arg[]) throws IOException{ Runtime runtime = Runtime.getRuntime(); Process proc = runtime.exec("shutdown -s -t 0"); System.exit(0); } A: Here's another example that could work cross-platform: public static void shutdown() throws RuntimeException, IOException { String shutdownCommand; String operatingSystem = System.getProperty("os.name"); if ("Linux".equals(operatingSystem) || "Mac OS X".equals(operatingSystem)) { shutdownCommand = "shutdown -h now"; } // This will work on any version of Windows including version 11 else if (operatingSystem.contains("Windows")) { shutdownCommand = "shutdown.exe -s -t 0"; } else { throw new RuntimeException("Unsupported operating system."); } Runtime.getRuntime().exec(shutdownCommand); System.exit(0); } The specific shutdown commands may require different paths or administrative privileges. A: I use this program to shut down the computer in X minutes. public class Shutdown { public static void main(String[] args) { int minutes = Integer.valueOf(args[0]); Timer timer = new Timer(); timer.schedule(new TimerTask() { @Override public void run() { ProcessBuilder processBuilder = new ProcessBuilder("shutdown", "/s"); try { processBuilder.start(); } catch (IOException e) { throw new RuntimeException(e); } } }, minutes * 60 * 1000); System.out.println(" Shutting down in " + minutes + " minutes"); } } A: Better to use .startsWith than .equals ... 
String osName = System.getProperty("os.name"); if (osName.startsWith("Win")) { shutdownCommand = "shutdown.exe -s -t 0"; } else if (osName.startsWith("Linux") || osName.startsWith("Mac")) { shutdownCommand = "shutdown -h now"; } else { System.err.println("Shutdown unsupported operating system ..."); //closeApp(); } Works fine. Ra. A: Here is an example using Apache Commons Lang's SystemUtils: public static boolean shutdown(int time) throws IOException { String shutdownCommand = null, t = time == 0 ? "now" : String.valueOf(time); if(SystemUtils.IS_OS_AIX) shutdownCommand = "shutdown -Fh " + t; else if(SystemUtils.IS_OS_FREE_BSD || SystemUtils.IS_OS_LINUX || SystemUtils.IS_OS_MAC || SystemUtils.IS_OS_MAC_OSX || SystemUtils.IS_OS_NET_BSD || SystemUtils.IS_OS_OPEN_BSD || SystemUtils.IS_OS_UNIX) shutdownCommand = "shutdown -h " + t; else if(SystemUtils.IS_OS_HP_UX) shutdownCommand = "shutdown -hy " + t; else if(SystemUtils.IS_OS_IRIX) shutdownCommand = "shutdown -y -g " + t; else if(SystemUtils.IS_OS_SOLARIS || SystemUtils.IS_OS_SUN_OS) shutdownCommand = "shutdown -y -i5 -g" + t; else if(SystemUtils.IS_OS_WINDOWS) shutdownCommand = "shutdown.exe /s /t " + t; else return false; Runtime.getRuntime().exec(shutdownCommand); return true; } This method takes into account a whole lot more operating systems than any of the above answers. It also looks a lot nicer and is more reliable than checking the os.name property. Edit: Supports delay and all versions of Windows (inc. 8/10). A: The quick answer is no. The only way to do it is by invoking the OS-specific commands that will cause the computer to shut down, assuming your application has the necessary privileges to do it. This is inherently non-portable, so you'd need either to know where your application will run or have different methods for different OSs and detect which one to use. A: You can use JNI to do it in whatever way you'd do it with C/C++. A: On Windows Embedded, by default, there is no shutdown command in cmd. 
In such a case you need to add this command manually, or use the function ExitWindowsEx from win32 (user32.lib) via JNA (if you want more Java) or JNI (if it is easier for you to set privileges in C code). A: An easy single line: Runtime.getRuntime().exec("shutdown -s -t 0"); but it only works on Windows
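The pick-a-command-per-platform pattern from the answers above translates directly to other languages; here is a hedged Python sketch that only builds the command line (shutdown_command is a name invented for this example, the commands mirror the ones quoted above, and nothing is executed, so running it will not power anything off):

```python
import platform

def shutdown_command(system=None):
    # Map a platform name (as reported by platform.system()) to the
    # shutdown command line used in the answers above.  This only
    # BUILDS the command; it deliberately never executes it.
    system = system or platform.system()
    if system == "Windows":
        return ["shutdown.exe", "/s", "/t", "0"]
    if system in ("Linux", "Darwin"):  # Darwin == Mac OS X
        return ["shutdown", "-h", "now"]
    raise RuntimeError("Unsupported operating system: " + system)

# To really shut the machine down (with sufficient privileges):
#   import subprocess
#   subprocess.run(shutdown_command(), check=True)
print(shutdown_command("Linux"))  # ['shutdown', '-h', 'now']
```

The same caveats apply as for the Java versions: the command may need a full path on some systems, and it will fail without administrative privileges.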
{ "language": "en", "url": "https://stackoverflow.com/questions/25637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: When is NavigationService initialized? I want to catch the NavigationService.Navigating event from my Page, to prevent the user from navigating forward. I have an event handler defined thusly: void PreventForwardNavigation(object sender, NavigatingCancelEventArgs e) { if (e.NavigationMode == NavigationMode.Forward) { e.Cancel = true; } } ... and that works fine. However, I am unsure exactly where to place this code: NavigationService.Navigating += PreventForwardNavigation; If I place it in the constructor of the page, or the Initialized event handler, then NavigationService is still null and I get a NullReferenceException. However, if I place it in the Loaded event handler for the Page, then it is called every time the page is navigated to. If I understand right, that means I'm handling the same event multiple times. Am I ok to add the same handler to the event multiple times (as would happen were I to use the page's Loaded event to hook it up)? If not, is there some place in between Initialized and Loaded where I can do this wiring? A: @Espo your link helped me find the workaround. I call it a workaround because it's ugly, but it's what MS themselves do in their documentation: public MyPage() // ctor { InitializeComponent(); this.Loaded += delegate { NavigationService.Navigating += MyNavHandler; }; this.Unloaded += delegate { NavigationService.Navigating -= MyNavHandler; }; } So you basically have to unsubscribe from the navigation service's events when your page is unloaded. +1 to your response for helping me find it. I don't seem to be able to mark my own response as the "accepted answer" so I guess I'll leave it for now. A: NavigationService.Navigate triggers both a NavigationService.Navigating event AND an Application.Navigating event. 
I solved this problem with the following: public class PageBase : Page { static PageBase() { Application.Current.Navigating += NavigationService_Navigating; } protected static void NavigationService_Navigating(object sender, NavigatingCancelEventArgs e) { // put your event handler code here... } }
{ "language": "en", "url": "https://stackoverflow.com/questions/25642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What could prevent OpenGL glDrawPixels from working on some video cards? The following code writes no data to the back buffer on Intel integrated video cards, for example, on a MacBook. On ATI cards, such as in the iMac, it draws to the back buffer. The width and height are correct (an 800x600 buffer) and m_PixelBuffer is correctly filled with 0xAA00AA00. My best guess so far is that there is something amiss with needing glWindowPos set. I do not currently set it (or the raster position), and when I get GL_CURRENT_RASTER_POSITION I noticed that the default on the ATI card is 0,0,0,0 and on the Intel it's 0,0,0,1. When I set the raster pos on the ATI card to 0,0,0,1 I get the same result as the Intel card, nothing drawn to the back buffer. Is there some transform state I'm missing? This is a 2D application so the view transform is a very simple glOrtho. glDrawPixels(GetBufferWidth(), GetBufferHeight(), GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, m_PixelBuffer); Any more info I can provide, please ask. I'm pretty much an OpenGL and Mac newb so I don't know if I'm providing enough information. A: I've always had problems with OpenGL implementations from Intel, though I'm not sure that's your problem this time. I think you're running into some byte-order issues. Give this a read and feel free to experiment with different constants for packing and color order. http://developer.apple.com/documentation/MacOSX/Conceptual/universal_binary/universal_binary_tips/chapter_5_section_25.html I know it's an OS X guide, you can probably find similar OpenGL articles on other sites for other platforms. This should be applicable. A: "I've always had problems with OpenGL implementations from Intel" 
I think you're running into some byte-order issues That was my first inclination, and I've tried packing differently, with no result. I also tried packing the buffer with values that would present a usable alpha if swizzled, with no result. This is why I'm barking up the raster pos tree, but I'm still honestly not 100% sure. Note that I'm targeting only Intel Macs if that makes a difference. Thanks for the link, it was a good read, and good to tuck away for future reference. I'd upmod but I can't until I get 3 more rep points :) A: It's highly unlikely that a basic function like glDrawPixels might not be working. Have you tried some really simple settings like GL_RGB or GL_RGBA for format and GL_UNSIGNED_BYTE or GL_FLOAT for type? If not, can you share with us the smallest possible program which replicates your problem? A: The default raster position should be (0,0,0,1), but you can reset it to make sure. Just before calling glDrawPixels(), try GLint valid; glGet(GL_CURRENT_RASTER_POSITION_VALID, &valid); This should tell you if the current raster position is valid. If it is, then this is not your problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/25646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Height of NSTextView with one line? I want to programatically create an NSTextView. How can I determine the correct frame height so that the view displays one line of text in the current default font? A: The NSFont class has a method that can give you the size of a rectangle that would enclose a specific attributed string. Get the font used by your text view, create a string that serves as a reasonable example of what will be in the text view, and use that to inform your frame height. (The frame height will need to be some number of points larger than the actual rectangle the string would be displayed in.) Alternately, you can get the various metrics from the font and attempt to calculate a reasonable frame from that. That might or might not work; for example, a font like Apple Chancery has a huge amount of variation depending on the glyphs that are being rendered, where they are in a word, and so on; I don't know that you can calculate what the needed size would be in advance without knowing exactly what you were going to render. A: It would be more normal to be using an NSTextField than an NSTextView for a single line of text. With NSTextField, just do the following: [textField setFont:myFont]; [textField sizeToFit]; Oh, and there is no built-in 'current default font'. If an application has such a concept, it needs to track it itself. The font panel doesn't read or write to anything global, it's used to operate on specific text objects.
{ "language": "en", "url": "https://stackoverflow.com/questions/25652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Hide a column in ASP.NET Dynamic Data Is there any way to apply an attribute to a model file in ASP.NET Dynamic Data to hide the column? For instance, I can currently set the display name of a column like this: [DisplayName("Last name")] public object Last_name { get; set; } Is there a similar way to hide a column? Edit: Many thanks to Christian Hagelid for going the extra mile and giving a spot-on answer :-) A: Had no idea what ASP.NET Dynamic Data was, so you prompted me to do some research :) Looks like the property you are looking for is [ScaffoldColumn(false)] There is also a similar property for tables [ScaffoldTable(false)] source A: A much, much easier method: If you want to only show certain columns in the List page, but all or others in the Details, etc. pages, see How do I hide a column only on the list page in ASP.NET Dynamic Data? Simply set AutoGenerateColumns="false" in the GridView control, then define exactly the columns you want: <Columns> ... <asp:DynamicField DataField="FirstName" HeaderText="First Name" /> <asp:DynamicField DataField="LastName" HeaderText="Last Name" /> </Columns>
{ "language": "en", "url": "https://stackoverflow.com/questions/25653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: pyGame within a pyGTK application What is the best way to use PyGame (SDL) within a PyGTK application? I'm searching for a method that allows me to have a drawing area in the GTK window and at the same time be able to manage both GTK and SDL events. A: I've never attempted it myself, but hearing plenty about other people who've tried, it's not a road you want to go down. There is the alternative of putting the GUI in pygame itself. There are plenty of GUI toolkits built specifically for pygame that you could use. Most of them are rather unfinished, but there are 2 big, actively maintained ones: PGU and OcempGUI. The full list on the pygame site is here. A: You may be interested in this message thread. Looks like they recommend against it. A: PyGame works much better when it can manage its own window, or even better, use the whole screen. GTK has flexible enough widgets to allow creation of a drawing area. This page may help, though, if you want to try it. A: There's a simple solution that might work for you. Write the PyGTK stuff and PyGame stuff as separate applications. Then call the PyGame application from the PyGTK application, using os.system. If you need to share data between the two, then use a database, pipes or IPC. 
A: http://faq.pygtk.org/index.py?file=faq23.042.htp&req=show mentions it all: You need to create a drawing area and set the environment variable SDL_WINDOWID after it's realized: import os import gobject import gtk import pygame WINX = 400 WINY = 200 window = gtk.Window() window.connect('delete-event', gtk.main_quit) window.set_resizable(False) area = gtk.DrawingArea() area.set_app_paintable(True) area.set_size_request(WINX, WINY) window.add(area) area.realize() # Force SDL to write on our drawing area os.putenv('SDL_WINDOWID', str(area.window.xid)) # We need to flush the XLib event loop otherwise we can't # access the XWindow which set_mode() requires gtk.gdk.flush() pygame.init() pygame.display.set_mode((WINX, WINY), 0, 0) screen = pygame.display.get_surface() image_surface = pygame.image.load('foo.png') screen.blit(image_surface, (0, 0)) gobject.idle_add(pygame.display.update) window.show_all() while gtk.event_pending(): # pygame/SDL event processing goes here gtk.main_iteration(False) A: I tried doing this myself a while ago, and I never got it to work perfectly. Actually I never got it to work at all under Windows, as it kept crashing the entire OS and I ran out of patience. I continued to use it though as it was only important it ran on Linux, and was only a small project. I'd strongly recommend you investigate alternatives. It always felt like a nasty hack, and made me feel dirty. A: The Sugar project has several Activities built with PyGTK and PyGame. They wrote a support lib to achieve this, called Sugargame. You should be able to modify it for regular PyGTK apps instead of Sugar. Here's a chapter in Sugar's development book about how to use it. The lib allows for communicating events between GTK and PyGame. Enjoy!
{ "language": "en", "url": "https://stackoverflow.com/questions/25661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Python module for converting PDF to text Is there any python module to convert PDF files into text? I tried one piece of code found in Activestate which uses pypdf but the text generated had no spaces between the words and was of no use. A: I needed to convert a specific PDF to plain text within a python module. I used PDFMiner 20110515; after reading through their pdf2txt.py tool I wrote this simple snippet: from cStringIO import StringIO from pdfminer.pdfinterp import PDFResourceManager, process_pdf from pdfminer.converter import TextConverter from pdfminer.layout import LAParams def to_txt(pdf_path): input_ = file(pdf_path, 'rb') output = StringIO() manager = PDFResourceManager() converter = TextConverter(manager, output, laparams=LAParams()) process_pdf(manager, converter, input_) return output.getvalue() A: Since none of these solutions support the latest version of PDFMiner I wrote a simple solution that will return the text of a pdf using PDFMiner. This will work for those who are getting import errors with process_pdf: import sys from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.pdfpage import PDFPage from pdfminer.converter import XMLConverter, HTMLConverter, TextConverter from pdfminer.layout import LAParams from cStringIO import StringIO def pdfparser(data): fp = file(data, 'rb') rsrcmgr = PDFResourceManager() retstr = StringIO() codec = 'utf-8' laparams = LAParams() device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams) # Create a PDF interpreter object. interpreter = PDFPageInterpreter(rsrcmgr, device) # Process each page contained in the document. 
for page in PDFPage.get_pages(fp): interpreter.process_page(page) data = retstr.getvalue() print data if __name__ == '__main__': pdfparser(sys.argv[1]) See the code below, which works for Python 3: import sys from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.pdfpage import PDFPage from pdfminer.converter import XMLConverter, HTMLConverter, TextConverter from pdfminer.layout import LAParams import io def pdfparser(data): fp = open(data, 'rb') rsrcmgr = PDFResourceManager() retstr = io.StringIO() codec = 'utf-8' laparams = LAParams() device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams) # Create a PDF interpreter object. interpreter = PDFPageInterpreter(rsrcmgr, device) # Process each page contained in the document. for page in PDFPage.get_pages(fp): interpreter.process_page(page) data = retstr.getvalue() print(data) if __name__ == '__main__': pdfparser(sys.argv[1]) A: Repurposing the pdf2txt.py code that comes with pdfminer; you can make a function that will take a path to the pdf; optionally, an outtype (txt|html|xml|tag) and opts like the commandline pdf2txt {'-o': '/path/to/outfile.txt' ...}. By default, you can call: convert_pdf(path) A text file will be created, a sibling on the filesystem to the original pdf. 
def convert_pdf(path, outtype='txt', opts={}): import sys from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter, process_pdf from pdfminer.converter import XMLConverter, HTMLConverter, TextConverter, TagExtractor from pdfminer.layout import LAParams from pdfminer.pdfparser import PDFDocument, PDFParser from pdfminer.pdfdevice import PDFDevice from pdfminer.cmapdb import CMapDB outfile = path[:-3] + outtype outdir = '/'.join(path.split('/')[:-1]) debug = 0 # input option password = '' pagenos = set() maxpages = 0 # output option codec = 'utf-8' pageno = 1 scale = 1 showpageno = True laparams = LAParams() for (k, v) in opts: if k == '-d': debug += 1 elif k == '-p': pagenos.update( int(x)-1 for x in v.split(',') ) elif k == '-m': maxpages = int(v) elif k == '-P': password = v elif k == '-o': outfile = v elif k == '-n': laparams = None elif k == '-A': laparams.all_texts = True elif k == '-D': laparams.writing_mode = v elif k == '-M': laparams.char_margin = float(v) elif k == '-L': laparams.line_margin = float(v) elif k == '-W': laparams.word_margin = float(v) elif k == '-O': outdir = v elif k == '-t': outtype = v elif k == '-c': codec = v elif k == '-s': scale = float(v) # CMapDB.debug = debug PDFResourceManager.debug = debug PDFDocument.debug = debug PDFParser.debug = debug PDFPageInterpreter.debug = debug PDFDevice.debug = debug # rsrcmgr = PDFResourceManager() if not outtype: outtype = 'txt' if outfile: if outfile.endswith('.htm') or outfile.endswith('.html'): outtype = 'html' elif outfile.endswith('.xml'): outtype = 'xml' elif outfile.endswith('.tag'): outtype = 'tag' if outfile: outfp = file(outfile, 'w') else: outfp = sys.stdout if outtype == 'txt': device = TextConverter(rsrcmgr, outfp, codec=codec, laparams=laparams) elif outtype == 'xml': device = XMLConverter(rsrcmgr, outfp, codec=codec, laparams=laparams, outdir=outdir) elif outtype == 'html': device = HTMLConverter(rsrcmgr, outfp, codec=codec, scale=scale, laparams=laparams, outdir=outdir) 
elif outtype == 'tag': device = TagExtractor(rsrcmgr, outfp, codec=codec) else: return usage() fp = file(path, 'rb') process_pdf(rsrcmgr, device, fp, pagenos, maxpages=maxpages, password=password) fp.close() device.close() outfp.close() return A: Pdftotext An open source program (part of Xpdf) which you could call from python (not what you asked for but might be useful). I've used it with no problems. I think Google uses it in Google Desktop. A: pyPDF works fine (assuming that you're working with well-formed PDFs). If all you want is the text (with spaces), you can just do: import pyPdf pdf = pyPdf.PdfFileReader(open(filename, "rb")) for page in pdf.pages: print page.extractText() You can also easily get access to the metadata, image data, and so forth. A comment in the extractText code notes: Locate all text drawing commands, in the order they are provided in the content stream, and extract the text. This works well for some PDF files, but poorly for others, depending on the generator used. This will be refined in the future. Do not rely on the order of text coming out of this function, as it will change if this function is made more sophisticated. Whether or not this is a problem depends on what you're doing with the text (e.g. if the order doesn't matter, it's fine, or if the generator adds text to the stream in the order it will be displayed, it's fine). I have pyPdf extraction code in daily use, without any problems. A: You can also quite easily use pdfminer as a library. You have access to the pdf's content model, and can create your own text extraction. I did this to convert pdf contents to semi-colon separated text, using the code below. The function simply sorts the TextItem content objects according to their y and x coordinates, and outputs items with the same y coordinate as one text line, separating the objects on the same line with ';' characters. 
Using this approach, I was able to extract text from a PDF whose content no other tool could extract in a form suitable for further parsing. Other tools I tried include pdftotext, ps2ascii and the online tool pdftextonline.com. pdfminer is an invaluable tool for pdf-scraping.

    def pdf_to_csv(filename):
        from pdflib.page import TextItem, TextConverter
        from pdflib.pdfparser import PDFDocument, PDFParser
        from pdflib.pdfinterp import PDFResourceManager, PDFPageInterpreter

        class CsvConverter(TextConverter):
            def __init__(self, *args, **kwargs):
                TextConverter.__init__(self, *args, **kwargs)

            def end_page(self, i):
                from collections import defaultdict
                lines = defaultdict(lambda: {})
                for child in self.cur_item.objs:
                    if isinstance(child, TextItem):
                        (_, _, x, y) = child.bbox
                        line = lines[int(-y)]
                        line[x] = child.text
                for y in sorted(lines.keys()):
                    line = lines[y]
                    self.outfp.write(";".join(line[x] for x in sorted(line.keys())))
                    self.outfp.write("\n")

        # ... the following part of the code is a remix of the
        # convert() function in the pdfminer/tools/pdf2text module
        rsrc = PDFResourceManager()
        outfp = StringIO()
        device = CsvConverter(rsrc, outfp, "ascii")

        doc = PDFDocument()
        fp = open(filename, 'rb')
        parser = PDFParser(doc, fp)
        doc.initialize('')

        interpreter = PDFPageInterpreter(rsrc, device)

        for i, page in enumerate(doc.get_pages()):
            outfp.write("START PAGE %d\n" % i)
            interpreter.process_page(page)
            outfp.write("END PAGE %d\n" % i)

        device.close()
        fp.close()

        return outfp.getvalue()

UPDATE: The code above is written against an old version of the API, see my comment below.
A: slate is a project that makes it very simple to use PDFMiner from a library:

    >>> import slate
    >>> with open('example.pdf') as f:
    ...    doc = slate.PDF(f)
    ...
    >>> doc
    [..., ..., ...]
    >>> doc[1]
    'Text from page 2...'

A: Try PDFMiner. It can extract text from PDF files as HTML, SGML or "Tagged PDF" format. The Tagged PDF format seems to be the cleanest, and stripping out the XML tags leaves just the bare text.
A Python 3 version is available under:

* https://github.com/pdfminer/pdfminer.six

A: The PDFMiner package has changed since codeape posted.
EDIT (again): PDFMiner has been updated again in version 20100213. You can check the version you have installed with the following:

    >>> import pdfminer
    >>> pdfminer.__version__
    '20100213'

Here's the updated version (with comments on what I changed/added):

    def pdf_to_csv(filename):
        from cStringIO import StringIO  #<-- added so you can copy/paste this to try it
        from pdfminer.converter import LTTextItem, TextConverter
        from pdfminer.pdfparser import PDFDocument, PDFParser
        from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter

        class CsvConverter(TextConverter):
            def __init__(self, *args, **kwargs):
                TextConverter.__init__(self, *args, **kwargs)

            def end_page(self, i):
                from collections import defaultdict
                lines = defaultdict(lambda: {})
                for child in self.cur_item.objs:
                    if isinstance(child, LTTextItem):
                        (_, _, x, y) = child.bbox  #<-- changed
                        line = lines[int(-y)]
                        line[x] = child.text.encode(self.codec)  #<-- changed
                for y in sorted(lines.keys()):
                    line = lines[y]
                    self.outfp.write(";".join(line[x] for x in sorted(line.keys())))
                    self.outfp.write("\n")

        # ... the following part of the code is a remix of the
        # convert() function in the pdfminer/tools/pdf2text module
        rsrc = PDFResourceManager()
        outfp = StringIO()
        device = CsvConverter(rsrc, outfp, codec="utf-8")  #<-- changed
        # because my test documents are utf-8 (note: utf-8 is the default codec)

        doc = PDFDocument()
        fp = open(filename, 'rb')
        parser = PDFParser(fp)    #<-- changed
        parser.set_document(doc)  #<-- added
        doc.set_parser(parser)    #<-- added
        doc.initialize('')

        interpreter = PDFPageInterpreter(rsrc, device)

        for i, page in enumerate(doc.get_pages()):
            outfp.write("START PAGE %d\n" % i)
            interpreter.process_page(page)
            outfp.write("END PAGE %d\n" % i)

        device.close()
        fp.close()

        return outfp.getvalue()

Edit (yet again): Here is an update for the latest version in pypi, 20100619p1.
In short, I replaced LTTextItem with LTChar and passed an instance of LAParams to the CsvConverter constructor.

    def pdf_to_csv(filename):
        from cStringIO import StringIO
        from pdfminer.converter import LTChar, TextConverter  #<-- changed
        from pdfminer.layout import LAParams
        from pdfminer.pdfparser import PDFDocument, PDFParser
        from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter

        class CsvConverter(TextConverter):
            def __init__(self, *args, **kwargs):
                TextConverter.__init__(self, *args, **kwargs)

            def end_page(self, i):
                from collections import defaultdict
                lines = defaultdict(lambda: {})
                for child in self.cur_item.objs:
                    if isinstance(child, LTChar):  #<-- changed
                        (_, _, x, y) = child.bbox
                        line = lines[int(-y)]
                        line[x] = child.text.encode(self.codec)
                for y in sorted(lines.keys()):
                    line = lines[y]
                    self.outfp.write(";".join(line[x] for x in sorted(line.keys())))
                    self.outfp.write("\n")

        # ... the following part of the code is a remix of the
        # convert() function in the pdfminer/tools/pdf2text module
        rsrc = PDFResourceManager()
        outfp = StringIO()
        device = CsvConverter(rsrc, outfp, codec="utf-8", laparams=LAParams())  #<-- changed
        # because my test documents are utf-8 (note: utf-8 is the default codec)

        doc = PDFDocument()
        fp = open(filename, 'rb')
        parser = PDFParser(fp)
        parser.set_document(doc)
        doc.set_parser(parser)
        doc.initialize('')

        interpreter = PDFPageInterpreter(rsrc, device)

        for i, page in enumerate(doc.get_pages()):
            outfp.write("START PAGE %d\n" % i)
            if page is not None:
                interpreter.process_page(page)
            outfp.write("END PAGE %d\n" % i)

        device.close()
        fp.close()

        return outfp.getvalue()

EDIT (one more time): Updated for version 20110515 (thanks to Oeufcoque Penteano!):

    def pdf_to_csv(filename):
        from cStringIO import StringIO
        from pdfminer.converter import LTChar, TextConverter
        from pdfminer.layout import LAParams
        from pdfminer.pdfparser import PDFDocument, PDFParser
        from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter

        class CsvConverter(TextConverter):
            def __init__(self, *args, **kwargs):
                TextConverter.__init__(self, *args, **kwargs)

            def end_page(self, i):
                from collections import defaultdict
                lines = defaultdict(lambda: {})
                for child in self.cur_item._objs:  #<-- changed
                    if isinstance(child, LTChar):
                        (_, _, x, y) = child.bbox
                        line = lines[int(-y)]
                        line[x] = child._text.encode(self.codec)  #<-- changed
                for y in sorted(lines.keys()):
                    line = lines[y]
                    self.outfp.write(";".join(line[x] for x in sorted(line.keys())))
                    self.outfp.write("\n")

        # ... the following part of the code is a remix of the
        # convert() function in the pdfminer/tools/pdf2text module
        rsrc = PDFResourceManager()
        outfp = StringIO()
        device = CsvConverter(rsrc, outfp, codec="utf-8", laparams=LAParams())
        # because my test documents are utf-8 (note: utf-8 is the default codec)

        doc = PDFDocument()
        fp = open(filename, 'rb')
        parser = PDFParser(fp)
        parser.set_document(doc)
        doc.set_parser(parser)
        doc.initialize('')

        interpreter = PDFPageInterpreter(rsrc, device)

        for i, page in enumerate(doc.get_pages()):
            outfp.write("START PAGE %d\n" % i)
            if page is not None:
                interpreter.process_page(page)
            outfp.write("END PAGE %d\n" % i)

        device.close()
        fp.close()

        return outfp.getvalue()

A: PDFminer gave me perhaps one line [page 1 of 7...] on every page of a pdf file I tried with it. The best answer I have so far is pdftoipe, or the C++ code it's based on, Xpdf. See my question for what the output of pdftoipe looks like.
A: Additionally there is PDFTextStream, which is a commercial Java library that can also be used from Python.
A: I have used pdftohtml with the -xml argument, read the result with subprocess.Popen(), and that will give you the x coord, y coord, width, height, and font of every snippet of text in the pdf. I think this is what 'evince' probably uses too, because the same error messages spew out. If you need to process columnar data, it gets slightly more complicated, as you have to invent an algorithm that suits your pdf file.
The problem is that the programs that make PDF files don't really necessarily lay out the text in any logical format. You can try simple sorting algorithms and it works sometimes, but there can be little 'stragglers' and 'strays', pieces of text that don't get put in the order you thought they would. So you have to get creative. It took me about 5 hours to figure out one for the pdf's I was working on. But it works pretty good now. Good luck. A: Found that solution today. Works great for me. Even rendering PDF pages to PNG images. http://www.swftools.org/gfx_tutorial.html
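The coordinate-grouping step at the heart of the CsvConverter answers above, and of any hand-rolled layout algorithm for columnar text, can be sketched independently of any PDF library. Everything here is illustrative: the snippet tuples stand in for whatever bbox/text pairs your extraction tool hands you.

```python
from collections import defaultdict

def snippets_to_lines(snippets):
    """Group (x, y, text) snippets into reading-order lines.

    PDF y coordinates grow upward, so keying on -y sorts lines
    top-to-bottom; within a line, x sorts snippets left-to-right.
    """
    lines = defaultdict(dict)
    for x, y, text in snippets:
        lines[int(-y)][x] = text
    return [";".join(row[x] for x in sorted(row))
            for _, row in sorted(lines.items())]

# Two "rows" of a hypothetical two-column PDF page:
snippets = [(10, 700, "Name"), (200, 700, "Price"),
            (10, 680, "Widget"), (200, 680, "9.99")]
print(snippets_to_lines(snippets))  # ['Name;Price', 'Widget;9.99']
```

Real pages rarely align this cleanly; rounding y with int() is the simplest way to absorb small vertical jitter, and fancier grouping (tolerance bands, column detection) slots in at the same place.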
{ "language": "en", "url": "https://stackoverflow.com/questions/25665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "414" }
Q: ColdFusion - When to use the "request" scope? Been going over my predecessor's code and see usage of the "request" scope frequently. What is the appropriate usage of this scope?
A: There are several scopes that are available to any portion of your code: Session, Client, Cookie, Application, and Request. Some are inadvisable to use in certain ways (i.e. using Request or Application scope inside your Custom Tags or CFC's; this is coupling, violates encapsulation principles, and is considered a bad practice), and some have special purposes: Cookie is persisted on the client machine as physical cookies, and Session scoped variables are user-specific and expire with the user's session on the website. If a variable is extremely unlikely to change (constant for all intents and purposes) and can simply be initialized on application startup and never written again, generally you should put it into Application scope because this persists it between every user and every session. When properly implemented it is written once and read N times. A proper implementation of Application variables in Application.cfm might look like this:

    <cfif not structKeyExists(application, "dsn")>
        <cflock scope="application" type="exclusive" timeout="30">
            <cfif not structKeyExists(application, "dsn")>
                <cfset application.dsn = "MyDSN" />
                <cfset foo = "bar" />
                <cfset x = 5 />
            </cfif>
        </cflock>
    </cfif>

Note that the existence of the variable in the application scope is checked before and after the lock, so that if two users create a race condition at application startup, only one of them will end up setting the application variables. The benefit of this approach is that it won't constantly refresh these stored variables on every request, wasting the user's time and the server's processing cycles. The trade-off is that it is a little verbose and complex. This was greatly simplified with the addition of Application.cfc.
Now, you can specify which variables are created on application startup and don't have to worry about locking and checking for existence and all of that fun stuff:

    <cfcomponent>
        <cfset this.name = "myApplicationName" />

        <cffunction name="onApplicationStart" returnType="boolean" output="false">
            <cfset application.dsn = "MyDSN" />
            <cfset foo = "bar" />
            <cfset x = 5 />
            <cfreturn true />
        </cffunction>
    </cfcomponent>

For more information on Application.cfc including all of the various special functions available and every little detail about what and how to use it, I recommend this post on Raymond Camden's blog. To summarize, request scope is available everywhere in your code, but that doesn't necessarily make it "right" to use it everywhere. Chances are that your predecessor was using it to break encapsulation, and that can be cumbersome to refactor out. You may be best off leaving it as-is, but understanding which scope is the best tool for the job will definitely make your future code better.
A: This is a very subjective question, and some would even argue that it is never "appropriate" to use the request scope in modern ColdFusion applications. With that disclaimer out of the way, let's define what the request scope is and where it would be useful. The request scope is the absolute global scope in a single ColdFusion page request. It is not a shared scope, like application, server, client, and session scopes, so locking is not necessary to make it threadsafe (unless you spawn worker threads from a single request using CF8's CFTHREAD tag). As a global scope, it is a very convenient way to persist variables through any level in the request's stack without having to pass them from parent to caller. This was a very common way to persist variables through nested or recursive Custom Tags in older CF apps.
Note that while many applications use this scope to store application-level variables (configuration settings, for example), the big (and sometimes subtle) difference between the request scope and the application scope is that the value of the same request-scoped variable can differ between individual page requests. I would guess that your predecessor used this scope as a means to conveniently set variables that needed to survive the jump between encapsulated or nested units of code without having to pass them around explicitly.
A: Okay, I just wanted to comment on your code. Please forgive me if I seem crazy. But you already verified that with structKeyExists at the beginning. Since you know it's going to be true, it wouldn't make sense to run another check. So my version of it would be this... But that's just me.

    <cfif not structKeyExists(application, "dsn")>
        <cflock scope="application" type="exclusive" timeout="30">
            <cfset application.dsn = "MyDSN" />
            <cfset foo = "bar" />
            <cfset x = 5 />
        </cflock>
    </cfif>

Alright.
A: I've been writing my company's framework that will be used to power our site. I use the request variable to set certain data that would be available to the other CFCs. I had to do this so the data would be available throughout the application, without the need to continually pass in the data. I honestly believe that using the request and application scopes should not be a problem, as long as it's a static function component. I'm not sure if I am wrong in my thinking with this, but once I release the framework we will see.
{ "language": "en", "url": "https://stackoverflow.com/questions/25672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: MVC - where to implement form validation (server-side)? In coding a traditional MVC application, what is the best practice for coding server-side form validations? Does the code belong in the controller, or the model layer? And why?
A: From Wikipedia:

    Model-view-controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application and the business rules used to manipulate the data; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages details involving the communication to the model of user actions such as keystrokes and mouse movements.

Thus, the model: it holds the application's data and the business rules.
A: I completely agree with Josh. However, you may create a kind of validation layer between Controller and Model so that most syntactical validations can be carried out on the data before it reaches the model. For example, the validation layer would validate the date format, amount format, mandatory fields, etc., so that the model can concentrate purely on business validations, like "amount x should be greater than amount y".
A: My experience with MVC thus far consists entirely of Rails. Rails does its validation 100% in the Model. For the most part this works very well. I'd say 9 out of 10 times it's all you need. There are some areas however where what you're submitting from a form doesn't match up with your model properly. There may be some additional filtering/rearranging, and so on. The best way to solve these situations I've found is to create faux-model objects, which basically act like Model objects but map 1-to-1 with the form data.
These faux-model objects don't actually save anything; they're just a bucket for the data with validations attached. An example of such a thing (in Rails) is ActiveForm. Once the data gets into those (and is valid) it's usually a pretty simple step to transfer it directly across to your actual models.
A: The basic syntax check should be in the controller, as it translates the user input for the model. The model needs to do the real data validation.
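The faux-model idea described above is not Rails-specific; any language can use a throwaway object that mirrors the form fields, validates them, and never touches storage. A minimal sketch in Python (the class and field names are hypothetical):

```python
class SignupForm:
    """Maps 1-to-1 with the submitted form; never saves anything."""

    def __init__(self, params):
        self.email = params.get("email", "").strip()
        self.age = params.get("age", "")
        self.errors = []

    def is_valid(self):
        self.errors = []
        if "@" not in self.email:
            self.errors.append("email looks invalid")
        if not str(self.age).isdigit():
            self.errors.append("age must be a whole number")
        return not self.errors

form = SignupForm({"email": "user@example.com", "age": "30"})
print(form.is_valid())  # True
```

Once is_valid() passes, copying the cleaned fields onto the real model is the "pretty simple step" the answer describes.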
{ "language": "en", "url": "https://stackoverflow.com/questions/25675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What is the best free memory leak detector for a C/C++ program and its plug-in DLLs? I have a .exe and many plug-in .dll modules that the .exe loads. (I have source for both.) A cross-platform (with source) solution would be ideal, but the platform can be narrowed to WinXP and Visual Studio (7.1/2003 in my case). The built-in VS leak detector only gives the line where new/malloc was called from, but I have a wrapper for allocations, so a full symbolic stack trace would be best. The detector should also be able to detect a leak in both the .exe and its accompanying plug-in .dll modules.
A: I have had good experiences with Rational Purify. I have also heard nice things about Valgrind.
A: I personally use Visual Leak Detector, though it can cause large delays when large blocks are leaked (it displays the contents of the entire leaked block).
A: As for me, I use Deleaker to locate leaks. I am pleased.
A: My freely available memory profiler MemPro allows you to compare 2 snapshots and gives stack traces for all of the allocations.
A: If you don't want to recompile (as Visual Leak Detector requires) I would recommend WinDbg, which is both powerful and fast (though it's not as easy to use as one could desire). On the other hand, if you don't want to mess with WinDbg, you can take a look at UMDH, which is also developed by Microsoft and is easier to learn. Take a look at these links in order to learn more about WinDbg, memory leaks and memory management in general:

* Memory Leak Detection Using Windbg
* Memory Leak Detection in MFC
* Common WinDbg Commands (Thematically Grouped)
* C/C++ Memory Corruption And Memory Leaks
* The Memory Management Reference
* Using LeakDiag to Debug Unmanaged Memory Leaks
* Heap: Pleasures and Pains

A: Try Jochen Kalmbach's Memory Leak Detector on Code Project. The URL to the latest version was somewhere in the comments when I last checked.
A: As several of my friends have posted, there are many free leak detectors for C++.
All of them will cause overhead when running your code, making it approximately 20% slower. I prefer Visual Leak Detector for Visual C++ 2008/2010/2012; you can download the source code from - enter link description here.
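The asker's setup, a wrapper around allocations plus a symbolic stack trace, is essentially the bookkeeping these tools automate. As a language-neutral illustration of that bookkeeping (not a substitute for a C/C++ detector), Python's standard tracemalloc records a stack trace per allocation and can diff two snapshots, much like MemPro's two-snapshot comparison:

```python
import tracemalloc

tracemalloc.start(10)          # record up to 10 stack frames per allocation

before = tracemalloc.take_snapshot()
leaky = [bytearray(10_000) for _ in range(100)]   # simulated "leak"
after = tracemalloc.take_snapshot()

# Diff the snapshots: the biggest growth points back at the line above,
# with a file:line traceback attached to the statistic.
top = after.compare_to(before, "lineno")[0]
print(top.size_diff > 0)       # True
```

The C/C++ tools in this thread do the same thing at the malloc/new layer, which is why a full symbolic stack trace per allocation is exactly what distinguishes them from the built-in VS leak dump.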
{ "language": "en", "url": "https://stackoverflow.com/questions/25730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: What's the difference between a string constant and a string literal? I'm learning objective-C and Cocoa and have come across this statement:

    The Cocoa frameworks expect that global string constants rather than string literals are used for dictionary keys, notification and exception names, and some method parameters that take strings.

I've only worked in higher level languages so have never had to consider the details of strings that much. What's the difference between a string constant and string literal?
A: In Objective-C, the syntax @"foo" is an immutable, literal instance of NSString. It does not make a constant string from a string literal as Mike assumes. Objective-C compilers typically do intern literal strings within compilation units — that is, they coalesce multiple uses of the same literal string — and it's possible for the linker to do additional interning across the compilation units that are directly linked into a single binary. (Since Cocoa distinguishes between mutable and immutable strings, and literal strings are always also immutable, this can be straightforward and safe.) Constant strings on the other hand are typically declared and defined using syntax like this:

    // MyExample.h - declaration, other code references this
    extern NSString * const MyExampleNotification;

    // MyExample.m - definition, compiled for other code to reference
    NSString * const MyExampleNotification = @"MyExampleNotification";

The point of the syntactic exercise here is that you can make uses of the string efficient by ensuring that there's only one instance of that string in use even across multiple frameworks (shared libraries) in the same address space. (The placement of the const keyword matters; it guarantees that the pointer itself is constant.) While burning memory isn't as big a deal as it may have been in the days of 25MHz 68030 workstations with 8MB of RAM, comparing strings for equality can take time.
Ensuring that most of the time strings that are equal will also be pointer-equal helps. Say, for example, you want to subscribe to notifications from an object by name. If you use non-constant strings for the names, the NSNotificationCenter posting the notification could wind up doing a lot of byte-by-byte string comparisons when determining who is interested in it. If most of these comparisons are short-circuited because the strings being compared have the same pointer, that can be a big win.
A: Let's use C++, since my Objective C is totally non-existent. If you stash a string into a constant variable:

    const std::string mystring = "my string";

Now when you call methods using mystring, you're using a string constant:

    someMethod(mystring);

Or, you can call those methods with the string literal directly:

    someMethod("my string");

The reason, presumably, that they encourage you to use string constants is because Objective C doesn't do "interning"; that is, when you use the same string literal in several places, it's actually a different pointer pointing to a separate copy of the string. For dictionary keys, this makes a huge difference, because if I can see the two pointers are pointing to the same thing, that's much cheaper than having to do a whole string comparison to make sure the strings have equal value. Edit: Mike, in C# strings are immutable, and literal strings with identical values all end up pointing at the same string value. I imagine that's true for other languages as well that have immutable strings. In Ruby, which has mutable strings, they offer a new data type: symbols ("foo" vs. :foo, where the former is a mutable string, and the latter is an immutable identifier often used for Hash keys).
A: Some definitions:

* A literal is a value, which is immutable by definition. eg: 10
* A constant is a read-only variable or pointer. eg: const int age = 10;
* A string literal is an expression like @"". The compiler will replace this with an instance of NSString.
A string constant is a read-only pointer to NSString. eg: NSString *const name = @"John";
Some comments on the last line:

* That's a constant pointer, not a constant object [1]. objc_msgSend [2] doesn't care if you qualify the object with const. If you want an immutable object, you have to code that immutability inside the object [3].
* All @"" expressions are indeed immutable. They are replaced [4] at compile time with instances of NSConstantString, which is a specialized subclass of NSString with a fixed memory layout [5]. This also explains why NSString is the only object that can be initialized at compile time [6].

A constant string would be const NSString* name = @"John"; which is equivalent to NSString const* name = @"John";. Here, both syntax and programmer intention are wrong: const <object> is ignored, and the NSString instance (NSConstantString) was already immutable.

[1] The keyword const applies to whatever is immediately to its left. If there is nothing to its left, it applies to whatever is immediately to its right.
[2] This is the function that the runtime uses to send all messages in Objective-C, and therefore what you can use to change the state of an object.
[3] Example: in const NSMutableArray *array = [NSMutableArray new]; [array removeAllObjects]; the const doesn't prevent the last statement.
[4] The LLVM code that rewrites the expression is RewriteModernObjC::RewriteObjCStringLiteral in RewriteModernObjC.cpp.
[5] To see the NSConstantString definition, cmd+click it in Xcode.
[6] Creating compile-time constants for other classes would be easy, but it would require the compiler to use a specialized subclass. This would break compatibility with older Objective-C versions.

Back to your quote:

    The Cocoa frameworks expect that global string constants rather than string literals are used for dictionary keys, notification and exception names, and some method parameters that take strings.

You should always prefer string constants over string literals when you have a choice.
By using string constants, you enlist the help of the compiler to check your spelling and thus avoid runtime errors. It says that literals are error-prone. But it doesn't say that they are also slower. Compare:

    // string literal
    [dic objectForKey:@"a"];

    // string constant
    NSString *const a = @"a";
    [dic objectForKey:a];

In the second case I'm using keys with const pointers, so instead of [a isEqualToString:b], I can do (a==b). The implementation of isEqualToString: compares the hash and then runs the C function strcmp, so it is slower than comparing the pointers directly. Which is why constant strings are better: they are faster to compare and less prone to errors. If you also want your constant string to be global, do it like this:

    // header
    extern NSString *const name;

    // implementation
    NSString *const name = @"john";
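The pointer-equality trick described here is not unique to Objective-C: any runtime with interned immutable strings can settle equality with an identity check before falling back to a character-by-character compare. A Python illustration using sys.intern (the key text is arbitrary):

```python
import sys

a = sys.intern("com.example.notification.UserDidLogIn")
b = sys.intern("com.example.notification.UserDidLogIn")

# Interned equal strings share one object, so identity, effectively a
# pointer comparison, is enough to decide equality.
print(a is b)   # True
```

This is the same economy that a single global NSString constant buys: one shared instance, so dictionary-key and notification-name comparisons reduce to comparing addresses.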
{ "language": "en", "url": "https://stackoverflow.com/questions/25746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: What does the @ symbol represent in objective-c? I'm learning objective-c and keep bumping into the @ symbol. It is used in different scenarios, for example at the start of a string or to synthesise accessor methods. What's does the @ symbol mean in objective-c? A: From Objective-C Tutorial: The @ Symbol, the reason it is on the front of various keywords: Using @ should make it easier to bolt an Objective-C compiler on to an existing C compiler. Because the @ isn't valid in any context in C except a string literal, the tokenizer (an early and simple step in the compiler) could be modified to simply look for the @ character outside of a string constant (the tokenizer understands string literals, so it is in a position to distinguish this). When @ is encountered the tokenizer would put the rest of the compiler in "Objective-C mode." (The Objective-C parser would be responsible for returning the compiler back to regular C mode when it detects the end of the Objective-C code). Also when seen in front of a string literal, it makes an NSString rather than a 'char *' in C. A: From Macrumors: Objective-C Tutorial, when in front of string literal: There are also @"" NSString literals. It is essentially shorthand for NSString's +stringWithUTF8String method. The @ also adds unicode support to C strings. A: From the manual: Objective-C frameworks typically do not use C-style strings. Instead, they pass strings around as NSString objects. The NSString class provides an object wrapper for strings that has all of the advantages you would expect, including built-in memory management for storing arbitrary-length strings, support for Unicode, printf-style formatting utilities, and more. Because such strings are used commonly though, Objective-C provides a shorthand notation for creating NSString objects from constant values. 
To use this shorthand, all you have to do is precede a normal, double-quoted string with the @ symbol, as shown in the following examples:

    NSString *myString = @"My String\n";
    NSString *anotherString = [NSString stringWithFormat:@"%d %@", 1, @"String"];

A: As other answers have noted, the @ symbol was convenient for adding Objective-C's superset of functionality to C because @ is not used syntactically by C. As to what it represents, that depends on the context in which it is used. The uses fall roughly into two categories (keywords and literals), and I've summarized the uses I could find below. I wrote most of this before finding NSHipster's great summary. Here's another less thorough cheat sheet. (Both of those sources call things prefixed with @ "compiler directives", but I thought compiler directives were things like #define, #ifdef, etc. If someone could weigh in on the correct terminology, I'd appreciate it.)
Objective-C Keywords
@ prefixes many Objective-C keywords:

* @interface: declares the methods and properties associated with a class
* @implementation: implements a class's declarations from @interface
* @protocol/@optional/@required: declares methods and properties that are independent of any specific class.
* @class: forward declaration of a class
* @property/@synthesize/@dynamic: declare properties in an @interface
* @try/@throw/@catch/@finally: exception handling
* @end: closes @interface, @implementation, and @protocol.
* @encode: returns a C string that encodes the internal representation of a given type
* @synchronized: ensures parallel execution exclusivity
* @selector/@protocol: return SEL or protocol pointers with a specified name
* @defs: I'm not really sure; it appears to import Objective-C class properties into a struct. NSHipster's page says it does not exist in modern Objective-C.
* @public/@package/@protected/@private: access modifiers
* @available: checks API availability.
* @autoreleasepool: creates a new autorelease scope.
Any objects that received an autorelease in the block will receive a release after exiting the block (and not before).
Objective-C Literals
@ creates Objective-C literals:

* @...: NSNumber literal

        NSNumber *fortyTwo = @42;    // equivalent to [NSNumber numberWithInt:42]
        NSNumber *yesNumber = @YES;  // equivalent to [NSNumber numberWithBool:YES]

* @(...): Boxed expressions

        // numbers.
        NSNumber *piOverTwo = @(M_PI / 2);   // [NSNumber numberWithDouble:(M_PI / 2)]
        // strings.
        NSString *path = @(getenv("PATH"));  // [NSString stringWithUTF8String:(getenv("PATH"))]
        // structs.
        NSValue *center = @(view.center);    // Point p = view.center;
                                             // [NSValue valueWithBytes:&p objCType:@encode(Point)];

* @"...": Boxed C strings
* @[]/@{}: Container literals

        NSArray *array = @[ @"Hello", NSApp, [NSNumber numberWithInt:42] ];
        NSDictionary *dictionary = @{ @"name" : NSUserName(),
                                      @"date" : [NSDate date],
                                      @"processInfo" : [NSProcessInfo processInfo] };

A: The @ character isn't used in C or C++ identifiers, so it's used to introduce Objective-C language keywords in a way that won't conflict with the other languages' keywords. This enables the "Objective" part of the language to freely intermix with the C or C++ part. Thus with very few exceptions, any time you see @ in some Objective-C code, you're looking at Objective-C constructs rather than C or C++ constructs. The major exceptions are id, Class, nil, and Nil, which are generally treated as language keywords even though they may also have a typedef or #define behind them. For example, the compiler actually does treat id specially in terms of the pointer type conversion rules it applies to declarations, as well as to the decision of whether to generate GC write barriers. Other exceptions are in, out, inout, oneway, byref, and bycopy; these are used as storage class annotations on method parameter and return types to make Distributed Objects more efficient.
(They become part of the method signature available from the runtime, which DO can look at to determine how to best serialize a transaction.) There are also the attributes within @property declarations, copy, retain, assign, readonly, readwrite, nonatomic, getter, and setter; those are only valid within the attribute section of a @property declaration.
{ "language": "en", "url": "https://stackoverflow.com/questions/25749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "153" }
Q: How do I put a space character before option text in an HTML select element? In a drop down list, I need to add spaces in front of the options in the list. I am trying <select> <option>&#32;&#32;Sample</option> </select> for adding two spaces but it displays no spaces. How can I add spaces before option texts? A: As Rob Cooper pointed out, there are some compatibility issues with older browsers (IE6 will display the actual letters "& nbsp;"). This is how I get around it in ASP.Net (I don't have VS open so I'm not sure what characters this actually gets translated to): Server.HtmlDecode("&nbsp;") A: I tried multiple things but the only thing that worked for me was to use JavaScript. Just notice that I'm using the Unicode code for space rather than the HTML entity, as JS doesn't know a thing about entities: $("#project_product_parent_id option").each(function(i,option){ $option = $(option); $option.text($option.text().replace(/─/g,'\u00A0\u00A0\u00A0')) }); A: You can also press alt+space (on Mac) for a non-breaking space. I use it for a Drupal module because Drupal decodes HTML entities. A: I'm nearly certain you can accomplish this with CSS padding, as well. Then you won't be married to the space characters being hard-coded into all of your <option> tags. A: @Brian I'm nearly certain you can accomplish this with CSS padding, as well. Then you won't be married to the space characters being hard-coded into all of your tags. Good thinking - but unfortunately it doesn't work in (everyone's favourite browser...) IE7 :-( Here's some code that will work in Firefox (and I assume Op/Saf). <select> <option style="padding-left: 0px;">Blah</option> <option style="padding-left: 5px;">Blah</option> <option style="padding-left: 10px;">Blah</option> <option style="padding-left: 0px;">Blah</option> <option style="padding-left: 5px;">Blah</option> </select> A: Isn't &#160; the entity for space?
<select> <option>&#160;option 1</option> <option> option 2</option> </select> Works for me... EDIT: Just checked this out, there may be compatibility issues with this in older browsers, but all seems to work fine for me here. Just thought I should let you know as you may want to replace with &nbsp; A: Just use char 255 (type Alt+2+5+5 on your numeric keypad) with a monospace font like Courier New. A: Use \xA0 with String. This works perfectly when binding C# Model Data to a Dropdown... SectionsList.ForEach(p => { p.Text = "\xA0\xA0Section: " + p.Text; }); A: &nbsp; Can you try that? Or is it the same? A: I think you want &nbsp; or &#160; So a fixed version of your example could be... <select> <option>&nbsp;&nbsp;Sample</option> </select> or <select> <option>&#160;&#160;Sample</option> </select> A: Server.HtmlDecode("&nbsp;") is the only one that worked for me. Otherwise the characters are printed as text. I tried to add the padding as an attribute for the list item, however it didn't affect it. A: I was also having the same issue and I was required to fix this as soon as possible. Though I googled a lot, I was not able to find a quick solution. Instead I used my own solution; though I am not sure if it's an appropriate one, it works in my case and does exactly what I needed. So when you add a ListItem to a dropdown and you want to add spaces, use the following: press ALT and type 0160 on your numeric keypad, so it should be something like ALT+0160. It will add a space.
ListItem("ALT+0160 ALT+0160 TEST", "TESTVAL") A: In PHP you can use html_entity_decode: obj_dat.options[0] = new Option('Menu 1', 'Menu 1'); obj_dat.options[1] = new Option('<?php echo html_entity_decode('&nbsp;&nbsp;&nbsp;'); ?>Menu 1.1', 'Menu 1.1'); A: I tried several of these examples, but the only thing that worked was using JavaScript, much like dabobert's, but not jQuery, just plain old vanilla JavaScript and spaces: for(var x = 0; x < dd.options.length; x++) { item = dd.options[x]; //if a line that needs indenting item.text = ' ' + item.text; //indent } This is IE only. IE11 in fact. Ugh. A: 1. Return the items to a list. 2. Foreach loop over the list. 3. foreach (var item in q) { StringWriter myWriter = new StringWriter(); myWriter.Label = HttpUtility.HtmlDecode(item.Label.Replace(" ", "&nbsp;")); } This works for me!!!
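The answers above converge on one point: the character that reliably survives inside an option label is U+00A0 (no-break space), whether it arrives as &#160;, as char 255/ALT+0160, or via entity decoding. A small sketch of that idea in plain JavaScript (the function name and repeat count are illustrative, not from any answer above):

```javascript
// Indent an option label with no-break spaces (U+00A0).
// '\u00A0' is the character itself, so no HTML-entity decoding is
// involved and it is not collapsed the way ordinary spaces are.
function indentLabel(label, depth) {
    return '\u00A0\u00A0'.repeat(depth) + label;
}

// indentLabel('Sample', 1) yields a label preceded by two
// no-break spaces, matching the "&#160;&#160;Sample" examples above.
```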
{ "language": "en", "url": "https://stackoverflow.com/questions/25752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Java configuration framework I'm in the process of weeding out all hardcoded values in a Java library and was wondering what framework would be the best (in terms of zero- or close-to-zero configuration) to handle run-time configuration? I would prefer XML-based configuration files, but it's not essential. Please only reply if you have practical experience with a framework. I'm not looking for examples, but experience... A: Commons Configuration We're using this. Properties files alone are much easier to handle, but if you need to represent more complex data Commons Configuration can do this and read your properties files as well. If you aren't doing anything complicated I'd stick to properties files. A: If you want to do something advanced (and typesafe), you might want to take a look at this: http://www.ibm.com/developerworks/java/library/j-configint/index.html A: The Intelligent Parameter Utilization Tool (InPUT, page) allows you to externalize almost any (hard-coded) decision as a parameter into an XML-based configuration file. It was initiated in early 2012 as a response to the perceived deficiencies in existing configuration tools with respect to generality and separation of concerns. InPUT is probably more powerful than most use cases require, as it allows for the programming-language-independent formulation of experimental data (input - output), with features such as the definition of complex descriptor-to-class mappings, or randomized configuration spawning and validation based on predefined value ranges (for test and research, e.g. Monte Carlo simulations). You can define parameters with sub-parameters, relative restrictions on parameter values (numerical param a > param b), etc. It's still in beta, but rather stable; I use it for my research, for the configuration and documentation of experiments, and for teaching purposes.
Once it is available for other languages (a C++ adapter is in the pipe), other researchers/practitioners can reuse the descriptors, running their implementations of the same algorithms in C++ (using the code mapping concept). That way, experimental results can be validated and programs can be migrated more easily. The documentation is still a work in progress, but a couple of examples are available on the page. InPUT is open source software. For those interested, the Conceptual Research Paper. A: Apache Commons Configuration works great. It supports having the configuration stored in a wide range of formats on the backend including properties, XML, JNDI, and more. It is easy to use and to extend. To get the most flexibility out of it, use a factory to get the configuration and just use the Configuration interface after that. Two features of Commons Configuration that differentiate it from a straight Properties file are that it supports automatic conversion to common types (int, float, String arrays) and it supports property substitution: server.host=myHost server.url=http://${server.host}/somePath A: I tend to use java.util.Properties (or similar classes in other languages and frameworks) wrapped in an application-specific configuration class most of the time, but I am very interested in alternatives or variations on this. Especially since things can become a bit tricky if graphical configuration dialogs or multiple views on the configuration data are involved. Unfortunately I don't have any experience with specific libraries for Java (except with the ones I have written myself), but any pointers would be appreciated. Update OK. That wasn't entirely true; there is the Spring Java Configuration Project.
If you are working with Java and the data you are storing or retrieving from disk is modeled as a key-value pair (which it sounds like it is in your case), then I really can't imagine a better solution. I have used properties files for simple configuration of small packages in a bigger project, and as a more global configuration for a whole project, and I have never had problems with it. Of course this has the huge benefit of not requiring any 3rd-party libraries. A: Here are various options: * *java.util.Properties *java.util.prefs.Preferences (since Java 5) *Commons Configuration *jConfig *JFig *Carbon's Configuration Service You might want to read Comparison of Commons Configuration With JFig and JConfig and Configuring your Applications using JFig for some feedback from various users. Personally, I've used jConfig and it was a good experience. A: I wrote about this a couple of weeks ago and came to the conclusion that XML is one of the most widely used notations. Is it the best? I don't think so; I really like JSON, but the tooling is still not up to XML, so I guess we have to wait and see. A: You can try YamlBeans. This way you write whatever classes you want to hold your config data, then you can automatically write and read them to and from YAML. YAML is a human-readable data format. It has more expressive power than java.util.Properties. You can have lists, maps, anchors, typed data, etc. A: Please take a look at this URL: http://issues.apache.org/jira/browse/CONFIGURATION-394 The configuration framework we're looking for is something on top of Apache Commons Configuration that must support concurrency issues, JMX, and most stores (e.g. .properties files, .xml files, or the Preferences API). What the WebLogic team provides in its Administration Console is interesting: through it you can have transactional (atomic) updates on configurations, so that registered listeners are notified.
The Apache guys insist that this project is out of scope for Commons Configuration, maybe! I've attached a simple configuration framework; please take a look. A: I just posted a brief bit of code about using Spring's ClassPathResource as an alternative to IoC. ClassPathResource permits you to place property files anywhere on the classpath (e.g., all in one place, or as peers to the code they configure). My example just uses java.util.Properties, so you can use the plaintext "name=value" style or its XML format. A: Properties files are very simple; if you need something more functional, you could format some of your configuration files as Java classes. These can be placed in a different package/module and can be pre-compiled or loaded at runtime with a library like BeanShell. Note: In the simplest case (pre-compiled) you don't need any additional libraries. A: Regarding the suggestions to use java.util.Properties - starting in JDK 1.5, the Preferences API (java.util.prefs) appears to be the preferred alternative to using the Properties API. Reasons: increased scalability, back-end neutrality, etc. A: You could have a look at the newly announced tools4j-config, whose mission statement is to allow you to easily handle configuration at runtime.
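For reference, the java.util.Properties approach that several answers recommend is only a few lines. This sketch loads from a StringReader to stay self-contained; a real application would read a file or classpath resource instead (the key names here are made up):

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigExample {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        // In practice: config.load(new FileReader("app.properties"));
        config.load(new StringReader("server.host=myHost\nserver.port=8080\n"));

        String host = config.getProperty("server.host");
        // getProperty takes an optional default for missing keys.
        int port = Integer.parseInt(config.getProperty("server.port", "80"));
        System.out.println(host + ":" + port); // myHost:8080
    }
}
```

Note that plain Properties does not do the ${...} substitution shown in the Commons Configuration answer; that is one of the features you trade away for having no 3rd-party dependency.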
{ "language": "en", "url": "https://stackoverflow.com/questions/25765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Most efficient way to get data from the database to session What is the quickest way to get a large amount of data (think golf) and the most efficient (think performance) to get a large amount of data from a MySQL database to a session without having to continue doing what I already have: $sql = "SELECT * FROM users WHERE username='" . mysql_escape_string($_POST['username']) . "' AND password='" . mysql_escape_string(md5($_POST['password'])) . "'"; $result = mysql_query($sql, $link) or die("There was an error while trying to get your information.\n<!--\n" . mysql_error($link) . "\n-->"); if(mysql_num_rows($result) < 1) { $_SESSION['username'] = $_POST['username']; redirect('index.php?p=signup'); } $_SESSION['id'] = mysql_result($result, '0', 'id'); $_SESSION['fName'] = mysql_result($result, '0', 'fName'); $_SESSION['lName'] = mysql_result($result, '0', 'lName'); ... And before anyone asks yes I do really need to 'SELECT Edit: Yes, I am sanitizing the data, so that there can be no SQL injection, that is further up in the code. A: I came up with this and it appears to work. while($row = mysql_fetch_assoc($result)) { $_SESSION = array_merge_recursive($_SESSION, $row); } A: Most efficient: $get = mysql_query("SELECT * FROM table_name WHERE field_name=$something") or die(mysql_error()); $_SESSION['data'] = mysql_fetch_assoc($get); Done. This is now stored in an array. So say a field is username you just do: echo $_SESSION['data']['username']; Data is the name of the array - username is the array field.. which holds the value for that field. EDIT: fixed some syntax mistakes :P but you get the idea. A: OK, this doesn't answer your question, but doesn't your current code leave you open to SQL Injection? I could be wrong, never worked in PHP, just saw the use of strings in the SQL and alarm bells started ringing! Edit: I am not trying to tamper with your post, I was correcting a spelling error, please do not roll back. 
A: I am not sure what you mean by "large amounts of data", but it looks to me like you are only initializing data for one user? If so, I don't see any reason to optimize this unless you have hundreds of columns in your database with several megabytes of data in them. Or, to put it differently, why do you need to optimize this? Are you having performance problems? What you are doing now is the straight-forward approach, and I can't really see any reason to do it differently unless you have some specific problems with it. Wrapping the user data in a user object might help some on the program structure though. Validating your input is probably also a good idea. A: Try using json for example: $_SESSION['data'] = json_encode(mysql_fetch_array($result)); Edit Later you then json_decode the $_SESSION['data'] variable and you got an array with all the data you need. Clarification: You can use json_encode and json_decode if you want to reduce the number of lines of code you write. In the example in the question, a line of code was needed to copy each column in the database to the SESSION array. Instead of doing it with 50-75 lines of code, you could do it with 1 by json_encoding the entire database record into a string. This string can be stored in the SESSION variable. Later, when the user visits another page, the SESSION variable is there with the entire JSON string. If you then want to know the first name, you can use the following code: $fname = json_decode($_SESSION['data'])['fname']; This method won't be faster than a line by line copy, but it will save coding and it will be more resistant to changes in your database or code. BTW Does anyone else have trouble entering ] into markdown? I have to paste it in. A: @Unkwntech Looks like you are correct, but following a Google, which led here looks like you may want to change to mysql_real_escape_string() As for the edit, I corrected the spelling of efficient as well as removed the "what is the".. 
That's not really required, since the topic says it all. You can review the edit history (which nicely highlights the actual changes) by clicking the "edited a min ago" text at the bottom of your question. A: Try using json for example: $_SESSION['data'] = json_encode(mysql_fetch_array($result)); Is the implementation of that function faster than what he is already doing? Does anyone else have trouble entering ] into markdown? I have to paste it in Yes, it's bugged. A: @Anders - there are something like 50-75 columns. Again, unless this is actually causing performance problems in your application I would not bother with optimizing it. If, however, performance is a problem I would consider only getting some of the data initially and lazy-loading the other columns as they are needed. A: It's not so much that it's causing performance problems but that I would like the code to look a bit cleaner. Then wrap the user data in a class. Modifying $_SESSION directly looks somewhat dirty. If you want to keep the data in a dictionary—which you can still do even if you put it in a separate class. You could also implement a loop that iterates over all columns, gets their names and copies the data to a map with the same key names. That way your internal variables—named by key in the dictionary—and database column names would always be the same. (This also has the downside of changing the variable name when you change the column name in the database, but this is a quite common and well-accepted trade-off.)
anyway, you might as well store fairly non-sensitive information (like name) in the session if you plan on using that information throughout the person's visit.
{ "language": "en", "url": "https://stackoverflow.com/questions/25767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Output compile time stamp in Visual C++ executable? How can I insert compilation timestamp information into an executable I build with Visual C++ 2005? I want to be able to output something like this when I execute the program: This build XXXX was compiled at dd-mm-yy, hh:mm. where date and time reflect the time when the project was built. They should not change with each successive call of the program, unless it's recompiled. A: Well... for Visual C++, there's a built in symbol called __ImageBase. Specifically: EXTERN_C IMAGE_DOS_HEADER __ImageBase; You can inspect that at runtime to determine the timestamp in the PE header: const IMAGE_NT_HEADERS *nt_header= (const IMAGE_NT_HEADERS *)((char *)&__ImageBase + __ImageBase.e_lfanew); And use nt_header->FileHeader.TimeDateStamp to get the timestamp, which is seconds from 1/1/1970. A: __TIME__ and __DATE__ can work, however there are some complications. If you put these definitions in a .h file, and include the definitions from multiple .c/.cpp files, each file will have a different version of the date/time based on when it gets compiled. So if you're looking to use the date/time in two different places and they should always match, you're in trouble. If you're doing an incremental build, one of the files may be rebuilt while the other is not, which again results in time stamps that could be wildly different. A slightly better approach is to make GetBuildTimeStamp() prototypes in a .h file, and put the __TIME__ and __DATE__ macros in the implementation(.c/.cpp) file. This way you can use the time stamps in multiple places in your code and they will always match. However you need to ensure that the .c/.cpp file is rebuilt every time a build is performed. If you're doing clean builds then this solution may work for you. If you're doing incremental builds, then you need to ensure the build stamp is updated on every build. 
In Visual C++ you can do this with PreBuild steps - however in this case I would recommend that instead of using __DATE__ and __TIME__ in a compiled .c/.cpp file, you use a text-based file that is read at run-time during your program's execution. This makes it fast for your build script to update the timestamp (no compiling or linking required) and doesn't require your PreBuild step to understand your compiler flags or options. A: Though not your exact format, __DATE__ will be of the format Mmm dd yyyy, while __TIME__ will be of the format hh:mm:ss. You can create a string like this and use it in whatever print routine makes sense for you: const char *buildString = "This build XXXX was compiled at " __DATE__ ", " __TIME__ "."; (Note on another answer: __TIMESTAMP__ only spits out the modification date/time of the source file, not the build date/time.) A: __DATE__ and __TIME__ are predefined as part of the C99 standard, so they should be available to you. They run once with the preprocessor. A: I think the suggested solutions to use __DATE__, __TIME__ or __TIMESTAMP__ would be good enough. I do recommend getting hold of a touch program to include in a pre-build step in order to touch the file that holds the use of the preprocessor variable. Touching a file makes sure that its timestamp is newer than at the time it was last compiled. That way, the date/time in the compiled file is changed as well with each rebuild. A: Visual C++ also supports __TIMESTAMP__, which is almost exactly what you need. That being said, the tough part about build timestamps is keeping them up to date; that means compiling the file in which __TIMESTAMP__ is used on every rebuild. Not sure if there's a way to set this up in Visual C++ though.
{ "language": "en", "url": "https://stackoverflow.com/questions/25771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Delete all but the most recent X files in bash Is there a simple way, in a pretty standard UNIX environment with bash, to run a command to delete all but the most recent X files from a directory? To give a bit more of a concrete example, imagine some cron job writing out a file (say, a log file or a tar-ed up backup) to a directory every hour. I'd like a way to have another cron job running which would remove the oldest files in that directory until there are less than, say, 5. And just to be clear: if there's only one file present, it should never be deleted. A: (ls -t|head -n 5;ls)|sort|uniq -u|xargs rm This version supports names with spaces: (ls -t|head -n 5;ls)|sort|uniq -u|sed -e 's,.*,"&",g'|xargs rm A: Ignoring newlines is ignoring security and good coding. wnoise had the only good answer. Here is a variation on his that puts the filenames in an array $x while IFS= read -rd ''; do x+=("${REPLY#* }"); done < <(find . -maxdepth 1 -printf '%T@ %p\0' | sort -r -z -n ) A: For Linux (GNU tools), an efficient & robust way to keep the n newest files in the current directory while removing the rest: n=5 find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -z -nrt ' ' -k1,1 | sed -z -e "1,${n}d" -e 's/[^ ]* //' | xargs -0r rm -f For BSD, find doesn't have the -printf predicate, stat can't output NULL bytes, and sed + awk can't handle NULL-delimited records. Here's a solution that doesn't support newlines in paths but that safeguards against them by filtering them out: #!/bin/bash n=5 find . -maxdepth 1 -type f ! -path $'*\n*' -exec stat -f '%.9Fm %N' {} + | sort -nrt ' ' -k1,1 | awk -v n="$n" -F'^[^ ]* ' 'NR > n {printf "%s%c", $2, 0}' | xargs -0 rm -f note: I'm using bash because of the $'\n' notation. For sh you can define a variable containing a literal newline and use it instead.
Solution for UNIX & Linux (inspired from AIX/HP-UX/SunOS/BSD/Linux ls -b): Some platforms don't provide find -printf, nor stat, nor support NUL-delimited records with stat/sort/awk/sed/xargs. That's why using perl is probably the most portable way to tackle the problem, because it is available by default in almost every OS. I could have written the whole thing in perl but I didn't. I only use it for substituting stat and for encoding-decoding-escaping the filenames. The core logic is the same as the previous solutions and is implemented with POSIX tools. note: perl's default stat has a resolution of a second, but starting from perl-5.8.9 you can get sub-second resolution with the stat function of the module Time::HiRes (when both the OS and the filesystem support it). That's what I'm using here; if your perl doesn't provide it then you can remove the ‑MTime::HiRes=stat from the command line. n=5 find . '(' -name '.' -o -prune ')' -type f -exec \ perl -MTime::HiRes=stat -le ' foreach (@ARGV) { @st = stat($_); if ( @st > 0 ) { s/([\\\n])/sprintf( "\\%03o", ord($1) )/ge; print sprintf( "%.9f %s", $st[9], $_ ); } else { print STDERR "stat: $_: $!"; } } ' {} + | sort -nrt ' ' -k1,1 | sed -e "1,${n}d" -e 's/[^ ]* //' | perl -l -ne ' s/\\([0-7]{3})/chr(oct($1))/ge; s/(["\n])/"\\$1"/g; print "\"$_\""; ' | xargs -E '' sh -c '[ "$#" -gt 0 ] && rm -f "$@"' sh Explanations: * *For each file found, the first perl gets the modification time and outputs it along the encoded filename (each newline and backslash characters are replaced with the literals \012 and \134 respectively). *Now each time filename is guaranteed to be single-line, so POSIX sort and sed can safely work with this stream. *The second perl decodes the filenames and escapes them for POSIX xargs. *Lastly, xargs calls rm for deleting the files. The sh command is a trick that prevents xargs from running rm when there's no files to delete. 
A: Simpler variant of thelsdj's answer: ls -tr | head -n -5 | xargs --no-run-if-empty rm ls -tr displays all the files, oldest first (-t newest first, -r reverse). head -n -5 displays all but the 5 last lines (ie the 5 newest files). xargs rm calls rm for each selected file. A: I realize this is an old thread, but maybe someone will benefit from this. This command will find files in the current directory : for F in $(find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n' | sort -r -z -n | tail -n+5 | awk '{ print $2; }'); do rm $F; done This is a little more robust than some of the previous answers as it allows to limit your search domain to files matching expressions. First, find files matching whatever conditions you want. Print those files with the timestamps next to them. find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n' Next, sort them by the timestamps: sort -r -z -n Then, knock off the 4 most recent files from the list: tail -n+5 Grab the 2nd column (the filename, not the timestamp): awk '{ print $2; }' And then wrap that whole thing up into a for statement: for F in $(); do rm $F; done This may be a more verbose command, but I had much better luck being able to target conditional files and execute more complex commands against them. A: If the filenames don't have spaces, this will work: ls -C1 -t| awk 'NR>5'|xargs rm If the filenames do have spaces, something like ls -C1 -t | awk 'NR>5' | sed -e "s/^/rm '/" -e "s/$/'/" | sh Basic logic: * *get a listing of the files in time order, one column *get all but the first 5 (n=5 for this example) *first version: send those to rm *second version: gen a script that will remove them properly A: With zsh Assuming you don't care about present directories and you will not have more than 999 files (choose a bigger number if you want, or create a while loop). [ 6 -le `ls *(.)|wc -l` ] && rm *(.om[6,999]) In *(.om[6,999]), the . 
means files, the o means sort order up, the m means by date of modification (put a for access time or c for inode change), the [6,999] chooses a range of files, so it doesn't rm the first 5. A: Adaptation of @mklement0's excellent answer with some parameters and without needing to navigate to the folder containing the files to be deleted... TARGET_FOLDER="/my/folder/path" FILES_KEEP=5 ls -tp "$TARGET_FOLDER"**/* | grep -v '/$' | tail -n +$((FILES_KEEP+1)) | xargs -d '\n' -r rm -- [Ref(s).: https://stackoverflow.com/a/3572628/3223785 ] Thanks! A: find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -r -z -n | awk 'BEGIN { RS="\0"; ORS="\0"; FS="" } NR > 5 { sub("^[0-9]*(.[0-9]*)? ", ""); print }' | xargs -0 rm -f Requires GNU find for -printf, GNU sort for -z, GNU awk for "\0", and GNU xargs for -0, but handles files with embedded newlines or spaces. A: The problems with the existing answers: * *inability to handle filenames with embedded spaces or newlines. * *in the case of solutions that invoke rm directly on an unquoted command substitution (rm `...`), there's an added risk of unintended globbing. *inability to distinguish between files and directories (i.e., if directories happened to be among the 5 most recently modified filesystem items, you'd effectively retain fewer than 5 files, and applying rm to directories will fail). wnoise's answer addresses these issues, but the solution is GNU-specific (and quite complex). Here's a pragmatic, POSIX-compliant solution that comes with only one caveat: it cannot handle filenames with embedded newlines - but I don't consider that a real-world concern for most people.
For the record, here's the explanation for why it's generally not a good idea to parse ls output: http://mywiki.wooledge.org/ParsingLs ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {} Note: This command operates in the current directory; to target a directory explicitly, use a subshell ((...)) with cd: (cd /path/to && ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}) The same applies analogously to the commands below. The above is inefficient, because xargs has to invoke rm separately for each filename. However, your platform's specific xargs implementation may allow you to solve this problem: A solution that works with GNU xargs is to use -d '\n', which makes xargs consider each input line a separate argument, yet passes as many arguments as will fit on a command line at once: ls -tp | grep -v '/$' | tail -n +6 | xargs -d '\n' -r rm -- Note: Option -r (--no-run-if-empty) ensures that rm is not invoked if there's no input. A solution that works with both GNU xargs and BSD xargs (including on macOS) - though technically still not POSIX-compliant - is to use -0 to handle NUL-separated input, after first translating newlines to NUL (0x0) chars., which also passes (typically) all filenames at once: ls -tp | grep -v '/$' | tail -n +6 | tr '\n' '\0' | xargs -0 rm -- Explanation: * *ls -tp prints the names of filesystem items sorted by how recently they were modified , in descending order (most recently modified items first) (-t), with directories printed with a trailing / to mark them as such (-p). * *Note: It is the fact that ls -tp always outputs file / directory names only, not full paths, that necessitates the subshell approach mentioned above for targeting a directory other than the current one ((cd /path/to && ls -tp ...)). *grep -v '/$' then weeds out directories from the resulting listing, by omitting (-v) lines that have a trailing / (/$). 
* *Caveat: Since a symlink that points to a directory is technically not itself a directory, such symlinks will not be excluded. *tail -n +6 skips the first 5 entries in the listing, in effect returning all but the 5 most recently modified files, if any. Note that in order to exclude N files, N+1 must be passed to tail -n +. *xargs -I {} rm -- {} (and its variations) then invokes on rm on all these files; if there are no matches at all, xargs won't do anything. * *xargs -I {} rm -- {} defines placeholder {} that represents each input line as a whole, so rm is then invoked once for each input line, but with filenames with embedded spaces handled correctly. *-- in all cases ensures that any filenames that happen to start with - aren't mistaken for options by rm. A variation on the original problem, in case the matching files need to be processed individually or collected in a shell array: # One by one, in a shell loop (POSIX-compliant): ls -tp | grep -v '/$' | tail -n +6 | while IFS= read -r f; do echo "$f"; done # One by one, but using a Bash process substitution (<(...), # so that the variables inside the `while` loop remain in scope: while IFS= read -r f; do echo "$f"; done < <(ls -tp | grep -v '/$' | tail -n +6) # Collecting the matches in a Bash *array*: IFS=$'\n' read -d '' -ra files < <(ls -tp | grep -v '/$' | tail -n +6) printf '%s\n' "${files[@]}" # print array elements A: All these answers fail when there are directories in the current directory. Here's something that works: find . -maxdepth 1 -type f | xargs -x ls -t | awk 'NR>5' | xargs -L1 rm This: * *works when there are directories in the current directory *tries to remove each file even if the previous one couldn't be removed (due to permissions, etc.) *fails safe when the number of files in the current directory is excessive and xargs would normally screw you over (the -x) *doesn't cater for spaces in filenames (perhaps you're using the wrong OS?) 
A: ls -tQ | tail -n+4 | xargs rm List filenames by modification time, quoting each filename. Exclude first 3 (3 most recent). Remove remaining. EDIT after helpful comment from mklement0 (thanks!): corrected -n+3 argument, and note this will not work as expected if filenames contain newlines and/or the directory contains subdirectories. A: Remove all but 5 (or whatever number) of the most recent files in a directory. rm `ls -t | awk 'NR>5'` A: Found an interesting command in Sed One-Liners - Delete last 3 lines - and found it perfect for another way to skin the cat (okay, not really), but here's the idea: #!/bin/bash # in the sed command, change the 2 to the number of files you wish to retain cd /opt/depot ls -1 MyMintFiles*.zip > BigList sed -n -e :a -e '1,2!{P;N;D;};N;ba' BigList > DeList for i in `cat DeList` do echo "Deleted $i" rm -f $i #echo "File(s) gonzo " #read junk done exit 0 A: Removes all but the 10 latest (most recent) files ls -t1 | head -n $(echo $(ls -1 | wc -l) - 10 | bc) | xargs rm If there are fewer than 10 files, no file is removed and you will get the error: head: illegal line count -- 0 To count files with bash A: I needed an elegant solution for the busybox (router), all xargs or array solutions were useless to me - no such command available there. find and mtime is not the proper answer as we are talking about 10 items and not necessarily 10 days. Espo's answer was the shortest and cleanest and likely the most universal one. Errors with spaces and with no files to delete are both simply solved in the standard way: rm "$(ls -td *.tar | awk 'NR>7')" 2>&- Bit more educational version: We can do it all if we use awk differently. Normally, I use this method to pass (return) variables from the awk to the sh. As we read all the time that this cannot be done, I beg to differ: here is the method. Example for .tar files with no problem regarding the spaces in the filename. To test, replace "rm" with the "ls".
eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}') Explanation: ls -td *.tar lists all .tar files sorted by the time. To apply to all the files in the current folder, remove the "d *.tar" part awk 'NR>7... skips the first 7 lines print "rm \"" $0 "\"" constructs a line: rm "file name" eval executes it Since we are using rm, I would not use the above command in a script! Wiser usage is: (cd /FolderToDeleteWithin && eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}')) In this case, using the ls -t command will not do any harm even on such silly examples as: touch 'foo " bar' and touch 'hello * world'. Not that we ever create files with such names in real life! Sidenote. If we wanted to pass a variable to the sh this way, we would simply modify the print (simple form, no spaces tolerated): print "VarName="$1 to set the variable VarName to the value of $1. Multiple variables can be created in one go. This VarName becomes a normal sh variable and can be normally used in a script or shell afterwards. So, to create variables with awk and give them back to the shell: eval $(ls -td *.tar | awk 'NR>7 { print "VarName=\""$1"\"" }'); echo "$VarName" A: leaveCount=5 fileCount=$(ls -1 *.log | wc -l) tailCount=$((fileCount - leaveCount)) # avoid negative tail argument (( tailCount < 0 )) && tailCount=0 ls -t *.log | tail -n "$tailCount" | xargs rm -f A: I made this into a bash shell script. Usage: keep NUM DIR where NUM is the number of files to keep and DIR is the directory to scrub. #!/bin/bash # Keep last N files by date. # Usage: keep NUMBER DIRECTORY echo "" if [ $# -lt 2 ]; then echo "Usage: $0 NUMFILES DIR" echo "Keep last N newest files." exit 1 fi if [ ! -e "$2" ]; then echo "ERROR: directory '$2' does not exist" exit 1 fi if [ ! -d "$2" ]; then echo "ERROR: '$2' is not a directory" exit 1 fi pushd "$2" > /dev/null # note: tail -n +N starts at line N, so to keep $1 files we must pass $1 + 1 ls -tp | grep -v '/' | tail -n +$(($1 + 1)) | xargs -I {} rm -- {} popd > /dev/null echo "Done. Kept $1 most recent files in $2."
ls "$2" | wc -l A: A modified version of @Fabien's answer, in case you want to specify a path. Useful if you're running the script from elsewhere. ls -tr /path/foo/ | head -n -5 | xargs -I% --no-run-if-empty rm /path/foo/%
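Most pipelines above break on filenames containing newlines (and some on spaces). If a shell one-liner isn't a hard requirement, the same keep-the-N-newest logic can be sketched outside the shell, e.g. in Python, which side-steps the quoting problems entirely. The helper name `keep_newest` is made up for this sketch:

```python
import os

def keep_newest(directory, keep=5):
    """Delete all but the `keep` most recently modified regular files in `directory`."""
    # Regular files only, mirroring the `grep -v '/$'` step in the pipelines above
    # (directories and symlinks are skipped).
    entries = [e for e in os.scandir(directory) if e.is_file(follow_symlinks=False)]
    entries.sort(key=lambda e: e.stat().st_mtime, reverse=True)  # newest first
    removed = []
    for entry in entries[keep:]:  # everything past the first `keep` entries
        os.remove(entry.path)
        removed.append(entry.name)
    return removed
```

Because the filenames never pass through a shell, spaces, quotes, and even embedded newlines are handled safely.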
{ "language": "en", "url": "https://stackoverflow.com/questions/25785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "200" }
Q: Copy/duplicate database without using mysqldump Without local access to the server, is there any way to duplicate/clone a MySQL db (with content and without content) into another without using mysqldump? I am currently using MySQL 4.0. A: I can see you said you didn't want to use mysqldump, but I reached this page while looking for a similar solution and others might find it as well. With that in mind, here is a simple way to duplicate a database from the command line of a Windows server: * *Create the target database using MySQLAdmin or your preferred method. In this example, db2 is the target database, where the source database db1 will be copied. *Execute the following statement on a command line: mysqldump -h [server] -u [user] -p[password] db1 | mysql -h [server] -u [user] -p[password] db2 Note: There is NO space between -p and [password] A: If you are using Linux, you can use this bash script: (it perhaps needs some additional code cleaning but it works ... and it's much faster than mysqldump|mysql) #!/bin/bash DBUSER=user DBPASSWORD=pwd DBSNAME=sourceDb DBNAME=destinationDb DBSERVER=db.example.com fCreateTable="" fInsertData="" echo "Copying database ... (may take a while ...)" DBCONN="-h ${DBSERVER} -u ${DBUSER} --password=${DBPASSWORD}" echo "DROP DATABASE IF EXISTS ${DBNAME}" | mysql ${DBCONN} echo "CREATE DATABASE ${DBNAME}" | mysql ${DBCONN} for TABLE in `echo "SHOW TABLES" | mysql $DBCONN $DBSNAME | tail -n +2`; do createTable=`echo "SHOW CREATE TABLE ${TABLE}"|mysql -B -r $DBCONN $DBSNAME|tail -n +2|cut -f 2-` fCreateTable="${fCreateTable} ; ${createTable}" insertData="INSERT INTO ${DBNAME}.${TABLE} SELECT * FROM ${DBSNAME}.${TABLE}" fInsertData="${fInsertData} ; ${insertData}" done; echo "$fCreateTable ; $fInsertData" | mysql $DBCONN $DBNAME A: Note there is a mysqldbcopy command as part of the add-on MySQL Utilities:
https://dev.mysql.com/doc/mysql-utilities/1.5/en/utils-task-clone-db.html A: The best way to clone database tables without mysqldump: * *Create a new database. *Create clone-queries with this query: SET @NewSchema = 'your_new_db'; SET @OldSchema = 'your_exists_db'; SELECT CONCAT('CREATE TABLE ',@NewSchema,'.',table_name, ' LIKE ', TABLE_SCHEMA ,'.',table_name,';INSERT INTO ',@NewSchema,'.',table_name,' SELECT * FROM ', TABLE_SCHEMA ,'.',table_name,';') FROM information_schema.TABLES where TABLE_SCHEMA = @OldSchema AND TABLE_TYPE != 'VIEW'; *Run that output! But note that the script above only clones tables quickly - not views, triggers, or user functions: you can quickly get the structure with mysqldump --no-data --triggers -uroot -ppassword, and then use the generated INSERT statements to clone only the data. Why does this matter? Because loading a mysqldump is painfully slow if the DB is over 2 GB. And you can't clone InnoDB tables just by copying DB files (like a snapshot backup). A: All of the prior solutions get at the point a little, however, they just don't copy everything over. I created a PHP function (albeit somewhat lengthy) that copies everything including tables, foreign keys, data, views, procedures, functions, triggers, and events. Here is the code: /* This function takes the database connection, an existing database, and the new database and duplicates everything in the new database.
*/ function copyDatabase($c, $oldDB, $newDB) { // creates the schema if it does not exist $schema = "CREATE SCHEMA IF NOT EXISTS {$newDB};"; mysqli_query($c, $schema); // selects the new schema mysqli_select_db($c, $newDB); // gets all tables in the old schema $tables = "SELECT table_name FROM information_schema.tables WHERE table_schema = '{$oldDB}' AND table_type = 'BASE TABLE'"; $results = mysqli_query($c, $tables); // checks if any tables were returned and recreates them in the new schema, adds the foreign keys, and inserts the associated data if (mysqli_num_rows($results) > 0) { // recreates all tables first while ($row = mysqli_fetch_array($results)) { $table = "CREATE TABLE {$newDB}.{$row[0]} LIKE {$oldDB}.{$row[0]}"; mysqli_query($c, $table); } // resets the results to loop through again mysqli_data_seek($results, 0); // loops through each table to add foreign key and insert data while ($row = mysqli_fetch_array($results)) { // inserts the data into each table $data = "INSERT IGNORE INTO {$newDB}.{$row[0]} SELECT * FROM {$oldDB}.{$row[0]}"; mysqli_query($c, $data); // gets all foreign keys for a particular table in the old schema $fks = "SELECT constraint_name, column_name, table_name, referenced_table_name, referenced_column_name FROM information_schema.key_column_usage WHERE referenced_table_name IS NOT NULL AND table_schema = '{$oldDB}' AND table_name = '{$row[0]}'"; $fkResults = mysqli_query($c, $fks); // checks if any foreign keys were returned and recreates them in the new schema // Note: ON UPDATE and ON DELETE are not pulled from the original so you would have to change this to your liking if (mysqli_num_rows($fkResults) > 0) { while ($fkRow = mysqli_fetch_array($fkResults)) { // $fkRow[4] is referenced_column_name (column 5 of the SELECT above) $fkQuery = "ALTER TABLE {$newDB}.{$row[0]} ADD CONSTRAINT {$fkRow[0]} FOREIGN KEY ({$fkRow[1]}) REFERENCES {$newDB}.{$fkRow[3]}({$fkRow[4]}) ON UPDATE CASCADE ON DELETE CASCADE;"; mysqli_query($c, $fkQuery); } } } } // gets all views in the old schema $views = "SHOW FULL
TABLES IN {$oldDB} WHERE table_type LIKE 'VIEW'"; $results = mysqli_query($c, $views); // checks if any views were returned and recreates them in the new schema if (mysqli_num_rows($results) > 0) { while ($row = mysqli_fetch_array($results)) { $view = "SHOW CREATE VIEW {$oldDB}.{$row[0]}"; $viewResults = mysqli_query($c, $view); $viewRow = mysqli_fetch_array($viewResults); mysqli_query($c, preg_replace("/CREATE(.*?)VIEW/", "CREATE VIEW", str_replace($oldDB, $newDB, $viewRow[1]))); } } // gets all triggers in the old schema $triggers = "SELECT trigger_name, action_timing, event_manipulation, event_object_table, created FROM information_schema.triggers WHERE trigger_schema = '{$oldDB}'"; $results = mysqli_query($c, $triggers); // checks if any triggers were returned and recreates them in the new schema if (mysqli_num_rows($results) > 0) { while ($row = mysqli_fetch_array($results)) { $trigger = "SHOW CREATE TRIGGER {$oldDB}.{$row[0]}"; $triggerResults = mysqli_query($c, $trigger); $triggerRow = mysqli_fetch_array($triggerResults); mysqli_query($c, str_replace($oldDB, $newDB, $triggerRow[2])); } } // gets all procedures in the old schema $procedures = "SHOW PROCEDURE STATUS WHERE db = '{$oldDB}'"; $results = mysqli_query($c, $procedures); // checks if any procedures were returned and recreates them in the new schema if (mysqli_num_rows($results) > 0) { while ($row = mysqli_fetch_array($results)) { $procedure = "SHOW CREATE PROCEDURE {$oldDB}.{$row[1]}"; $procedureResults = mysqli_query($c, $procedure); $procedureRow = mysqli_fetch_array($procedureResults); mysqli_query($c, str_replace($oldDB, $newDB, $procedureRow[2])); } } // gets all functions in the old schema $functions = "SHOW FUNCTION STATUS WHERE db = '{$oldDB}'"; $results = mysqli_query($c, $functions); // checks if any functions were returned and recreates them in the new schema if (mysqli_num_rows($results) > 0) { while ($row = mysqli_fetch_array($results)) { $function = "SHOW CREATE FUNCTION 
{$oldDB}.{$row[1]}"; $functionResults = mysqli_query($c, $function); $functionRow = mysqli_fetch_array($functionResults); mysqli_query($c, str_replace($oldDB, $newDB, $functionRow[2])); } } // selects the old schema (a must for copying events) mysqli_select_db($c, $oldDB); // gets all events in the old schema $query = "SHOW EVENTS WHERE db = '{$oldDB}';"; $results = mysqli_query($c, $query); // selects the new schema again mysqli_select_db($c, $newDB); // checks if any events were returned and recreates them in the new schema if (mysqli_num_rows($results) > 0) { while ($row = mysqli_fetch_array($results)) { $event = "SHOW CREATE EVENT {$oldDB}.{$row[1]}"; $eventResults = mysqli_query($c, $event); $eventRow = mysqli_fetch_array($eventResults); mysqli_query($c, str_replace($oldDB, $newDB, $eventRow[3])); } } } A: Actually i wanted to achieve exactly that in PHP but none of the answers here were very helpful so here's my – pretty straightforward – solution using MySQLi: // Database variables $DB_HOST = 'localhost'; $DB_USER = 'root'; $DB_PASS = '1234'; $DB_SRC = 'existing_db'; $DB_DST = 'newly_created_db'; // MYSQL Connect $mysqli = new mysqli( $DB_HOST, $DB_USER, $DB_PASS ) or die( $mysqli->error ); // Create destination database $mysqli->query( "CREATE DATABASE $DB_DST" ) or die( $mysqli->error ); // Iterate through tables of source database $tables = $mysqli->query( "SHOW TABLES FROM $DB_SRC" ) or die( $mysqli->error ); while( $table = $tables->fetch_array() ): $TABLE = $table[0]; // Copy table and contents in destination database $mysqli->query( "CREATE TABLE $DB_DST.$TABLE LIKE $DB_SRC.$TABLE" ) or die( $mysqli->error ); $mysqli->query( "INSERT INTO $DB_DST.$TABLE SELECT * FROM $DB_SRC.$TABLE" ) or die( $mysqli->error ); endwhile; A: You can duplicate a table without data by running: CREATE TABLE x LIKE y; (See the MySQL CREATE TABLE Docs) You could write a script that takes the output from SHOW TABLES from one database and copies the schema to another. 
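Such a SHOW TABLES-driven script boils down to generating one CREATE TABLE ... LIKE plus one INSERT ... SELECT per table. A minimal sketch of just the statement generation, assuming you have already fetched the table list (the helper name `clone_statements` is made up; no connection handling is shown):

```python
def clone_statements(src_db, dst_db, tables):
    """Build the SQL pairs that copy each table's schema and data.

    `tables` is the list of names a `SHOW TABLES` query would return.
    Identifiers are wrapped in backticks; this sketch assumes the names
    themselves contain no backticks.
    """
    statements = []
    for table in tables:
        statements.append(
            f"CREATE TABLE `{dst_db}`.`{table}` LIKE `{src_db}`.`{table}`;")
        statements.append(
            f"INSERT INTO `{dst_db}`.`{table}` SELECT * FROM `{src_db}`.`{table}`;")
    return statements
```

Feeding these pairs to the server, table by table, is exactly what the SQL CONCAT trick and the PHP function above do.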
You should be able to reference schema+table names like: CREATE TABLE x LIKE other_db.y; As far as the data goes, you can also do it in MySQL, but it's not necessarily fast. After you've created the references, you can run the following to copy the data: INSERT INTO x SELECT * FROM other_db.y; If you're using MyISAM, you're better off copying the table files; it'll be much faster. You should be able to do the same if you're using InnoDB with per-table tablespaces. If you do end up doing an INSERT INTO SELECT, be sure to temporarily turn off indexes with ALTER TABLE x DISABLE KEYS! EDIT Maatkit also has some scripts that may be helpful for syncing data. It may not be faster, but you could probably run their syncing scripts on live data without much locking. A: In PHP: function cloneDatabase($dbName, $newDbName){ global $admin; $db_check = @mysql_select_db ( $dbName ); $getTables = $admin->query("SHOW TABLES"); $tables = array(); while($row = mysql_fetch_row($getTables)){ $tables[] = $row[0]; } $createTable = mysql_query("CREATE DATABASE `$newDbName` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;") or die(mysql_error()); foreach($tables as $cTable){ $db_check = @mysql_select_db ( $newDbName ); $create = $admin->query("CREATE TABLE $cTable LIKE ".$dbName.".".$cTable); if(!$create) { $error = true; } $insert = $admin->query("INSERT INTO $cTable SELECT * FROM ".$dbName.".".$cTable); } return !isset($error); } // usage $clone = cloneDatabase('dbname','newdbname'); // first: toCopy, second: new database A: I don't really know what you mean by "local access". But for this solution you need to be able to access the server over ssh to copy the files where the database is stored. I cannot use mysqldump, because my database is big (7 GB; mysqldump fails). If the versions of the two MySQL databases are too different it might not work; you can check your MySQL version using mysql -V.
1) Copy the data from your remote server to your local computer (vps is the alias to your remote server, can be replaced by [email protected]) ssh vps /etc/init.d/mysql stop scp -rC vps:/var/lib/mysql/ /tmp/var_lib_mysql ssh vps /etc/init.d/mysql start 2) Import the data copied on your local computer /etc/init.d/mysql stop sudo chown -R mysql:mysql /tmp/var_lib_mysql sudo nano /etc/mysql/my.cnf -> [mysqld] -> datadir=/tmp/var_lib_mysql /etc/init.d/mysql start If you have a different version, you may need to run /etc/init.d/mysql stop mysql_upgrade -u root -pPASSWORD --force # that step took almost 1 hour /etc/init.d/mysql start A: A SQL query that generates the SQL commands needed to duplicate a database from one database to another. For each table there is a CREATE TABLE statement and an INSERT statement. It assumes both databases are on the same server: select @fromdb:="crm"; select @todb:="crmen"; SET group_concat_max_len=100000000; SELECT GROUP_CONCAT( concat("CREATE TABLE `",@todb,"`.`",table_name,"` LIKE `",@fromdb,"`.`",table_name,"`;\n", "INSERT INTO `",@todb,"`.`",table_name,"` SELECT * FROM `",@fromdb,"`.`",table_name,"`;") SEPARATOR '\n\n') as sqlstatement FROM information_schema.tables where table_schema=@fromdb and TABLE_TYPE='BASE TABLE'; A: Mysqldump isn't a bad solution. The simplest way to duplicate a database: mysqldump -uusername -ppass dbname1 | mysql -uusername -ppass dbname2 You can also change the storage engine this way: mysqldump -uusername -ppass dbname1 | sed 's/InnoDB/RocksDB/' | mysql -uusername -ppass dbname2
{ "language": "en", "url": "https://stackoverflow.com/questions/25794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "439" }
Q: How do I intercept a method call in C#? For a given class I would like to have tracing functionality i.e. I would like to log every method call (method signature and actual parameter values) and every method exit (just the method signature). How do I accomplish this assuming that: * *I don't want to use any 3rd party AOP libraries for C#, *I don't want to add duplicate code to all the methods that I want to trace, *I don't want to change the public API of the class - users of the class should be able to call all the methods in exactly the same way. To make the question more concrete let's assume there are 3 classes: public class Caller { public static void Call() { Traced traced = new Traced(); traced.Method1(); traced.Method2(); } } public class Traced { public void Method1(String name, Int32 value) { } public void Method2(Object object) { } } public class Logger { public static void LogStart(MethodInfo method, Object[] parameterValues); public static void LogEnd(MethodInfo method); } How do I invoke Logger.LogStart and Logger.LogEnd for every call to Method1 and Method2 without modifying the Caller.Call method and without adding the calls explicitly to Traced.Method1 and Traced.Method2? Edit: What would be the solution if I'm allowed to slightly change the Call method? A: C# is not an AOP oriented language. It has some AOP features and you can emulate some others but making AOP with C# is painful. I looked up for ways to do exactly what you wanted to do and I found no easy way to do it. As I understand it, this is what you want to do: [Log()] public void Method1(String name, Int32 value); and in order to do that you have two main options * *Inherit your class from MarshalByRefObject or ContextBoundObject and define an attribute which inherits from IMessageSink. This article has a good example. 
You have to consider nonetheless that with a MarshalByRefObject the performance will go down like hell, and I mean it, I'm talking about a 10x performance loss so think carefully before trying that. *The other option is to inject code directly. At runtime, meaning you'll have to use reflection to "read" every class, get its attributes and inject the appropriate call (and for that matter I think you couldn't use the Reflection.Emit method as I think Reflection.Emit wouldn't allow you to insert new code inside an already existing method). At design time this will mean creating an extension to the CLR compiler which I honestly have no idea how it's done. The final option is using an IoC framework. Maybe it's not the perfect solution as most IoC frameworks work by defining entry points which allow methods to be hooked but, depending on what you want to achieve, that might be a fair approximation. A: If you write a class - call it Tracing - that implements the IDisposable interface, you could wrap all method bodies in a using( Tracing tracing = new Tracing() ){ ... method body ...} In the Tracing class you could then handle the logic of the traces in the constructor and Dispose method, respectively, to keep track of the entering and exiting of the methods. Such that: public class Traced { public void Method1(String name, Int32 value) { using(Tracing tracer = new Tracing()) { [... method body ...] } } public void Method2(Object object) { using(Tracing tracer = new Tracing()) { [... method body ...] } } } A: If you want to trace your methods without limitation (no code adaptation, no AOP framework, no duplicate code), let me tell you, you need some magic... Seriously, I resolved it by implementing an AOP framework working at runtime. You can find it here: NConcern .NET AOP Framework I decided to create this AOP framework to respond to this kind of need. It is a simple, very lightweight library. You can see a logger example on the home page.
If you don't want to use a 3rd party assembly, you can browse the code source (open source) and copy both files Aspect.Directory.cs and Aspect.Directory.Entry.cs and adapt them as you wish. These classes allow you to replace your methods at runtime. I would just ask you to respect the license. I hope you will find what you need, or that this convinces you to finally use an AOP framework. A: Take a look at this - pretty heavy stuff.. http://msdn.microsoft.com/en-us/magazine/cc164165.aspx Essential .NET - Don Box had a chapter on what you need, called Interception. I scraped some of it here (sorry about the font colors - I had a dark theme back then...) http://madcoderspeak.blogspot.com/2005/09/essential-interception-using-contexts.html A: I have found a different way which may be easier... Declare a method InvokeMethod [WebMethod] public object InvokeMethod(string methodName, Dictionary<string, object> methodArguments) { try { string lowerMethodName = '_' + methodName.ToLowerInvariant(); List<object> tempParams = new List<object>(); foreach (MethodInfo methodInfo in serviceMethods.Where(methodInfo => methodInfo.Name.ToLowerInvariant() == lowerMethodName)) { ParameterInfo[] parameters = methodInfo.GetParameters(); if (parameters.Length != methodArguments.Count()) continue; else foreach (ParameterInfo parameter in parameters) { object argument = null; if (methodArguments.TryGetValue(parameter.Name, out argument)) { if (parameter.ParameterType.IsValueType) { System.ComponentModel.TypeConverter tc = System.ComponentModel.TypeDescriptor.GetConverter(parameter.ParameterType); argument = tc.ConvertFrom(argument); } tempParams.Insert(parameter.Position, argument); } else goto ContinueLoop; } foreach (object attribute in methodInfo.GetCustomAttributes(true)) { if (attribute is YourAttributeClass) { YourAttributeClass attrib = attribute as YourAttributeClass; attrib.YourMethod(); // mine throws an exception } } return methodInfo.Invoke(this, tempParams.ToArray()); ContinueLoop:
continue; } return null; } catch { throw; } } I then define my methods like so: [WebMethod] public void BroadcastMessage(string Message) { //MessageBus.GetInstance().SendAll("<span class='system'>Web Service Broadcast: <b>" + Message + "</b></span>"); //return; InvokeMethod("BroadcastMessage", new Dictionary<string, object>() { {"Message", Message} }); } [RequiresPermission("editUser")] void _BroadcastMessage(string Message) { MessageBus.GetInstance().SendAll("<span class='system'>Web Service Broadcast: <b>" + Message + "</b></span>"); return; } Now I can have the check at run time without the dependency injection... No gotchas in sight :) Hopefully you will agree that this is lighter weight than an AOP framework or deriving from MarshalByRefObject or using remoting or proxy classes. A: The simplest way to achieve that is probably to use PostSharp. It injects code inside your methods based on the attributes that you apply to it. It allows you to do exactly what you want. Another option is to use the profiling API to inject code inside the method, but that is really hardcore. A: First you have to modify your class to implement an interface (rather than implementing the MarshalByRefObject). interface ITraced { void Method1(); void Method2(); } class Traced: ITraced { .... } Next you need a generic wrapper object based on RealProxy to decorate any interface to allow intercepting any call to the decorated object.
class MethodLogInterceptor: RealProxy { private readonly object _decorated; public MethodLogInterceptor(Type interfaceType, object decorated) : base(interfaceType) { _decorated = decorated; } public override IMessage Invoke(IMessage msg) { var methodCall = msg as IMethodCallMessage; var methodInfo = methodCall.MethodBase; Console.WriteLine("Precall " + methodInfo.Name); var result = methodInfo.Invoke(_decorated, methodCall.InArgs); Console.WriteLine("Postcall " + methodInfo.Name); return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall); } } Now we are ready to intercept calls to Method1 and Method2 of ITraced public class Caller { public static void Call() { ITraced traced = (ITraced)new MethodLogInterceptor(typeof(ITraced), new Traced()).GetTransparentProxy(); traced.Method1(); traced.Method2(); } } A: You can use the open-source framework CInject on CodePlex. You can write minimal code to create an Injector and get it to intercept any code quickly with CInject. Plus, since this is open source you can extend this as well. Or you can follow the steps mentioned in this article on Intercepting Method Calls using IL and create your own interceptor using Reflection.Emit classes in C#. A: You could achieve it with the Interception feature of a DI container such as Castle Windsor. Indeed, it is possible to configure the container in such a way that every class that has a method decorated by a specific attribute is intercepted. Regarding point #3, OP asked for a solution without an AOP framework. I assumed in the following answer that what should be avoided were Aspect, JointPoint, PointCut, etc. According to the Interception documentation from Castle Windsor, none of those are required to accomplish what is asked.
Configure generic registration of an Interceptor, based on the presence of an attribute: public class RequireInterception : IContributeComponentModelConstruction { public void ProcessModel(IKernel kernel, ComponentModel model) { if (HasAMethodDecoratedByLoggingAttribute(model.Implementation)) { model.Interceptors.Add(new InterceptorReference(typeof(ConsoleLoggingInterceptor))); model.Interceptors.Add(new InterceptorReference(typeof(NLogInterceptor))); } } private bool HasAMethodDecoratedByLoggingAttribute(Type implementation) { foreach (var memberInfo in implementation.GetMembers()) { var attribute = memberInfo.GetCustomAttributes(typeof(LogAttribute)).FirstOrDefault() as LogAttribute; if (attribute != null) { return true; } } return false; } } Add the created IContributeComponentModelConstruction to container container.Kernel.ComponentModelBuilder.AddContributor(new RequireInterception()); And you can do whatever you want in the interceptor itself public class ConsoleLoggingInterceptor : IInterceptor { public void Intercept(IInvocation invocation) { Console.Writeline("Log before executing"); invocation.Proceed(); Console.Writeline("Log after executing"); } } Add the logging attribute to your method to log public class Traced { [Log] public void Method1(String name, Int32 value) { } [Log] public void Method2(Object object) { } } Note that some handling of the attribute will be required if only some method of a class needs to be intercepted. By default, all public methods will be intercepted. A: I don't know a solution but my approach would be as follows. Decorate the class (or its methods) with a custom attribute. Somewhere else in the program, let an initialization function reflect all types, read the methods decorated with the attributes and inject some IL code into the method. It might actually be more practical to replace the method by a stub that calls LogStart, the actual method and then LogEnd. 
Additionally, I don't know if you can change methods using reflection so it might be more practical to replace the whole type. A: You could potentially use the GOF Decorator Pattern, and 'decorate' all classes that need tracing. It's probably only really practical with an IOC container (but as pointed out earlier you may want to consider method interception if you're going to go down the IOC path). A: you need to bug Ayende for an answer on how he did it: http://ayende.com/Blog/archive/2009/11/19/can-you-hack-this-out.aspx A: AOP is a must for implementing clean code; however, if you just want to surround a block in C#, generic methods are relatively easier to use (with IntelliSense and strongly typed code). Certainly, they are NOT an alternative to AOP. Although PostSharp has a few buggy issues (I do not feel confident using it in production), it is good stuff. A generic wrapper class: public class Wrapper { public static Exception TryCatch(Action actionToWrap, Action<Exception> exceptionHandler = null) { Exception retval = null; try { actionToWrap(); } catch (Exception exception) { retval = exception; if (exceptionHandler != null) { exceptionHandler(retval); } } return retval; } public static Exception LogOnError(Action actionToWrap, string errorMessage = "", Action<Exception> afterExceptionHandled = null) { return Wrapper.TryCatch(actionToWrap, (e) => { if (afterExceptionHandled != null) { afterExceptionHandled(e); } }); } } Usage could be like this (with IntelliSense, of course): var exception = Wrapper.LogOnError(() => { MessageBox.Show("test"); throw new Exception("test"); }, "Hata"); A: Maybe it's too late for this answer, but here it goes. What you are looking to achieve is built into the MediatR library. This is my RequestLoggerBehaviour which intercepts all calls to my business layer.
namespace SmartWay.Application.Behaviours { public class RequestLoggerBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> { private readonly ILogger _logger; private readonly IAppSession _appSession; private readonly ICreateLogGrain _createLogGrain; public RequestLoggerBehaviour(ILogger<TRequest> logger, IAppSession appSession, IClusterClient clusterClient) { _logger = logger; _appSession = appSession; _createLogGrain = clusterClient.GetGrain<ICreateLogGrain>(Guid.NewGuid()); } public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next) { var name = typeof(TRequest).Name; _logger.LogInformation($"SmartWay request started: ClientId: {_appSession.ClientId} UserId: {_appSession.UserId} Operation: {name} Request: {request}"); var response = await next(); _logger.LogInformation($"SmartWay request ended: ClientId: {_appSession.ClientId} UserId: {_appSession.UserId} Operation: {name} Request: {request}"); return response; } } } You can also create performance behaviours to trace methods that take too long to execute for example. Having clean architecture (MediatR) on your business layer will allow you to keep your code clean while you enforce SOLID principles. You can see how it works here: https://youtu.be/5OtUm1BLmG0?t=1 A: The best you can do before C# 6 with 'nameof' released is to use slow StackTrace and linq Expressions. E.g. 
For such a method public void MyMethod(int age, string name) { log.DebugTrace(() => age, () => name); //do your stuff } a line like this may be produced in your log file: Method 'MyMethod' parameters age: 20 name: Mike Here is the implementation: //TODO: replace with 'nameof' in C# 6 public static void DebugTrace(this ILog log, params Expression<Func<object>>[] args) { #if DEBUG var method = (new StackTrace()).GetFrame(1).GetMethod(); var parameters = new List<string>(); foreach(var arg in args) { MemberExpression memberExpression = null; if (arg.Body is MemberExpression) memberExpression = (MemberExpression)arg.Body; if (arg.Body is UnaryExpression && ((UnaryExpression)arg.Body).Operand is MemberExpression) memberExpression = (MemberExpression)((UnaryExpression)arg.Body).Operand; parameters.Add(memberExpression == null ? "NA" : memberExpression.Member.Name + ": " + arg.Compile().DynamicInvoke().ToString()); } log.Debug(string.Format("Method '{0}' parameters {1}", method.Name, string.Join(" ", parameters))); #endif } A: * *Write your own AOP library. *Use reflection to generate a logging proxy over your instances (not sure if you can do it without changing some part of your existing code). *Rewrite the assembly and inject your logging code (basically the same as 1). *Host the CLR and add logging at this level (I think this is the hardest solution to implement, not sure if you have the required hooks in the CLR though).
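Most of the answers above share one shape: wrap the real object in a proxy that logs around each forwarded call (RealProxy, Castle interceptors, the MediatR pipeline behaviour). As a language-neutral illustration only — not a C# solution — here is that shape in a few lines of Python; the class names `Traced` and `LoggingProxy` are made up for this sketch:

```python
class Traced:
    """Stand-in for the Traced class from the question."""
    def method1(self, name, value):
        return f"{name}={value}"

class LoggingProxy:
    """Wraps any object and logs entry/exit of every method call."""
    def __init__(self, target, log):
        self._target = target
        self._log = log  # a plain list here; a real logger in practice

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        def wrapper(*args, **kwargs):
            self._log.append(f"enter {name} args={args}")  # LogStart analogue
            try:
                return attr(*args, **kwargs)
            finally:
                self._log.append(f"exit {name}")           # LogEnd analogue
        return wrapper
```

The caller's code is unchanged — it calls `proxy.method1(...)` exactly as before, which is the same contract the transparent-proxy and interceptor answers provide in C#.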
{ "language": "en", "url": "https://stackoverflow.com/questions/25803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "158" }
Q: Python super class reflection If I have Python code

class A():
    pass

class B():
    pass

class C(A, B):
    pass

and I have class C, is there a way to iterate through its superclasses (A and B)? Something like pseudocode:

>>> magicGetSuperClasses(C)
(<type 'A'>, <type 'B'>)

One solution seems to be the inspect module and the getclasstree function:

def magicGetSuperClasses(cls):
    return [o[0] for o in inspect.getclasstree([cls]) if type(o[0]) == type]

but is this a "Pythonic" way to achieve the goal?

A: The inspect module was a good start; use the getmro function:

Return a tuple of class cls's base classes, including cls, in method resolution order. No class appears more than once in this tuple. ...

>>> class A: pass
>>> class B: pass
>>> class C(A, B): pass
>>> import inspect
>>> inspect.getmro(C)[1:]
(<class __main__.A at 0x8c59f2c>, <class __main__.B at 0x8c59f5c>)

The first element of the returned tuple is C; you can just disregard it.

A: C.__bases__ is a tuple of the superclasses, so you could implement your hypothetical function like so:

def magicGetSuperClasses(cls):
    return cls.__bases__

But I imagine it would be easier to just reference cls.__bases__ directly in most cases.

A: If you need to know the order in which super() would call the classes, you can use C.__mro__ and therefore don't need inspect.

A: @John: Your snippet doesn't work -- you are returning the class of the base classes (which are also known as metaclasses). You really just want cls.__bases__:

class A: pass
class B: pass
class C(A, B): pass

c = C() # Instance

assert C.__bases__ == (A, B) # Works
assert c.__class__.__bases__ == (A, B) # Works

def magicGetSuperClasses(clz):
    return tuple([base.__class__ for base in clz.__bases__])

assert magicGetSuperClasses(C) == (A, B) # Fails

Also, if you're using Python 2.4+ you can use generator expressions instead of creating a list (via []), then turning it into a tuple (via tuple).
For example:

def get_base_metaclasses(cls):
    """Returns the metaclass of all the base classes of cls."""
    return tuple(base.__class__ for base in cls.__bases__)

That's a somewhat confusing example, but genexps are generally easy and cool. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/25807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Multiple YACC grammars in one program How do I compile, link and call different YACC grammars using yyparse() in one program?

A: Use the -p option for each separate yacc grammar generation:

-p prefix
    Use prefix instead of yy as the prefix for all external names produced by yacc. For X/Open compliance, when the environment variable _XPG is set, then the -p option will work as described in the previous sentence. If the environment variable _XPG is not set, then the -p option will work as described below in the -P option.
{ "language": "en", "url": "https://stackoverflow.com/questions/25823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to open VS 2008 solution in VS 2005? I have seen Solutions created in Visual Studio 2008 cannot be opened in Visual Studio 2005 and tried workaround 1. Yet to try workaround 2. But as that link was a bit old, and out of desperation, I'm asking here: is there any converter available?

I don't have VS 2008 yet and I wanted to open an open-source solution which was done in VS 2008. I guess I'll have to fiddle around or wait till VS 2008 is shipped.

A: Here's a visual studio 2008 to 2005 downgrade tool

And another one.

I haven't tried either of these, so please report back if they are successful for you ;-)

A: I have a project that I work on in both VS 2005 and VS 2008. The trick is just to have two different solution files, and to make sure they stay in sync. Remember that projects keep track of their files, so the main thing solutions do is keep track of which projects they contain; pretty easy to keep in sync. So just create a new blank solution in VS 2005, and then add each of your projects to it, one by one. Be sure to name the solutions appropriately. (I call mine ProjectName.sln and ProjectNameVs2008.sln.)

Which is a long way of saying you should try workaround #2.

A: You can download and use Visual Studio 2008 Express editions. They're free...

A: I'd say you should restore your 2005 version from source control, assuming you have source control and a 2005 copy of the file. Otherwise, there are plenty of pages on the net that detail the changes, but unfortunately no ready-made converter program that will do it for you.

Be aware that as soon as you open the file in 2008 again, it'll be upgraded once more. Perhaps the solution (no pun intended) is to keep separate copies of the project and solution files for 2005 and 2008?

Why do you want to open it in 2005 anyway?

A: I could resolve the problem of opening a VS 2008 web service project in VS 2005. Steps to follow:
- Create a new web service project in VS 2005.
- Compile the project.
- Open the project file in Notepad.
- Copy the bold font lines. (The project-file XML markup did not survive here; the values that remain are: Debug, AnyCPU, 8.0.50727, 2.0, {3C596F22-0A57-4B9A-ABD3-C2BEFA5DA0B7}, {349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}, Library, Properties, WebService1, WebService1, true, full, false, bin\, DEBUG;TRACE, prompt, 4, pdbonly, true, bin\, TRACE, prompt, 4, Service1.asmx, Component, False, True, 3124, /, False.)
- Paste them over the web service file created in VS 2008. Do not replace the whole file (your references will go); replace only where the version is given.

My problem was resolved following these steps; I am sure yours will also be resolved.
{ "language": "en", "url": "https://stackoverflow.com/questions/25833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Maximum buffer length for sendto? How do you get the maximum number of bytes that can be passed to a sendto(..) call for a socket opened as a UDP port?

A: On Mac OS X there are different values for sending (SO_SNDBUF) and receiving (SO_RCVBUF). This is the size of the send buffer (man getsockopt):

getsockopt(sock, SOL_SOCKET, SO_SNDBUF, (int *)&optval, &optlen);

Trying to send a bigger message (on Leopard, 9216 octets on UDP sent via the local loopback) will result in "Message too long / EMSGSIZE".

A: As UDP is not connection oriented, there's no way to indicate that two packets belong together. As a result you're limited by the maximum size of a single IP packet (65535). The data you can send is somewhat less than that, because the IP packet size also includes the IP header (usually 20 bytes) and the UDP header (8 bytes).

Note that this IP packet can be fragmented to fit in smaller packets (e.g. ~1500 bytes for Ethernet). I'm not aware of any OS restricting this further.

Bonus: SO_MAX_MSG_SIZE of a UDP packet

* IPv4: 65,507 bytes
* IPv6: 65,527 bytes

A: Use getsockopt(). This site has a good breakdown of the usage and options you can retrieve.

In Windows, you can do:

int optlen = sizeof(int);
int optval;
getsockopt(socket, SOL_SOCKET, SO_MAX_MSG_SIZE, (int *)&optval, &optlen);

For Linux, according to the UDP man page, the kernel will use MTU discovery (it will check what the maximum UDP packet size is between here and the destination, and pick that), or if MTU discovery is off, it'll set the maximum size to the interface MTU and fragment anything larger. If you're sending over Ethernet, the typical MTU is 1500 bytes.
{ "language": "en", "url": "https://stackoverflow.com/questions/25841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Installing Apache Web Server on 64 Bit Mac I know that Mac OS X 10.5 comes with Apache installed, but I would like to install the latest Apache without touching the OS defaults, in case it causes problems in the future with other updates. So I have used the details located at: http://diymacserver.com/installing-apache/compiling-apache-on-leopard/

But I'm unsure how to make this the 64-bit version of Apache, as it seems to still install the 32-bit version. Any help is appreciated.

Cheers

A: Add this to your ~/.bash_profile, which means that your architecture is 64-bit and you'd like to compile Universal binaries:

export CFLAGS="-arch x86_64"

A: This page claims that a flag for gcc (maix64) should do the trick. Give it a whirl, and if you need any more help, post back here.

A: Be aware that you may run into issues with your Apache modules. If they are compiled in 32-bit mode, then you will not be able to load them into a 64-bit Apache. I had this issue with mod_python; took a bit of thinking to figure out this was the reason.

A: Don't export CFLAGS from your .bash_profile or any other dot file. Your home directory could live on for decades; the system you're currently using is transient.

There's a guide on Apple's web site, Porting UNIX/Linux Applications to Mac OS X, that talks specifically about how to make autoconf and make and other similar build systems fit into the Mac OS X Universal Binary scheme. If you're going to build cross-Unix applications on Mac OS X, you need to read and understand this guide.

That said, I strongly question why you want to build Apache 64-bit. Just because Leopard can run 64-bit software doesn't mean you want all software on your system to be 64-bit. (It's not Linux.) In fact, virtually none of the software that ships with Leopard runs 64-bit by default, and most of the applications included with Leopard only ship 32-bit. Unless you have a pressing need to run Apache 64-bit, I wouldn't bother trying to build it that way.
A: If you had read a bit further on the same site, there is some information on compiling Apache in 64-bit mode:

http://diymacserver.com/2008/10/04/update-on-64-bits-compilation/
{ "language": "en", "url": "https://stackoverflow.com/questions/25846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is a language binding? My good friend, Wikipedia, didn't give me a very good response to that question. So:

* What are language bindings?
* How do they work?

Specifically, accessing functions from code written in language X of a library written in language Y.

A: Okay, now the question has been clarified, this isn't really relevant, so I'm moving it to a new question.

Binding generally refers to a mapping of one thing to another - i.e. a datasource to a presentation object. It can typically refer to binding data from a database, or similar source (XML file, web service etc) to a presentation control or element - think list or table in HTML, combo box or data grid in desktop software.

...If that's the kind of binding you're interested in, read on...

You generally have to bind the presentation element to the datasource, not the other way around. This would involve some kind of mapping - i.e. which fields from the datasource do you want to appear in the output.

For more information in a couple of environments see:

* Data binding in .Net using Windows Forms
  * http://www.codeproject.com/KB/database/databindingconcepts.aspx
  * http://www.akadia.com/services/dotnet_databinding.html
* ASP.NET data binding
  * http://support.microsoft.com/kb/307860
  * http://www.15seconds.com/issue/040630.htm
  * http://www.w3schools.com/ASPNET/aspnet_databinding.asp
* Java data binding
  * http://www.xml.com/pub/a/2003/09/03/binding.html
* Python data binding
  * http://www.xml.com/pub/a/2005/07/27/py-xml.html
* General XML data binding
  * http://www.rpbourret.com/xml/XMLDataBinding.htm

A: Let's say you create a C library to post stuff to stackoverflow. Now you want to be able to use the same library from Python. In this case, you will write Python bindings for your library.
Also see SWIG: http://www.swig.org

A: In the context of code libraries, bindings are wrapper libraries that bridge between two programming languages, so that a library that was written for one language can also be implicitly used in another language.

For example, libsvn is the API for Subversion and was written in C. If you want to access Subversion from within Java code, you can use libsvn-java. libsvn-java depends on libsvn being installed, because libsvn-java is a mere bridge between the Java programming language and libsvn, providing an API that merely calls functions of libsvn to do the real work.

A: In Flex (ActionScript 3). Source

A data binding copies the value of a property in one object to a property in another object. You can bind the properties of the following objects: Flex components, Flex data models, and Flex data services.

The object property that provides the data is known as the source property. The object property that receives the data is known as the destination property.

The following example binds the text property of a TextInput component (the source property) to the text property of a Label component (the destination property), so that text entered in the TextInput component is displayed by the Label component:

<mx:TextInput id="LNameInput"></mx:TextInput>
...
<mx:Label text="{LNameInput.text}"></mx:Label>

Data binding is usually a simple way to bind a model to user interface components. For example, you have a class with a FirstName property. In Flex you could easily bind that property to a textbox by setting the value of the textbox to {Object.FirstName}. Then, every time that FirstName property changes, the textbox will be updated without requiring you to write any code to monitor that property for changes.

Hope that helps.

Matt
{ "language": "en", "url": "https://stackoverflow.com/questions/25865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Webpart registration error in event log We created several custom web parts for SharePoint 2007. They work fine. However, whenever they are loaded, we get an error in the event log saying:

error initializing safe control - Assembly: ...

The assembly actually loads fine. Additionally, it is correctly listed in the web.config and GAC. Any ideas about how to stop these (phantom?) errors would be appreciated.

A: You need to add a SafeControl entry to the web.config file. Have a look at the following:

<SafeControls>
    <SafeControl Assembly = "Text"
        Namespace = "Text"
        Safe = "TRUE" | "FALSE"
        TypeName = "Text"/>
    ...
</SafeControls>

http://msdn.microsoft.com/en-us/library/ms413697.aspx

A: I was having this problem too. It turned out that there was a problem with my Manifest.xml file. In the SafeControl tag for my assembly, I had the TypeName specifically defined. When I changed the TypeName to a wildcard value, the error messages in the event log stopped.

So to recap, this caused errors in the event log:

<SafeControl Assembly="AssemblyName, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bac12230d2e4a0a" Namespace="AssemblyName" TypeName="AssemblyName" Safe="True" />

This cleared them up:

<SafeControl Assembly="AssemblyName, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bac12230d2e4a0a" Namespace="AssemblyName" TypeName="*" Safe="True" />

A: It sure does sound like you have a problem with your SafeControl entry. I would try: under the Namespace and TypeName, use "*". Using wildcards in Namespace and TypeName will register all classes in all namespaces in your assembly as safe. (You generally wouldn't want to do this with 3rd-party tools.)

A: This is because of the amount of list items in the lists. Your server takes too much time to migrate all the list items and it fails. Try deleting the list items or configuring the server.

Regards,
Mariano.
{ "language": "en", "url": "https://stackoverflow.com/questions/25871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is a data binding? What is a data binding?

A: Binding generally refers to a mapping of one thing to another - i.e. a datasource to a presentation object. It can typically refer to binding data from a database, or similar source (XML file, web service etc) to a presentation control or element - think list or table in HTML, combo box or data grid in desktop software.

You generally have to bind the presentation element to the datasource, not the other way around. This would involve some kind of mapping - i.e. which fields from the datasource do you want to appear in the output.

For more information in a couple of environments see:

* Data binding in .Net using Windows Forms
  * http://www.codeproject.com/KB/database/databindingconcepts.aspx
  * http://www.akadia.com/services/dotnet_databinding.html
* ASP.NET data binding
  * http://support.microsoft.com/kb/307860
  * http://www.15seconds.com/issue/040630.htm
  * http://www.w3schools.com/ASPNET/aspnet_databinding.asp
* Java data binding
  * http://www.xml.com/pub/a/2003/09/03/binding.html
* Python data binding
  * http://www.xml.com/pub/a/2005/07/27/py-xml.html
* General XML data binding
  * http://www.rpbourret.com/xml/XMLDataBinding.htm
{ "language": "en", "url": "https://stackoverflow.com/questions/25878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What toolchains exist for Continuous Integration with C++? Continuous Integration toolchains for .NET, Java, and other languages are relatively well defined, but the C++ market seems to have a lot of diversity.

By CI "toolchain" I specifically mean tools for the build scripts, automated testing, coding standards checking, etc.

What are C++ teams using for CI toolchains?

A: Another option might be buildbot. It's written in Python, but is not just for Python apps. It can execute any script for doing your build. If you look at their success stories, there appear to be a wide variety of languages.

A: We implemented our C++ cross-platform continuous integration infrastructure using Parabuild: http://www.viewtier.com/products/parabuild/screenshots.htm

We were able to integrate every sort of Win/Mac/Linux QA tool with it, and it's really easy to install and maintain: it's a one-click installation on every platform and the web interface is very handy.

While evaluating several continuous integration servers, the main problem was that they were Java-biased; Parabuild, on the other hand, fits well into the C++ cross-platform development and QA workflow.

A: Visual Build Professional is my favorite tool for pulling together all the other tools. Windows only, of course, but it integrates with all flavors of Visual Studio and a host of test tools, source control tools, issue trackers, etc. I know that's not the entire stack, but it's a start.

A: G'day,

We actually faced this problem at a site where I was contracting previously. One bloke sat down and wrote tools, mainly shell scripts, to

* check out the current code base every hour or so and do a build to check if it was broken, and
* check out the latest good build and do a complete build and run about 8,000 regression tests.

We just couldn't find anything commercially available to do this, and so Charlie sat down and wrote this in bash shell scripts and it was running on HP-UX.
cheers,
Rob

A: As with seemingly every other task in C++, I'm just barely limping along with continuous integration.

My setup starts with Eclipse. I set it to generate make files for my projects. I have ant scripts that do the overall build tasks by running 'make all' or 'make clean' on the appropriate makefiles. These ant scripts are part of my project, and I have to update them when I add a new build configuration or a new piece to the system. It's not that bad though.

I use CruiseControl to actually run the builds. Each project (all one of them) has an ant script of its own that performs build-specific tasks (copying artifacts, processing results), calling into the project ant script to do the building.

I had to use cppunit for my testing and process the results with an xslt file I found somewhere. I also have the wrong svn revision label on each build because I can't find a suitable svn labeler. All I can find is half-completed years-old code and people arguing that other people are doing it wrong.

It looks to me like CC is a dying system, but I haven't found anything better for C++. Then again, I also feel like C++ is a dying language, so maybe it's bigger than just this.

A: We used scons for continuous integration run by a central build server. Some projects migrated to buildbot. I'm now getting into rake and considering solutions as surveyed in this blog. Fowler mentions that ThoughtWorks occasionally use rake for their build scripting in his Continuous Integration article.
{ "language": "en", "url": "https://stackoverflow.com/questions/25902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: CruiseControl.net duplicate NAnt timings I'm trying to set up the CruiseControl.NET web dashboard at the moment. So far it works nicely, but I have a problem with the NAnt Build Timing Report.

Firstly, my current ccnet.config file looks something like this:

<project name="bla">
    ...
    <prebuild>
        <nant .../>
    </prebuild>
    <tasks>
        <nant .../>
    </tasks>
    <publishers>
        <nant .../>
    </publishers>
    ...
</project>

As the build completes, the NAnt timing report displays three duplicate summaries. Is there a way to fix this without changing the project structure?

A: Apparently this can be solved by selecting only the first <buildresults> node in the web dashboard's NAntTiming.xsl. Because each duplicate summary contains the same info, this change in the <div id="NAntTimingReport"> section seems to be sufficient:

<xsl:variable name="buildresults" select="//build/buildresults[1]" />

A: Not a direct answer to your question, but you might want to check out Hudson. It has the benefit of being much easier to configure than CruiseControl. There's a bit about using it for NAnt here.
{ "language": "en", "url": "https://stackoverflow.com/questions/25914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is ASP.NET? I've been strictly in a C++ environment for years (and specialized statistical languages). Visual Studio until 2001, and Borland since. Mostly specialized desktop applications for clients.

I'm not remaining willfully ignorant of it, but over the years when I've dipped into other things, I've spent my time playing around with JavaScript, PHP, and a lot of Python.

Is "ASP.NET" the language? Is C# the language and "ASP.NET" the framework? What's a good answer to "What is ASP.NET"? Is there a correspondence between ASP.NET and anything I'd be familiar with in C++?

I know I can google the same title, but I'd rather see answers from this crowd. (Besides, in the future, I think that Google should point here for questions like that.)

A: I was going to write a lengthy answer but I felt that Wikipedia had it covered:

ASP.NET is a web application framework developed and marketed by Microsoft, that programmers can use to build dynamic web sites, web applications and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language.

So ASP.NET is Microsoft's web development framework, and the latest version is 4.0.

How do I get started? Check out the following resources:

* Learn ASP.NET
* ASP.NET Documentation
* ASP.NET Developer Center

A: ASP.NET is the framework, just like .NET. The code itself will be a mix of HTML, JavaScript (for client-side) and any .NET-compatible language. So C#, VB.NET, C++.NET, heck... even IronPython.

A: ASP.NET is a framework. It delivers:

* A class hierarchy you hook into, that allows both usage of supplied components and development of your own.
* Integration with, and easy access to, the underlying webserver.
* An event model, which is probably the "best" thing about it.
* A general abstraction from the underlying medium of HTML and HTTP.

Not sure if ASP.NET compares to any C++ frameworks you may be familiar with. Web frameworks usually tend to be unique due to the statelessness of HTTP and the relatively low-tech technologies involved (HTML, scripting, etc).

A: ASP.NET is a web application framework developed and marketed by Microsoft, that programmers can use to build dynamic web sites, web applications and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language. ASP.NET (Wikipedia)

That's the second result when searching on Google, so I'm guessing (half-expecting) that you don't understand what that means either.

Webpage development started with simple static HTML pages. That meant the client asked for a page by means of a URL and the server sent the page back exactly as it had been designed. Some time after that, several technologies emerged in order to provide a more "dynamic" or personalized experience.

Several "server-side languages" were developed (PHP, Perl, ASP...) which allowed the server to process the web page before sending it back to the client. This way, when a client requested a webpage, the server could interpret the request, process it (for example connecting to a database and fetching some results) and send it back, modifying the contents and making them "dynamic". The fact that the processing takes place on the server is what the name "server side" stands for.

So the original ASP (predecessor of ASP.NET) was a server-side language focused on serving web pages. As such it supported several shortcuts, such as the possibility to intercalate HTML and ASP source in the same file, which was very popular at the time due to PHP's approach.
It was also (as most of these languages were) a dynamic, interpreted language. ASP.NET is an evolution of that original ASP with some improvements. First, it truly (tries to) separate the presentation (HTML) from the code (.cs), which may be implemented using Visual Basic or C# syntax. It also incorporates some sort of compilation of the final ASP pages, encapsulating them into assemblies and thus improving performance. Finally, it has access to the full .NET framework, which supplies a wide number of helper classes.

So, summing up, it is a programming framework located on the server and designed to make webpages.

A: Let's say it's a technique from MS to build web applications. ASP stands for Active Server Pages; .NET is the framework behind it. C# and VB.NET are the languages which can be used, but I guess other .NET languages can also be used.

A: Take a look at MS' info for those who don't know or understand the platform: http://www.asp.net/get-started
{ "language": "en", "url": "https://stackoverflow.com/questions/25921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Summary fields in Crystal Report VS2008 I need to have a summary field on each page of the report, and on page 2 and onward the same summary has to appear at the top of the page. Anyone know how to do this?

Ex:

Page 1

Name    Value
a       1
b       3
Total   4

Page 2

Name          Value
Total Before  4
c             5
d             1
Total         10

A: Create a new Running Total Field called, for example, "RTotal". In "Field to summarize" select "Value", in "Type of summary" select "sum", and under "Evaluate" select "For each record". You can then drag this field into your report to use as the "Total" at the bottom of each page.

You cannot use this running total field in the page header too, however, because Crystal will first add the value of the first row on the page to it (so in your example it would show 9 rather than 4 at the top of page 2). To work around this, create a formula field which subtracts the current value of the Value field from the running total (e.g. {#RTotal}-{TableName.Value}), and put this formula field in your page header.

A: I do not understand your question all the way. If you need an overall summary that is repeated, you would need a sub-report that is shown in the report multiple times.
{ "language": "en", "url": "https://stackoverflow.com/questions/25938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Ruby Performance I'm pretty keen to develop my first Ruby app, as my company has finally blessed its use internally.

In everything I've read about Ruby up to v1.8, there is never anything positive said about performance, but I've found nothing about version 1.9. The last figures I saw about 1.8 had it drastically slower than just about everything out there, so I'm hoping this was addressed in 1.9.

Has performance drastically improved? Are there some concrete things that can be done with Ruby apps (or things to avoid) to keep performance at the best possible level?

A: There are some benchmarks of 1.8 vs 1.9 at http://www.rubychan.de/share/yarv_speedups.html. Overall, it looks like 1.9 is a lot faster in most cases.

A: If scalability and performance are really important to you, you can also check out Ruby Enterprise Edition. It's a custom implementation of the Ruby interpreter that's supposed to be much better about memory allocation and garbage collection. I haven't seen any objective metrics comparing it directly to JRuby, but all of the anecdotal evidence I've heard has been very, very good. This is from the same company that created Passenger (aka mod_rails), which you should definitely check out as a Rails deployment solution if you decide not to go the JRuby route.

A: Matz Ruby 1.8.6 is much slower when it comes to performance, and 1.9 and JRuby do a lot to speed it up. But the performance isn't such that it will prevent you from doing anything you want in a web application. There are many large Ruby on Rails sites that do just fine with the "slower interpreted" language. When you get to scaling out web apps, there are many more pressing performance issues than the speed of the language you are writing in.

A: I've actually heard really good things about the performance of the JVM implementation, JRuby. Completely anecdotal, but perhaps worth looking into.
See also http://en.wikipedia.org/wiki/JRuby#Performance

A: Check out "Writing Efficient Ruby Code" from Addison Wesley Professional: http://safari.oreilly.com/9780321540034

I found some very helpful and interesting insights in this short work. And if you sign up for the free 10-day trial you could read it for free. (It's 50 pages and the trial gets you (AFAIR) 100 page views.) https://ssl.safaribooksonline.com/promo

A: I am not a Ruby programmer, but I have been pretty tightly involved in a JRuby deployment lately and can thus draw some conclusions. Do not expect too much from JRuby's performance. In interpreted mode, it seems to be somewhere in the range of C Ruby. JIT mode might be faster, but only in theory.

In practice, we tried JIT mode on Glassfish for a decently-sized Rails application on a medium-sized server (dual core, 8GB RAM). And the truth is, the JITting took so freakingly much time that the server needed 20-30 minutes before it answered the first request. Memory usage was astronomical; profiling did not work because the whole system ground to a halt with a profiler attached.

Bottom line: JRuby has its merits (multithreading, solid platform, easy Java integration), but given that interpreted mode is the only mode that worked for us in practice, it may be expected to be no better performance-wise than C Ruby.

A: I'd second the recommendation of Passenger - it makes deployment and management of Rails applications trivial.
{ "language": "en", "url": "https://stackoverflow.com/questions/25950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Best programming based games Back when I was at school, I remember tinkering with a Mac game where you programmed little robots in a sort of pseudo-assembler language which could then battle each other. They could move themselves around the arena, look for opponents in different directions, and fire some sort of weapon. Pretty basic stuff, but I remember it quite fondly, even if I can't remember the name. Are there any good modern-day equivalents? A: I'm surprised that SpaceChem isn't mentioned yet. Programming with symbols, but programming nevertheless. http://spacechemthegame.com/ A: Another good one is CEEBot. It teaches C/Java-style programming in a fun, robot-programming kind of game. It is aimed at 10-15 year olds, but it is a good one. A: Colobot It's usually easy for new programmers to pick up on languages like C++ when you have a strong understanding of Java basics. Colobot allows you to program automated craft using a language almost identical to Java and to move, sense, and manipulate their environment in order to accomplish missions on a dangerous planet. A: I was also keen on these kinds of games. One modern example which I have used is http://www.robotbattle.com/. There are various others - for example the ones listed at http://www.google.com/Top/Games/Video_Games/Simulation/Programming_Games/Robotics/ A: Core Wars is the classic, of course. But Rocky's Boots is another one. Imagine! There was a time (1982) when you could sell a commercial game based on logic gates! A: If you are willing to look at single-player games like Light Bot and Manufactoria then I highly recommend RoboZZle. It has conditional commands which include function calls. This allows for complex stack manipulation. There are thousands of user-created puzzles, from pathetically obvious to mind-blowing enigmas. They have recently added support for smartphones. I also think The Codex of Alchemical Engineering is worth a look. 
A: I think .NET Terrarium is one of the best 'learn-to-program' games for the .NET platform. A: I like Ruby Warrior. It is still somewhat under development, but it is a great game with a clever interface. A: I used to have a lot of fun coding my own robot with Robocode in college. It is Java based, the API is detailed, and it's pretty easy to get a challenging robot up and running. Here is an example: public class MyFirstRobot extends Robot { public void run() { while (true) { ahead(100); turnGunRight(360); back(100); turnGunRight(360); } } public void onScannedRobot(ScannedRobotEvent e) { fire(1); } } A: Just found Light Bot. Program your robot to move around and perform tasks to complete a puzzle. Even includes subroutines. Program the bot by dragging tiles into slots. The game is very polished. Update Lightbot is now the most recent version of the game, and has versions specifically designed for kids ages 4-8 or ages 9+ (with no upper limit), and also features a kind of if statement. Screenshot of Lightbot 1: http://www.lostateminor.com/wp-content/uploads/2008/10/light-bot.jpg A: Core Wars A: There's also mySQLgame, which I found pretty amusing (shortly after finding out I suck). Here's what Casual Gameplay has to say about it. A: Kara is about programming a bug (!) and comes in various versions, e.g. Finite State Machine, Java, Turing Machine, Multithreading. Kara screenshot: http://www.swisseduc.ch/compscience/karatojava/kara/icons/kara-worldeditor.gif A: Planetwars is a game specifically written for the Google AI Contest; bots control fleets to conquer planets, and many languages are supported. A: I think the original game was called Core Wars (this Wikipedia article contains a lot of interesting links); there still seem to be programs and competitions around, for example at corewars.org. I never had the time to look into these games, but they seem like great fun. A: I'd say the most famous programming game there has been is Core Wars. 
I don't know if you can still find active "rings", although there were a lot when I tried it some time ago (4 or 5 years). A: I've never heard of Core Wars before, but it looks interesting. I do have to vouch for RoboCode, though. That's fun and challenging, especially if you have a group of people competing against each other. A: http://en.wikipedia.org/wiki/Hacker_(computer_game) http://en.wikipedia.org/wiki/Hacker_2 There is also a great hacking game the name of which I simply cannot remember. Hrm. A: Matt, I think the game you're referring to is CRobots (or one of its clones, perhaps -- my first contact was with PRobots, in Pascal, if I remember correctly). It was a lot of fun. A: While it was more logic than programming per se, one I really enjoyed back in elementary school was Rocky's Boots. It had sensors, AND gates, OR gates, NOT gates, wires, timers, and all sorts of other stuff. Fantastic program for teaching a kid logic. Go to the link and you can still play it! A: Carnage Heart for PlayStation was fun. It would let you program little mechs to do battle using a flow diagram. A: In the flash game Manufactoria you "program" a factory by laying out the conveyor belts and switches in a way that's very similar to an FSM, but more powerful. This game is really great. Give it a try, especially if you're into formal languages and automata! Manufactoria screenshot: http://www.tomdalling.com/wp-content/uploads/manufactoria-bubble-sort.png A: A game in which you have to graphically construct and train artificial neural networks in order to control a bug is Bug Brain. Bug Brain screenshot: http://www.infionline.net/~wtnewton/oldcomp/bugbrain.jpg A: The game in question was definitely Robowar for the Mac. My son had a lot of fun with it and went on to program real robots. 
As mentioned earlier by Proud, there is a wiki page for it: http://en.wikipedia.org/wiki/RoboWar Although there has not been a lot of activity surrounding the game over the last few years, there was a tournament held recently, and there is a Yahoo email group. A: If you want to step away from your keyboard, Wizards of the Coast released a game called RoboRally that is a combative programming board game. http://www.wizards.com/roborally/ A: http://www.pythonchallenge.com/ highly addictive, and a great way to learn Python A: I have to give a shout out to RobotWar, which was the first programming "game" that I played way back in the Apple II days. It was written by Silas Warner of Castle Wolfenstein fame. A: I got myself addicted to Uplink a few months ago. It's not really coding based, more hacking. It's still fun and super geeky. A: Although not strictly programming-based, I enjoyed Robot Odyssey a lot, a game where you wired logic gates to sensors and motors in a robot to make it move and react to the environment, in order to get out of a city while escaping obstacles. I played it on the Apple //e; it was one of the best games on this computer (with Lode Runner! :-)). A: You must be thinking of RoboWar. Oh how lovely it is. Still exists, though the community is slowly dying. http://robowar.sourceforge.net/RoboWar5/index.html http://tech.groups.yahoo.com/group/robowar/ A: There's also the racing car simulator TORCS, where on top of playing it as a typical end user (actually "driving" the cars), you can program robots which control the cars. Regular races are held between robots created by different people. A: Another game in this vein is Origin's Omega. Tanks are constructed on a budget, and then the user programs them in a BASIC-like language with a structured editor. The tanks battle on fields with varying terrain. A: My favourite was PCRobots back in the 90's - you could write your bot in pretty much any language that could compile a DOS executable. 
Still runs quite nicely in DOSBox :) A: Omega is one of them, I played it on the C64 :) A: There is a Spanish Java page that organizes a football league in which users program the skills of their team and the strategy. You only need to download the framework and implement a little interface, then you can simulate matches, which are shown on the screen. When you are happy with your team and strategy, you submit the code to the page and it enters the tournament. Tutorials, videos and downloads: Java Cup A: I've been trying to find the original game I was thinking of - I think it was called 'bots or something like that, and ran on my Mac back in around the System 6 days. I'll have to do some digging next time I'm back at my parents' place. Thinking more about it over the last day or so, I suppose it's really not all that different to writing brains for Bolo (http://www.lgm.com/bolo/) or bots for Quake and those sorts of games. A: The game was Robowar--I used to play a bit back in college. Here's the wiki for it. I guess it's now open source and available on Windows. A: I played RoboWar, but the programming game I remember on the Mac was Chipwits. It came out in 1984. Completely graphical, but entertaining. From what I've seen of Lego Mindstorms, the programming style is similar. A: For a modern equivalent, check out CodeRally; it's a Java programming challenge where you write a class to control a race car. The car drives around a track trying to hit checkpoints, refilling when the gas tank runs low, and avoiding obstacles. I think you can throw tires at your opponents. You can run a tournament with several players submitting code to a central server. There are several other programming games listed on IBM's high school outreach page, including Robocode, which others have already mentioned. A: One interesting historical game is the old Robot Odyssey, which was essentially a game where you programmed robots by building logic circuits out of gates and flip flops. 
I remembered it clearly when I took real EE classes over a decade later. A: AI Bots II is a programmer's game. Instead of playing the game directly, players are required to write a program to do it. There is an arena where two teams of players (called bots) are loaded. Each team needs one program, multiple instances of which will control each player of that team. [...] You write your program in C/C++. Your program runs in parallel with the game. (This description is copied directly from the game homepage.) A: I have so far found the game Cells to be quite satisfying. The whole object of the game is to program a "hive mind", which is capable of surviving, breeding, and destroying its enemies. Lots of good "minds" are provided by the author and various contributors, and it's easy to write others by looking at them. However, the author seems to be progressing on it very slowly - the last commit was about a year ago. A: In Grobots you write a program for the various types of robots in your army (think gatherers, fighters, builders). And best of all: they can replicate. Comes with its own programming language.
{ "language": "en", "url": "https://stackoverflow.com/questions/25952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "113" }
Q: Insert into ... values ( SELECT ... FROM ... ) I am trying to INSERT INTO a table using the input from another table. Although this is entirely feasible for many database engines, I always seem to struggle to remember the correct syntax for the SQL engine of the day (MySQL, Oracle, SQL Server, Informix, and DB2). Is there a silver-bullet syntax coming from an SQL standard (for example, SQL-92) that would allow me to insert the values without worrying about the underlying database? A: select * into tmp from orders Looks nice, but works only if tmp doesn't exist (it creates and fills it). (SQL Server) To insert into an existing tmp table: set identity_insert tmp on insert tmp ([OrderID] ,[CustomerID] ,[EmployeeID] ,[OrderDate] ,[RequiredDate] ,[ShippedDate] ,[ShipVia] ,[Freight] ,[ShipName] ,[ShipAddress] ,[ShipCity] ,[ShipRegion] ,[ShipPostalCode] ,[ShipCountry] ) select * from orders set identity_insert tmp off A: Both the answers I see work fine in Informix specifically, and are basically standard SQL. That is, the notation: INSERT INTO target_table[(<column-list>)] SELECT ... FROM ...; works fine with Informix and, I would expect, all DBMSs. (Five or more years ago, this was the sort of thing that MySQL did not always support; it now has decent support for this sort of standard SQL syntax and, AFAIK, it would work OK with this notation.) The column list is optional but indicates the target columns in sequence, so the first column of the result of the SELECT will go into the first listed column, etc. In the absence of the column list, the first column of the result of the SELECT goes into the first column of the target table. What can be different between systems is the notation used to identify tables in different databases - the standard has nothing to say about inter-database (let alone inter-DBMS) operations. 
With Informix, you can use the following notation to identify a table: [dbase[@server]:][owner.]table That is, you may specify a database, optionally identifying the server that hosts that database if it is not in the current server, followed by an optional owner, dot, and finally the actual table name. The SQL standard uses the term schema for what Informix calls the owner. Thus, in Informix, any of the following notations could identify a table: table "owner".table dbase:table dbase:owner.table dbase@server:table dbase@server:owner.table The owner in general does not need to be quoted; however, if you do use quotes, you need to get the owner name spelled correctly - it becomes case-sensitive. That is: someone.table "someone".table SOMEONE.table all identify the same table. With Informix, there's a mild complication with MODE ANSI databases, where owner names are generally converted to upper-case (informix is the exception). That is, in a MODE ANSI database (not commonly used), you could write: CREATE TABLE someone.table ( ... ) and the owner name in the system catalog would be "SOMEONE", rather than 'someone'. If you enclose the owner name in double quotes, it acts like a delimited identifier. With standard SQL, delimited identifiers can be used many places. With Informix, you can use them only around owner names -- in other contexts, Informix treats both single-quoted and double-quoted strings as strings, rather than separating single-quoted strings as strings and double-quoted strings as delimited identifiers. (Of course, just for completeness, there is an environment variable, DELIMIDENT, that can be set - to any value, but Y is safest - to indicate that double quotes always surround delimited identifiers and single quotes always surround strings.) Note that MS SQL Server manages to use [delimited identifiers] enclosed in square brackets. It looks weird to me, and is certainly not part of the SQL standard. 
A: If you want to insert some data into a table without wanting to write the column names: INSERT INTO CUSTOMER_INFO (SELECT CUSTOMER_NAME, MOBILE_NO, ADDRESS FROM OWNER_INFO cm) Where the tables are: CUSTOMER_INFO || OWNER_INFO ----------------------------------------||------------------------------------- CUSTOMER_NAME | MOBILE_NO | ADDRESS || CUSTOMER_NAME | MOBILE_NO | ADDRESS --------------|-----------|--------- || --------------|-----------|--------- A | +1 | DC || B | +55 | RR Result: CUSTOMER_INFO || OWNER_INFO ----------------------------------------||------------------------------------- CUSTOMER_NAME | MOBILE_NO | ADDRESS || CUSTOMER_NAME | MOBILE_NO | ADDRESS --------------|-----------|--------- || --------------|-----------|--------- A | +1 | DC || B | +55 | RR B | +55 | RR || A: Two approaches for INSERT INTO with a SELECT sub-query. * *With a SELECT subquery returning one row. *With a SELECT subquery returning multiple rows. 1. Approach with a SELECT subquery returning one row. INSERT INTO <table_name> (<field1>, <field2>, <field3>) VALUES ('DUMMY1', (SELECT <field> FROM <table_name> ),'DUMMY2'); In this case, it assumes the SELECT sub-query returns only one row of results, based on a WHERE condition or SQL aggregate functions like SUM, MAX, AVG, etc. Otherwise it will throw an error. 2. Approach with a SELECT subquery returning multiple rows. INSERT INTO <table_name> (<field1>, <field2>, <field3>) SELECT 'DUMMY1', <field>, 'DUMMY2' FROM <table_name>; The second approach will work for both cases. 
A: To add something to the first answer, when we want only a few records from another table (in this example only one): INSERT INTO TABLE1 (COLUMN1, COLUMN2, COLUMN3, COLUMN4) VALUES (value1, value2, (SELECT COLUMN_TABLE2 FROM TABLE2 WHERE COLUMN_TABLE2 like "blabla"), value4); A: If you go the INSERT VALUES route to insert multiple rows, make sure to delimit the VALUES into sets using parentheses, so: INSERT INTO `receiving_table` (id, first_name, last_name) VALUES (1002,'Charles','Babbage'), (1003,'George', 'Boole'), (1001,'Donald','Chamberlin'), (1004,'Alan','Turing'), (1005,'My','Widenius'); Otherwise MySQL objects that "Column count doesn't match value count at row 1", and you end up writing a trivial post when you finally figure out what to do about it. A: Instead of the VALUES part of the INSERT query, just use a SELECT query, as below. INSERT INTO table1 ( column1 , 2, 3... ) SELECT col1, 2, 3... FROM table2 A: Most of the databases follow the basic syntax, INSERT INTO TABLE_NAME SELECT COL1, COL2 ... FROM TABLE_YOU_NEED_TO_TAKE_FROM ; Every database I have used follows this syntax, namely DB2, SQL Server, MySQL, and PostgreSQL. A: This can be done without specifying the columns in the INSERT INTO part if you are supplying values for all columns in the SELECT part. Let's say table1 has two columns. This query should work: INSERT INTO table1 SELECT col1, col2 FROM table2 This WOULD NOT work (value for col2 is not specified): INSERT INTO table1 SELECT col1 FROM table2 I'm using MS SQL Server. I don't know how other RDBMSs work. A: This is another example using values with select: INSERT INTO table1(desc, id, email) SELECT "Hello World", 3, email FROM table2 WHERE ... A: Just use parentheses to put the SELECT clause into the INSERT. 
For example, like this: INSERT INTO Table1 (col1, col2, your_desired_value_from_select_clause, col3) VALUES ( 'col1_value', 'col2_value', (SELECT col_Table2 FROM Table2 WHERE IdTable2 = 'your_satisfied_value_for_col_Table2_selected'), 'col3_value' ); A: To get only one value in a multi-value INSERT from another table, I did the following in SQLite3: INSERT INTO column_1 ( val_1, val_from_other_table ) VALUES('val_1', (SELECT val_2 FROM table_2 WHERE val_2 = something)) A: Simple insertion when the table column sequence is known: Insert into Table1 values(1,2,...) Simple insertion mentioning columns: Insert into Table1(col2,col4) values(1,2) Bulk insertion when the number of selected columns of a table (#table2) equals that of the insertion table (Table1): Insert into Table1 {Column sequence} Select * -- column sequence should be same. from #table2 Bulk insertion when you want to insert only into the desired columns of a table (Table1): Insert into Table1 (Column1,Column2 ....Desired Column from Table1) Select Column1,Column2..desired column from #table2 A: If you are creating the table first, you can use this: select * INTO TableYedek From Table This method inserts the values, but by creating a new copy of the table. A: Try: INSERT INTO table1 ( column1 ) SELECT col1 FROM table2 This is standard ANSI SQL and should work on any DBMS. It definitely works for: * *Oracle *MS SQL Server *MySQL *Postgres *SQLite v3 *Teradata *DB2 *Sybase *Vertica *HSQLDB *H2 *AWS RedShift *SAP HANA *Google Spanner A: Here is another example where the source is taken from more than one table: INSERT INTO cesc_pf_stmt_ext_wrk( PF_EMP_CODE , PF_DEPT_CODE , PF_SEC_CODE , PF_PROL_NO , PF_FM_SEQ , PF_SEQ_NO , PF_SEP_TAG , PF_SOURCE) SELECT PFl_EMP_CODE , PFl_DEPT_CODE , PFl_SEC , PFl_PROL_NO , PF_FM_SEQ , PF_SEQ_NO , PFl_SEP_TAG , PF_SOURCE FROM cesc_pf_stmt_ext, cesc_pfl_emp_master WHERE pfl_sep_tag LIKE '0' AND pfl_emp_code=pf_emp_code(+); COMMIT; A: Here's how to insert from multiple tables. 
This particular example is where you have a mapping table in a many-to-many scenario: insert into StudentCourseMap (StudentId, CourseId) SELECT Student.Id, Course.Id FROM Student, Course WHERE Student.Name = 'Paddy Murphy' AND Course.Name = 'Basket weaving for beginners' (I realise matching on the student name might return more than one value but you get the idea. Matching on something other than an Id is necessary when the Id is an Identity column and is unknown.) A: You could try this if you want to insert all columns, using SELECT * INTO: SELECT * INTO Table2 FROM Table1; A: I actually prefer the following in SQL Server 2008: SELECT Table1.Column1, Table1.Column2, Table2.Column1, Table2.Column2, 'Some String' AS SomeString, 8 AS SomeInt INTO Table3 FROM Table1 INNER JOIN Table2 ON Table1.Column1 = Table2.Column3 It eliminates the step of adding the Insert () set, and you just select which values go in the table. A: This worked for me: insert into table1 select * from table2 The sentence is a bit different from Oracle's. A: INSERT INTO yourtable SELECT fielda, fieldb, fieldc FROM donortable; This works on all DBMSs. A: For Microsoft SQL Server, I will recommend learning to interpret the SYNTAX provided on MSDN. With Google, it's easier than ever to look up syntax. For this particular case, try Google: insert site:microsoft.com The first result will be http://msdn.microsoft.com/en-us/library/ms174335.aspx; scroll down to the example ("Using the SELECT and EXECUTE options to insert data from other tables") if you find it difficult to interpret the syntax given at the top of the page. 
[ WITH <common_table_expression> [ ,...n ] ] INSERT { [ TOP ( expression ) [ PERCENT ] ] [ INTO ] { <object> | rowset_function_limited [ WITH ( <Table_Hint_Limited> [ ...n ] ) ] } { [ ( column_list ) ] [ <OUTPUT Clause> ] { VALUES ( { DEFAULT | NULL | expression } [ ,...n ] ) [ ,...n ] | derived_table <<<<------- Look here ------------------------ | execute_statement <<<<------- Look here ------------------------ | <dml_table_source> <<<<------- Look here ------------------------ | DEFAULT VALUES } } } [;] The same approach applies to any other RDBMS out there. There is no point in remembering all the syntax for all products, IMO. A: INSERT INTO FIRST_TABLE_NAME (COLUMN_NAME) SELECT COLUMN_NAME FROM ANOTHER_TABLE_NAME WHERE CONDITION; A: Claude Houle's answer should work fine, and you can also have multiple columns and other data as well: INSERT INTO table1 ( column1, column2, someInt, someVarChar ) SELECT table2.column1, table2.column2, 8, 'some string etc.' FROM table2 WHERE table2.ID = 7; I've only used this syntax with Access, SQL 2000/2005/Express, MySQL, and PostgreSQL, so those should be covered. It should also work with SQLite3. A: The best way to insert multiple records from other tables: INSERT INTO dbo.Users ( UserID , Full_Name , Login_Name , Password ) SELECT UserID , Full_Name , Login_Name , Password FROM Users_Table (INNER JOIN / LEFT JOIN ...) (WHERE CONDITION...) (OTHER CLAUSE) A: In Informix it works as Claude said: INSERT INTO table (column1, column2) VALUES (value1, value2); A: Postgres supports the following: create table company.monitor2 as select * from company.monitor;
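The portable INSERT INTO ... SELECT form that most of the answers converge on can be sanity-checked quickly against an embedded database. Below is a minimal sketch using Python's built-in sqlite3 module; the source/target table and column names are invented purely for the demonstration:

```python
# Quick check of the standard INSERT INTO ... SELECT syntax against SQLite.
# The table and column names here are made up for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE source (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE target (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO source VALUES (?, ?)",
                [(1, "alpha"), (2, "beta"), (3, "gamma")])

# The portable form: a column list on the target, and a SELECT in place of VALUES.
cur.execute("INSERT INTO target (id, name) "
            "SELECT id, name FROM source WHERE id > 1")

print(cur.execute("SELECT id, name FROM target ORDER BY id").fetchall())
# -> [(2, 'beta'), (3, 'gamma')]
```

The same statement, with the driver-specific connection code swapped out, should run unchanged on the other engines listed above, since only the column-list-plus-SELECT shape is relied on.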
{ "language": "en", "url": "https://stackoverflow.com/questions/25969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1756" }
Q: How can I get the name of the executing .exe? The Compact Framework doesn't support Assembly.GetEntryAssembly to determine the launching .exe. So is there another way to get the name of the executing .exe? EDIT: I found the answer on Peter Foot's blog: http://peterfoot.net/default.aspx Here is the code: byte[] buffer = new byte[MAX_PATH * 2]; int chars = GetModuleFileName(IntPtr.Zero, buffer, MAX_PATH); if (chars > 0) { string assemblyPath = System.Text.Encoding.Unicode.GetString(buffer, 0, chars * 2); } [DllImport("coredll.dll", SetLastError = true)] private static extern int GetModuleFileName(IntPtr hModule, byte[] lpFilename, int nSize); A: I am not sure whether it works from managed code (or even the Compact Framework), but in Win32 you can call GetModuleFileName to find the running exe file. MSDN: GetModuleFileName A: string exefile = Assembly.GetExecutingAssembly().GetName().CodeBase; But if you put it in a DLL assembly, I believe it will give you the assembly file name. The same call on the "Full" framework would return the .exe file with a "file:\" prefix. A: In managed code, I think you can use this: http://msdn.microsoft.com/en-us/library/system.windows.forms.application.executablepath.aspx Application.ExecutablePath
{ "language": "en", "url": "https://stackoverflow.com/questions/25975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I measure the similarity between two images? I would like to compare a screenshot of one application (could be a Web page) with a previously taken screenshot to determine whether the application is displaying itself correctly. I don't want an exact match comparison, because the aspect could be slightly different (in the case of a Web app, depending on the browser, some element could be at a slightly different location). It should give a measure of how similar the screenshots are. Is there a library / tool that already does that? How would you implement it? A: You'll need pattern recognition for that. To determine small differences between two images, Hopfield nets work fairly well and are quite easy to implement. I don't know of any available implementations, though. A: This depends entirely on how smart you want the algorithm to be. For instance, here are some issues: * *cropped images vs. an uncropped image *images with text added vs. without *mirrored images The easiest and simplest algorithm I've seen for this is just to do the following steps to each image: * *scale to something small, like 64x64 or 32x32; disregard aspect ratio; use a combining scaling algorithm instead of nearest pixel *scale the color ranges so that the darkest is black and the lightest is white *rotate and flip the image so that the lightest color is top left, and then top-right is next darker, bottom-left is next darker (as far as possible, of course) Edit A combining scaling algorithm is one that, when scaling 10 pixels down to one, does so using a function that takes the color of all those 10 pixels and combines them into one. It can be done with algorithms like averaging, mean-value, or more complex ones like bicubic splines. Then calculate the mean distance pixel-by-pixel between the two images. 
To look up a possible match in a database, store the pixel colors as individual columns in the database, index a bunch of them (but not all, unless you use a very small image), and do a query that uses a range for each pixel value, i.e. every image where the pixel in the small image is between -5 and +5 of the image you want to look up. This is easy to implement, and fairly fast to run, but of course won't handle more advanced differences. For that you need much more advanced algorithms. A: A Ruby solution can be found here From the readme: Phashion is a Ruby wrapper around the pHash library, "perceptual hash", which detects duplicate and near duplicate multimedia files A: How to measure similarity between two images entirely depends on what you would like to measure, for example: contrast, brightness, modality, noise... and then choose the best suitable similarity measure there is for you. You can choose from MAD (mean absolute difference) and MSD (mean squared difference), which are good for measuring brightness... There is also CR (correlation coefficient), which is good at representing the correlation between two images. You could also choose from histogram-based similarity measures like SDH (standard deviation of difference image histogram) or multimodality similarity measures like MI (mutual information) or NMI (normalized mutual information). Because these similarity measures are costly in time, it is advisable to scale images down before applying them. A: I wonder (and I'm really just throwing the idea out there to be shot down) if something could be derived by subtracting one image from the other, and then compressing the resulting image as a JPEG or GIF, and taking the file size as a measure of similarity. If you had two identical images, you'd get a white box, which would compress really well. The more the images differed, the more complex it would be to represent, and hence the less compressible. 
Probably not an ideal test, and probably much slower than necessary, but it might work as a quick and dirty implementation. A: The 'classic' way of measuring this is to break the image up into some canonical number of sections (say a 10x10 grid), then compute a histogram of RGB values inside each cell and compare the corresponding histograms. This type of algorithm is preferred because of both its simplicity and its invariance to scaling and (small!) translation. A: You might look at the code for the open source tool findimagedupes, though it appears to have been written in Perl, so I can't say how easy it will be to parse... Reading the findimagedupes page that I liked, I see that there is a C++ implementation of the same algorithm. Presumably this will be easier to understand. And it appears you can also use gqview. A: Use a normalised colour histogram. (Read the section on applications here.) They are commonly used in image retrieval/matching systems and are a standard way of matching images that is very reliable, relatively fast and very easy to implement. Essentially a colour histogram will capture the colour distribution of the image. This can then be compared with another image to see if the colour distributions match. This type of matching is pretty resilient to scaling (once the histogram is normalised), and rotation/shifting/movement etc. Avoid pixel-by-pixel comparisons, as if the image is rotated/shifted slightly it may lead to a large difference being reported. Histograms would be straightforward to generate yourself (assuming you can get access to pixel values), but if you don't feel like it, the OpenCV library is a great resource for doing this kind of stuff. Here is a PowerPoint presentation that shows you how to create a histogram using OpenCV. A: Well, not to answer your question directly, but I have seen this happen. 
Microsoft recently launched a tool called PhotoSynth, which does something very similar to determine overlapping areas in a large number of pictures (which could be of different aspect ratios). I wonder if they have any available libraries or code snippets on their blog. A: To expand on Vaibhav's note, hugin is an open-source 'autostitcher' which should have some insight on the problem. A: There's software for content-based image retrieval, which does (partially) what you need. All references and explanations are linked from the project site and there's also a short text book (Kindle): LIRE A: Don't video encoding algorithms like MPEG compute the difference between each frame of a video so they can just encode the delta? You might look into how video encoding algorithms compute those frame differences. Look at this open source image search application http://www.semanticmetadata.net/lire/. It describes several image similarity algorithms, three of which are from the MPEG-7 standard: ScalableColor, ColorLayout, EdgeHistogram and Auto Color Correlogram. A: You could use a pure mathematical approach of O(n^2), but it will be useful only if you are certain that there's no offset or something like that. (Although if you have a few objects with homogeneous coloring it will still work pretty well.) Anyway, the idea is to compute the normalized dot product of the two matrices: C = sum(Pij*Qij)^2/(sum(Pij^2)*sum(Qij^2)). This formula is actually the squared cosine of the angle between the matrices viewed as vectors. The greater the similarity (say Pij = Qij), the closer C is to 1; if the images are completely different - say the bright pixels of P and Q never coincide, so every product Pij*Qij is nearly zero - then C approaches zero. A: You can use a Siamese Network to see if the two images are similar or dissimilar by following this tutorial. 
This tutorial clusters the similar images, whereas you can use L2 distance to measure the similarity of two images. A: Beyond Compare has pixel-by-pixel comparison for images. A: If this is something you will be doing on an occasional basis and doesn't need automating, you can do it in an image editor that supports layers, such as Photoshop or Paint Shop Pro (probably GIMP or Paint.Net too, but I'm not sure about those). Open both screen shots, and put one as a layer on top of the other. Change the layer blending mode to Difference, and everything that's the same between the two will become black. You can move the top layer around to minimize any alignment differences. A: Well, a really base-level method would be to go through every pixel colour and compare it with the corresponding pixel colour in the second image - but that's probably a very, very slow solution.
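The normalised-histogram suggestion above can be sketched in a few lines. This assumes the images are already available as flat lists of 0-255 pixel values; in practice you would read them with OpenCV or PIL, as mentioned, and the 16-bin quantization and histogram-intersection metric are just one reasonable choice:

```python
from collections import Counter

def normalized_histogram(pixels, bins=16):
    """Quantize 0-255 pixel values into `bins` buckets and normalize
    so the histogram sums to 1 (making it invariant to image size)."""
    counts = Counter(p * bins // 256 for p in pixels)
    total = len(pixels)
    return [counts.get(b, 0) / total for b in range(bins)]

def histogram_similarity(pixels_a, pixels_b):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    ha = normalized_histogram(pixels_a)
    hb = normalized_histogram(pixels_b)
    return sum(min(a, b) for a, b in zip(ha, hb))

# A slightly brightened copy of the same gradient still scores high...
base = list(range(0, 200))
brighter = [min(p + 10, 255) for p in base]
print(histogram_similarity(base, brighter))      # close to 1.0
# ...while a flat white image shares no buckets with the gradient.
print(histogram_similarity(base, [255] * 200))   # 0.0
```

Note how a small global brightness shift barely changes the score, which is exactly why the histogram answers above recommend this over pixel-by-pixel comparison.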
{ "language": "en", "url": "https://stackoverflow.com/questions/25977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: What's the simplest way to connect to a .NET remote server object Given that my client code knows everything it needs to about the remoting object, what's the simplest way to connect to it? This is what I'm doing at the moment: ChannelServices.RegisterChannel(new HttpChannel(), false); RemotingConfiguration.RegisterWellKnownServiceType( typeof(IRemoteServer), "RemoteServer.rem", WellKnownObjectMode.Singleton); MyServerObject = (IRemoteServer)Activator.GetObject( typeof(IRemoteServer), String.Format("tcp://{0}:{1}/RemoteServer.rem", server, port)); A: The first two lines are in the server-side code, for marshaling out the server object, yes? In that case, yes, the third line is the simplest you can get at client-side. In addition, you can serve out additional server-side objects from the MyServerObject instance, if you include public accessors for them in the IRemoteServer interface, so accessing those objects becomes a simple matter of method calls or property accesses on your main server object, and you don't have to use the Activator for every single thing: //obtain another MarshalByRef object of type ISessionManager: ISessionManager sessionManager = MyServerObject.GetSessionManager(); A: WCF. I have used IPC before there was WCF, and believe me, IPC is a bear. And it isn't documented fully/correctly. What's the simplest way to connect to a .NET remote server object? WCF.
{ "language": "en", "url": "https://stackoverflow.com/questions/25982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Secure Online Highscore Lists for Non-Web Games I'm playing around with a native (non-web) single-player game I'm writing, and it occurred to me that having a daily/weekly/all-time online highscore list (think Xbox Live Leaderboard) would make the game much more interesting, adding some (small) amount of community and competition. However, I'm afraid people would see such a feature as an invitation to hacking, which would discourage regular players due to impossibly high scores. I thought about the obvious ways of preventing such attempts (public/private key encryption, for example), but I've figured out reasonably simple ways hackers could circumvent all of my ideas (extracting the public key from the binary and thus sending fake encrypted scores, for example). Have you ever implemented an online highscore list or leaderboard? Did you find a reasonably hacker-proof way of implementing this? If so, how did you do it? What are your experiences with hacking attempts? A: I honestly don't think it's possible. I've done it before using pretty simple key encryption with a compressed binary, which worked well enough for the security I required, but I honestly think that if somebody considers cracking your online high score table a hack, it will be done. There are some pretty sad people out there who also happen to be pretty bright; unless you can get them all laid, it's a lost cause. A: I've been doing some of this with my Flash games, and it's a losing battle really. Especially for ActionScript, which can be decompiled into somewhat readable code without too much effort. The way I've been doing it is a rather conventional approach of sending the score and player name in plain text and then a hash of the two (properly salted). Very few people are determined enough to take the effort to figure that out, and the few who are would do it anyway, negating all the time you put into it.
To summarize, my philosophy is to spend the time on making the game better and just make it hard enough to cheat. One thing that might be pretty effective is to have the game submit the score to the server several times as you are playing, sending a bit of gameplay information each time, allowing you to validate if the score is "realistic". But that might be a bit over-the-top really. A: If your game has a replay system built in, you can submit replays to the server and have the server calculate the score from the replay. This method isn't perfect; you can still cheat by slowing down the game (if it is action-based), or by writing a bot. A: At the end of the day, you are relying on trusting the client. If the client sends replays to the server, it is easy enough to replicate or modify a successful playthrough and send that to the server. Your best bet is to raise the bar for cheating above what a player would deem worth surmounting. To do this, there are a number of proven (but oft-unmentioned) techniques you can use: * *Leave blacklisted cheaters in a honeypot. They can see their own scores, but no one else can. Unless they verify by logging in with a different account, they think they have successfully hacked your game. *When someone is flagged as a cheater, defer any account repercussions from transpiring until a given point in the future. Make this point random, within one to three days. Typically, a cheater will try multiple methods and will eventually succeed. By deferring account status feedback until a later date, they fail to understand what got them caught. *Capture all game user commands and send them to the server. Verify them against other scores within a given delta. For instance, if the player used the shoot action 200 times and obtained a score of 200,000, but the neighboring players in the game shot 5,000 times to obtain a score of 210,000, it may trigger a threshold that flags the person for further or human investigation.
*Add value and persistence to your user accounts. If your user accounts have unlockables for your game, or if your game requires purchase, the weight of a ban is greater as the user cannot regain his previous account status by simply creating a new account through a web-based proxy. A: That's a really hard question. I've never implemented such a thing, but here's a simple approximation. Your main concern is hackers guessing what it is your application is doing and then sending their own results. Well, first of all, unless your application is a great success I wouldn't be worried. Doing such a thing is extremely difficult. Encryption won't help with the problem. You see, encryption helps to protect the data on its way, but it doesn't protect either of the sides of the transaction before the data is encrypted (which is where the main vulnerability may be). So if you encrypt the score, the data will remain private, but it won't be safe. If you are really worried about it, I would suggest obfuscating the code and designing the score system in a way in which it is not completely obvious what it is doing. Here we can borrow some things from an encryption protocol. Here is an example: * *Let's say the score is some number m *Compute some kind of check over the score (for example the CRC or any other system you see fit. In fact, if you just invent one, no matter how lame it is, it will work better) *Obtain the private key of the user (D) from your remote server (over a secure connection, obviously). You're the only one who knows this key. *Compute X=m^D mod n (n being the public modulus of your public/private key algorithm) (that is, encrypt it :P) As you see, that's just obfuscation of another kind. You can go down that way as long as you want. For example, you can look up the nearest two prime numbers to X and use them to encrypt the CRC and send it also to the server, so you'll have the CRC and the score separately and with different encryption schemes.
If you use that in conjunction with obfuscation, I'd say that would be difficult to hack. Nonetheless, even that could be reverse engineered; it all depends on the interest and ability of the hacker, but ... seriously, what kind of freak takes so much effort to change their results in a game? (Unless it's WoW or something) One last note: Obfuscator for .NET Obfuscator for Delphi/C++ Obfuscator for assembler (x86) A: No solution is ever going to be perfect while the game is running on a system under the user's control, but there are a few steps you could take to make hacking the system more trouble. In the end, the goal can only be to make hacking the system more trouble than it's worth. * *Send some additional information with the high score requests to validate on the server side. If you get 5 points for every X, and the game only contains 10 Xs, then you've got some extra hoops to make the hacker jump through to get their score accepted as valid. *Have the server send a random challenge which must be met with a few bytes of the game's binary from that offset. That means the hacker must keep a pristine copy of the binary around (just a bit more trouble). *If you have license keys, require high scores to include them, so you can ban people caught hacking the system. This also lets you track invalid attempts as defined above, to ban people testing out the protocol before they ever even submit a valid score. All in all though, getting the game popular enough for people to care to hack it is probably a far bigger challenge. A: As the other answer says, you are forced to trust a potentially malicious client, and a simple deterrent plus a little human monitoring is going to be enough for a small game. If you want to get fancy, you then have to look for fraud patterns in the score data, similar to a credit card company looking at charge data.
The more state the client communicates to your server, the easier it potentially is to find a pattern of correct or incorrect behavior via code. For example, say that the client had to upload a time-based audit log of the score (which maybe you can also use to let other clients watch the top games); the server can then validate whether the score log breaks any of the game rules. In the end, this is still about making it expensive enough to discourage cheating the scoreboard. You would want a system where you can always improve the (easier-to-update) server code to deal with any new attacks on your validation system. A: @Martin. This is how I believe Mario Kart Wii works. The added bonus is that you can let all the other players watch how the high score holder got the high score. The funny thing about this is that if you check out the fastest "Grumble Volcano" time trial, you'll see that somebody found a shortcut that let you skip 95% of the track. I'm not sure if they still have that up as the fastest time. A: You can't do it on a nontrusted client platform. In practice it is possible to defeat even some "trusted" platforms. There are various attacks which are impossible to detect in the general case - mainly modifying variables in memory. If you can't trust your own program's variables, you can't really achieve very much. The other techniques outlined above may help, but don't solve the basic problem of running on a nontrusted platform. Out of interest, are you sure that people will try to hack a high score table? I have had a game online for over two years now with a trivially crackable high score table. Many people have played it, but I have no evidence that anyone's tried to crack the high scores. A: Usually, the biggest defender against cheating and hacking is a community watch. If a score seems rather suspicious, a user can report the score for cheating. And if enough people report that score, the replay can be checked by the admins for validity.
It is fairly easy to see the difference between a bot and an actual player, if there's already a bunch of players playing the game in full legitimacy. The admins must oversee only those scores that get questioned, because there is a small chance that a bunch of users might bandwagon to remove a perfectly hard-earned score. And the admins only have to view the few scores that do get reported, so it's not too much of their time, even less for a small game. Even just knowing that if you work hard to make a bot, it can be shot down again by the report system, is a deterrent in itself. Perhaps even encrypting the replay data wouldn't hurt, either. Replay data is often small, and encrypting it wouldn't take too much more space. And to help improve that, the server itself could replay the control log and make sure it matches up with the score achieved. If there's something the anti-cheat system can't find, users will find it.
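The "score plus salted hash" approach described in the answers above can be sketched like this. The secret and field layout here are illustrative, and as the answers stress, this only raises the bar: anyone who digs the key out of the client binary can still forge submissions.

```python
import hmac
import hashlib

# Hypothetical shared secret baked into the client binary.
SECRET = b"hypothetical-key-baked-into-the-client"

def sign_score(player, score):
    """Client side: compute an HMAC over the fields being submitted."""
    payload = f"{player}:{score}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return player, score, sig

def verify_score(player, score, sig):
    """Server side: recompute the HMAC and compare in constant time."""
    payload = f"{player}:{score}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

player, score, sig = sign_score("alice", 12345)
print(verify_score(player, score, sig))    # True
print(verify_score(player, 999999, sig))   # False: tampered score fails
```

This stops casual tampering with the submission in transit, which, per the Flash-games answer, is enough to deter "very few people are determined enough" attackers; the honeypot and replay-validation ideas above cover the rest.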
{ "language": "en", "url": "https://stackoverflow.com/questions/25999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Develop on local Oracle instance I want our team to develop against local instances of an Oracle database. With MS SQL, I can use SQL Express Edition. What are my options? A: I have had a lot of success using Oracle 10g Express Edition. It comes with Oracle Application Express, which allows simple admin and creation of software via a web interface. It is limited to 4 GB of disk space, 1 GB of RAM and will only use 1 processor. It's free and in my experience has been 100% reliable. It can easily be hosted within a virtual machine. Also, Oracle SQL Developer is a cross-platform application that can be used with any version of Oracle and is also free. Oracle 10g is superb. Go for it :-) A: I'm happy with Oracle XE for development purposes. I do have this piece of wisdom to share; if you're having problems like ORA-12519: TNS:no appropriate service handler found or ORA-12560: TNS:protocol adapter error from time to time then try to change your PROCESSES parameter, logon to Oracle using sys as sysdba and execute the following: ALTER SYSTEM SET PROCESSES=150 SCOPE=SPFILE; After changing the PROCESSES parameter restart your Oracle service. A: Oracle allows developers to download and use Oracle for free for the purpose of developing software (at least for the initial prototype, best to read the license terms). Downloads here. A: Oracle has an express edition as well. I believe it is more limited though (IIRC, you can only have one database on an instance): Oracle XE A: We ended up using Oracle XE. Install client, install express, reboot, it just works. A: I don't recommend Oracle XE. My co-workers and I have been doing a project in Oracle and got severely tripped up after trying to use XE for our local development instances. The database worked fine until we started running local stress tests, at which point it started dropping connections.
I don't know whether this is an intentional, documented limitation or if perhaps we each just hit a weird bug, but I strongly recommend that you stay away from XE. When we both switched over to the full version, our problems immediately went away. Also, Oracle doesn't require any kind of licensing confirmation for the full server; you have to click something to say that you have indeed acquired a license, but it doesn't make you prove it. So if you indeed have a license to use Oracle, there's no reason why you can't just install the full version on your development machines.
{ "language": "en", "url": "https://stackoverflow.com/questions/26002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Iterating over a complex Associative Array in PHP Is there an easy way to iterate over an associative array of this structure in PHP: The array $searches has a numbered index, with between 4 and 5 associative parts. So I not only need to iterate over $searches[0] through $searches[n], but also $searches[0]["part0"] through $searches[n]["partn"]. The hard part is that different indexes have different numbers of parts (some might be missing one or two). Thoughts on doing this in a way that's nice, neat, and understandable? A: You should be able to use a nested foreach statement, from the PHP manual: /* foreach example 4: multi-dimensional arrays */ $a = array(); $a[0][0] = "a"; $a[0][1] = "b"; $a[1][0] = "y"; $a[1][1] = "z"; foreach ($a as $v1) { foreach ($v1 as $v2) { echo "$v2\n"; } } A: Nest two foreach loops: foreach ($array as $i => $values) { print "$i {\n"; foreach ($values as $key => $value) { print " $key => $value\n"; } print "}\n"; } A: I know it's question necromancy, but iterating over multidimensional arrays is easy with SPL iterators $iterator = new RecursiveIteratorIterator(new RecursiveArrayIterator($array)); foreach($iterator as $key=>$value) { echo $key.' -- '.$value.'<br />'; } See * *http://php.net/manual/en/spl.iterators.php A: Looks like a good place for a recursive function, esp. if you'll have more than two levels of depth. function doSomething(&$complex_array) { foreach ($complex_array as $n => $v) { if (is_array($v)) doSomething($v); else { /* do whatever you want to do with a single node */ } } } A: Can you just loop over all of the "part[n]" items and use isset to see if they actually exist or not? A: I'm really not sure what you mean here - surely a pair of foreach loops does what you need? foreach($array as $id => $assoc) { foreach($assoc as $part => $data) { // code } } Or do you need something recursive? I'd be able to help more with example data and a context in how you want the data returned.
A: Consider this multidimensional array. I hope this function will help. $n = array('customer' => array('address' => 'Kenmore street', 'phone' => '121223'), 'consumer' => 'wellington consumer', 'employee' => array('name' => array('fname' => 'finau', 'lname' => 'kaufusi'), 'age' => 32, 'nationality' => 'Tonga') ); iterator($n); function iterator($arr){ foreach($arr as $key => $val){ if(is_array($val)){ iterator($val); } else { echo '<p>key: '.$key.' | value: '.$val.'</p>'; } //filter the $key and $val here and do what you want } }
{ "language": "en", "url": "https://stackoverflow.com/questions/26007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: What is the best way to connect and use a sqlite database from C# I've done this before in C++ by including sqlite.h, but is there a similarly easy way in C#? A: I'm with Bruce. I AM using http://system.data.sqlite.org/ with great success as well. Here's a simple class example that I created: using System; using System.Text; using System.Data; using System.Data.SQLite; namespace MySqlLite { class DataClass { private SQLiteConnection sqlite; public DataClass() { //This part killed me in the beginning. I was specifying "DataSource" //instead of "Data Source" sqlite = new SQLiteConnection("Data Source=/path/to/file.db"); } public DataTable selectQuery(string query) { SQLiteDataAdapter ad; DataTable dt = new DataTable(); try { SQLiteCommand cmd; sqlite.Open(); //Initiate connection to the db cmd = sqlite.CreateCommand(); cmd.CommandText = query; //set the passed query ad = new SQLiteDataAdapter(cmd); ad.Fill(dt); //fill the datasource } catch(SQLiteException ex) { //Add your exception code here. } sqlite.Close(); return dt; } } } There is also a NuGet package available: System.Data.SQLite. A: There is a list of Sqlite wrappers for .Net at http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers. From what I've heard http://sqlite.phxsoftware.com/ is quite good. This particular one lets you access Sqlite through ADO.Net just like any other database. A: There's also now this option: http://code.google.com/p/csharp-sqlite/ - a complete port of SQLite to C#. A: https://github.com/praeclarum/sqlite-net is now probably the best option. A: Microsoft.Data.Sqlite by Microsoft has over 9000 downloads every day, so I think you are safe using that one.
Example usage from the documentation: using (var connection = new SqliteConnection("Data Source=hello.db")) { connection.Open(); var command = connection.CreateCommand(); command.CommandText = @" SELECT name FROM user WHERE id = $id "; command.Parameters.AddWithValue("$id", id); using (var reader = command.ExecuteReader()) { while (reader.Read()) { var name = reader.GetString(0); Console.WriteLine($"Hello, {name}!"); } } } A: Another way of using an SQLite database in the .NET Framework is to use Fluent-NHibernate. [It is a .NET module which wraps around NHibernate (an ORM module - Object Relational Mapping) and allows you to configure NHibernate programmatically (without XML files) with the fluent pattern.] Here is a brief 'Getting started' description of how to do this in C#, step by step: https://github.com/jagregory/fluent-nhibernate/wiki/Getting-started It includes source code as a Visual Studio project. A: Here I am trying to help you do the job step by step: (this may be the answer to other questions) * *Go to this address; down the page you can see something like "List of Release Packages". Based on your system and .NET Framework version, choose the right one for you. For example, if you want to use .NET Framework 4.6 on 64-bit Windows, choose this version and download it. *Then install the file somewhere on your hard drive, just like any other software. *Open Visual Studio and your project. Then in Solution Explorer, right-click on "References" and choose "Add Reference...". *Click the browse button and choose where you installed the previous file, go to .../bin/System.Data.SQLite.dll and click the Add and then OK buttons. That is pretty much it. Now you can use SQLite in your project.
To use it in your project at the code level, you may use the example code below: * *Make a connection string: string connectionString = @"URI=file:{the location of your sqlite database}"; *Establish a sqlite connection: SQLiteConnection theConnection = new SQLiteConnection(connectionString); *Open the connection: theConnection.Open(); *Create a sqlite command: SQLiteCommand cmd = new SQLiteCommand(theConnection); *Make a command text, or better said your SQLite statement: cmd.CommandText = "INSERT INTO table_name(col1, col2) VALUES(val1, val2)"; *Execute the command: cmd.ExecuteNonQuery(); That is it. A: I've used this with great success: http://system.data.sqlite.org/ Free with no restrictions. (Note from review: Original site no longer exists. The above link has a link pointing to the 404 site and has all the info of the original) --Bruce A: Mono comes with a wrapper, use theirs! https://github.com/mono/mono/tree/master/mcs/class/Mono.Data.Sqlite/Mono.Data.Sqlite_2.0 gives code to wrap the actual SQLite dll ( http://www.sqlite.org/sqlite-shell-win32-x86-3071300.zip found on the download page http://www.sqlite.org/download.html/ ) in a .NET-friendly way. It works on Linux or Windows. This seems the thinnest of all worlds, minimizing your dependence on third-party libraries. If I had to do this project from scratch, this is the way I would do it. A: If you have any problem with the library you can use Microsoft.Data.Sqlite; public static DataTable GetData(string connectionString, string query) { DataTable dt = new DataTable(); Microsoft.Data.Sqlite.SqliteConnection connection; Microsoft.Data.Sqlite.SqliteCommand command; connection = new Microsoft.Data.Sqlite.SqliteConnection(connectionString); // e.g. "Data Source=YOU_PATH_BD.sqlite" try { connection.Open(); command = new Microsoft.Data.Sqlite.SqliteCommand(query, connection); dt.Load(command.ExecuteReader()); connection.Close(); } catch { } return dt; } You can add the NuGet package Microsoft.Data.Sqlite.
{ "language": "en", "url": "https://stackoverflow.com/questions/26020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How is data compression more effective than indexing for search performance? For our application, we keep large amounts of data indexed by three integer columns (source, type and time). Loading significant chunks of that data can take some time and we have implemented various measures to reduce the amount of data that has to be searched and loaded for larger queries, such as storing larger granularities for queries that don't require a high resolution (time-wise). When searching for data in our backup archives, where the data is stored in bzipped text files, but has basically the same structure, I noticed that it is significantly faster to untar to stdout and pipe it through grep than to untar it to disk and grep the files. In fact, the untar-to-pipe was even noticeably faster than just grepping the uncompressed files (i.e., discounting the untar-to-disk). This made me wonder if the performance impact of disk I/O is actually much heavier than I thought. So here's my question: Do you think putting the data of multiple rows into a (compressed) blob field of a single row and searching for single rows on the fly during extraction could be faster than searching for the same rows via the table index? For example, instead of having this table CREATE TABLE data ( `source` INT, `type` INT, `timestamp` INT, `value` DOUBLE); I would have CREATE TABLE quickdata ( `source` INT, `type` INT, `day` INT, `dayvalues` BLOB ); with approximately 100-300 rows in data for each row in quickdata and searching for the desired timestamps on the fly during decompression and decoding of the blob field. Does this make sense to you? What parameters should I investigate? What strings might be attached? What DB features (any DBMS) exist to achieve similar effects? A: This made me wonder if the performance impact of disk I/O is actually much heavier than I thought. Definitely. If you have to go to disk, the performance hit is many orders of magnitude greater than memory.
This reminds me of the classic Jim Gray paper, Distributed Computing Economics: Computing economics are changing. Today there is rough price parity between (1) one database access, (2) ten bytes of network traffic, (3) 100,000 instructions, (4) 10 bytes of disk storage, and (5) a megabyte of disk bandwidth. This has implications for how one structures Internet-scale distributed computing: one puts computing as close to the data as possible in order to avoid expensive network traffic. The question, then, is how much data do you have and how much memory can you afford? And if the database gets really big -- as in nobody could ever afford that much memory, even in 20 years -- you need clever distributed database systems like Google's BigTable or Hadoop. A: I made a similar discovery when working within Python on a database: the cost of accessing a disk is very, very high. It turned out to be much faster (i.e. nearly two orders of magnitude) to request a whole chunk of data and iterate through it in Python than it was to create seven queries that were narrower. (One per day in question for the data.) It blew out even further when I was getting hourly data: 24x7 is lots of queries!
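The quickdata idea from the question is easy to prototype outside the database: pack each day's rows into one zlib-compressed blob and filter timestamps on the fly while decoding. A sketch follows; the row layout and the roughly-300-rows-per-blob figure follow the question, while the encoding choices (little-endian int32 timestamp plus float64 value) are illustrative:

```python
import struct
import zlib

ROW = struct.Struct("<id")  # per row: timestamp (int32) + value (float64)

def pack_day(rows):
    """Encode [(timestamp, value), ...] into one compressed blob
    (what would be stored in the `dayvalues` BLOB column)."""
    raw = b"".join(ROW.pack(ts, v) for ts, v in rows)
    return zlib.compress(raw)

def scan_day(blob, ts_from, ts_to):
    """Decompress and filter rows on the fly, as the extraction step would."""
    raw = zlib.decompress(blob)
    for off in range(0, len(raw), ROW.size):
        ts, v = ROW.unpack_from(raw, off)
        if ts_from <= ts <= ts_to:
            yield ts, v

rows = [(t, t * 0.5) for t in range(300)]   # ~300 rows per blob, as in the question
blob = pack_day(rows)
print(len(blob), "compressed bytes for", 300 * ROW.size, "raw bytes")
print(list(scan_day(blob, 10, 12)))         # [(10, 5.0), (11, 5.5), (12, 6.0)]
```

Timing `scan_day` against the equivalent indexed query on your real data would answer the question's "what parameters should I investigate" empirically; the deciding factor, per the answers here, is how much disk I/O the blob saves versus the extra CPU for decompression.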
{ "language": "en", "url": "https://stackoverflow.com/questions/26021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Cleanest & Fastest server setup for Django I'm about to deploy a medium-sized site powered by Django. I have a dedicated Ubuntu server. I'm really confused over which server software to use. So I thought to myself: why not ask Stack Overflow. What I'm looking for is: * *Easy to set up *Fast and easy on resources *Can serve media files *Able to serve multiple Django sites on the same server *I would rather not install PHP or anything else that sucks resources and which I have no use for. I have heard of mod_wsgi and mod_python on Apache, nginx and lighty. What are the pros and cons of these, and have I missed any? @Barry: Somehow I feel like Apache is too bloated for me. What about the alternatives? @BrianLy: OK, I'll check out mod_wsgi some more. But why do I need Apache if I serve static files with lighty? I have also managed to serve the Django app itself with lighty. Is that bad in any way? Sorry for being so stupid :-) UPDATE: What about lighty and nginx - what are the use cases where these are the perfect choice? A: I'm using Cherokee. According to their benchmarks (grain of salt with them), it handles load better than both Lighttpd and nginx... But that's not why I use it. I use it because if you type cherokee-admin, it starts a new server that you can log into (with a one-time password) and configure the whole server through a beautifully done web admin. That's a killer feature. It has already saved me a lot of time. And it's saving my server a lot of resources! As for Django, I'm running it as a threaded SCGI process. Works well. Cherokee can keep it running too. Again, very nice feature. The current Ubuntu repo version is very old, so I'd advise you to use their PPA. Good luck.
A quick review of your requirements: * *Easy to set up I've found Apache 2.2 fairly easy to build and install. *Fast and easy on resources I would say that this depends on your usage and traffic. * You may not want to serve all files using Apache; use Lighttpd (lighty) to serve static files. *Can serve media files I assume you mean images, flash files? Apache can do this. *Multiple sites on same server Virtual server hosting on Apache. *Rather not install other extensions Comment out anything you don't want in the Apache config. A: The officially recommended way to deploy a Django project is to use mod_python with Apache. This is described in the documentation. The main pro with this is that it is the best documented, most supported, and most common way to deploy. The con is that it probably isn't the fastest. A: The best configuration is not so well known, I think. But here it is: * *Use nginx for serving requests (dynamic to the app, static content directly). *Use a Python web server for serving dynamic content. The two speediest solutions for a Python-based web server are: * *cogen *fapws2 You need to look on Google to find the current best configuration for Django (still in development). A: Since I was looking for some more in-depth answers, I decided to research the issue myself in depth. Please let me know if I've misunderstood anything. A general recommendation is to use a separate web server for handling media. By separate, I mean a web server which is not running Django. This server can be, for instance: * *Lighttpd (Lighty) *Nginx (EngineX) *Or some other lightweight server Then, for Django, you can go down different paths. You can either: * *Serve Django via Apache and: * *mod_python This is the stable and recommended/well-documented way. Cons: uses a lot of memory. *mod_wsgi From what I understand, mod_wsgi is a newer alternative. It appears to be faster and easier on resources.
*mod_fastcgi When using FastCGI you are delegating the serving of Django to another process. Since mod_python includes a Python interpreter in every request, it uses a lot of memory. This is a way to bypass that problem. There are also some security concerns. What you do is start your Django FastCGI server in a separate process and then configure Apache via rewrites to call this process when needed. Or you can: * *Serve Django without using Apache but with another server that supports FastCGI natively: (The documentation mentions that you can do this if you don't have any Apache-specific needs. I guess the reason must be to save memory.) * *Lighttpd This is the server that runs YouTube. It seems fast and easy to use, however, I've seen reports of memory leaks. * *nginx I've seen benchmarks claiming that this server is even faster than lighttpd. It's mostly documented in Russian though. Another thing: due to limitations in Python, your server should be running in forked mode, not threaded. So this is my current research, but I want more opinions and experiences. A: I'm using nginx (0.6.32 taken from Sid) with mod_wsgi. It works very well, though I can't say whether it's better than the alternatives because I never tried any. Nginx has memcached support built in, which can perhaps interoperate with the Django caching middleware (I don't actually use it; instead I fill the cache manually using python-memcache and invalidate it when changes are made), so cache hits completely bypass Django (my development machine can serve about 3000 requests per second). A caveat: nginx's mod_wsgi highly dislikes named locations (it tries to pass them in SCRIPT_NAME), so the obvious 'error_page 404 = @django' will cause numerous obscure errors. I had to patch the mod_wsgi source to fix that. A: I'm struggling to understand all the options as well. In this blog post I found some benefits of mod_wsgi compared to mod_python explained.
Multiple low-traffic sites on a small VPS make RAM consumption the primary concern, and mod_python seems like a bad option there. Using lighttpd and FastCGI, I've managed to get the minimum memory usage of a simple Django site down to 58MiB virtual and 6.5MiB resident (after restarting and serving a single non-RAM-heavy request). I've noticed that upgrading from Python 2.4 to 2.5 on Debian Etch increased the minimum memory footprint of the Python processes by a few percent. On the other hand, 2.5's better memory management might have a bigger opposite effect on long-running processes. A: Keep it simple: Django recommends Apache and mod_wsgi (or mod_python). If serving media files is a very big part of your service, consider Amazon S3 or Rackspace CloudFiles. A: There are many ways and approaches to do this. For that reason, I recommend carefully reading the article related to the deployment process on DjangoAdvent.com: Eric Florenzano - Deploying Django with FastCGI: http://djangoadvent.com/1.2/deploying-django-site-using-fastcgi/ Also read: Mike Malone - Scaling Django Stochastictechnologies Blog: The perfect Django Setup Mikkel Hoegh Blog: 35 % Response-time-improvement-switching-uwsgi-nginx Regards A: In my opinion the best/fastest stack is varnish-nginx-uwsgi-django. And I'm successfully using it. A: If you're using lighttpd, you can also use FastCGI for serving Django. I'm not sure how the speed compares to mod_wsgi, but if memory serves correctly, you get a couple of the benefits that you would get with mod_wsgi that you wouldn't get with mod_python. The main one being that you can give each application its own process (which is really helpful for keeping the memory of different apps separated as well as for taking advantage of multi-core computers). Edit: Just to add, in regard to your update about nginx: if memory serves correctly again, nginx uses "greenlets" to handle concurrency.
This means that you may need to be a little bit more careful to make sure that one app doesn't eat up all the server's time. A: We use nginx and FastCGI for all of our Django deployments. This is mostly because we usually deploy over at Slicehost, and don't want to donate all of our memory to Apache. I guess this would be our "use case". As for the remarks about the documentation being mostly in Russian -- I've found most of the information on the English wiki to be very useful and accurate. This site has sample configurations for Django too, from which you can tweak your own nginx configuration. A: I have a warning about using Cherokee. When you make changes to your Django code, Cherokee keeps the OLD process running instead of killing it and starting a new one. For Apache, I strongly recommend this article. http://www.djangofoo.com/17/django-mod_wsgi-deploy-exampl It's easy to set up, and easy to kill or reset after making changes. Just type sudo /etc/init.d/apache2 restart in a terminal and the changes are seen instantly.
{ "language": "en", "url": "https://stackoverflow.com/questions/26025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Upgrading Sharepoint 3.0 to SQL 2005 Backend? We're trying to get rid of all of our SQL Server 2000 databases to repurpose our old DB server... Sharepoint 3.0 is being a showstopper. I've looked at a lot of guides from Microsoft and tried the instructions in those. I've also just tried the good ol' exec sp_detach_db / sp_attach_db with no luck. Has anyone actually done this? A: my boss has. it was a real pain. permissions issues. he used the built-in sharepoint backup tool. I can get more details tomorrow if needed. I'll check back. I'm back. Here are the steps he used. * *install an instance of sql server 2005 on the sql 2000 box (side-by-side) *back up the sharepoint site using the sharepoint admin tools. This will create one mother of a large xml file w/ the whole kit and kaboodle (the site & all its content) *delete the old-n-busted sharepoint site *create a new hotness sharepoint site using the sql server 2005 as the database. *do a restore from the xml backup using the admin tools - this will take hours to run (thank you xml ...) *Bingo! *P.S. I forgot, the account you use to do the restore must be an 'sa' account.
{ "language": "en", "url": "https://stackoverflow.com/questions/26028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Browser-based game - Which framework to choose? I'm starting to develop a browser-based game (and by this I mean text-based, no Flash or similar stuff on it) and I'm struggling to decide on which development framework to use. As far as requirements are concerned, the most important thing that I can think of right now is the ability to translate it to several languages. A good object-relational mapping and a way to generate forms from logical objects would also be very good, as I've noticed that I always spend lots of time solving the problems that come up when I change any of those things. The programming language is kind of unimportant. I have some experience in PHP and C#, but I don't mind, and I would even like to use this as an excuse to learn something new like Python or Ruby. What I do want is something with a good and thriving community and lots of samples and tutorials online to help me. A: I would recommend sticking to what you know - PHP is more than capable. That's true of course, but: I don't mind, and I would even like to use this as an excuse to learn something new like Python or Ruby. Then writing a browser game is an excellent opportunity to do this. Learning something new is never wrong and learning an alternative to PHP can never hurt (eh, Jeff?). While neither Ruby on Rails nor Django are especially useful for writing games, they're still great. We had to write a small browser game in a matter of weeks for a project once and Rails worked like a charm. On the other hand, all successful browser games have enormous workloads, and if you want to scale well you either have to get good hardware and load balancing or you need a non-interpreted framework (sorry, guys!). A: I'd definitely suggest PHP. I've developed browser-based games (pbbgs) for about 10 years now. I've tried .Net, Perl and Java.
All of them worked, but by far PHP was the best because: * *Speed with which you can develop (that might be due to experience) *Ease/Cost of finding a host for a game site *Flexibility to change/revamp on the fly (game programming seems to always have a different development cycle than normal projects) Ruby is not too bad, but the last time I tried it I rapidly ran into scaling/performance issues. I have not tried Python yet...maybe it's time to give it a shot. Just my two cents, but over the years PHP has saved me a ton of time. A: I would recommend sticking to what you know - PHP is more than capable. I used to play a game called Hyperiums - a text-based browser game like yours - which is created using Java (its web-based equivalent is JSP?) and servlets. It works fairly well (it has had downtime issues but those were more related to its running on a pretty crap server). As for which framework to use - why not create your own? Spend a good amount of time pre-coding deciding how you're going to handle various things - such as language support: you could use a phrase system or separate language-specific templates. Third-party frameworks are probably better tested than one you make, but they're not created for a specific purpose; they're created for a wide range of purposes. A: Check out django-mmo!
{ "language": "en", "url": "https://stackoverflow.com/questions/26041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Can DTS Test for Presence of MS-Access Table I have an Access database in which I drop the table and then create the table afresh. However, I need to be able to test for the table in case the table gets dropped but not created (i.e. when someone stops the DTS package just after it starts -roll-eyes- ). If I were doing this in the SQL database I would just do: IF (EXISTS (SELECT * FROM sysobjects WHERE name = 'Table-Name-to-look-for')) BEGIN DROP TABLE [Table-Name-to-look-for] END But how do I do that for an Access database? Optional answer: is there a way to have the DTS package ignore the error and just go to the next step rather than checking to see if it exists? SQL Server 2000 A: Microsoft Access has a system table called MSysObjects that contains a list of all database objects, including tables. Table objects have Type 1, 4 and 6. It is important to reference the type: ... Where Name='TableName' And Type In (1,4,6) Otherwise, what is returned could be some object other than a table. A: Try the same T-SQL, but in MS ACCESS the system objects table is called MSysObjects. Try this: SELECT * FROM MSysObjects WHERE Name = 'your_table'; and see if it works from there. You can take a look at these tables if you go to Tools -> Options -> View (a tab) -> and check Hidden Objects, System Objects. So you can see both. If you open the table, you should see your table names, queries, etc. Do not change this manually or the DB could panic :) Martin. P.D.: Your If Exists should also check the object type: IF EXISTS (SELECT * FROM sysobjects WHERE id = object_id(N'[dbo].[Your_Table_Name]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1) A: I'm not sure whether you can query the system objects table in an Access database from a DTS package. If that doesn't work, why not just try doing a SELECT * from the Access table in question and then catch the error if it fails?
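The last answer's "just try it and catch the error" approach can be sketched in Python. This is an illustration only: it uses SQLite from the standard library rather than Access/ODBC, and the table names are made up, but the same try/except shape applies to any DB-API connection.

```python
import sqlite3

def table_exists(conn, table_name):
    """Probe for a table by attempting a cheap SELECT and catching the error."""
    try:
        # Table names cannot be bound as query parameters, hence the interpolation.
        conn.execute("SELECT 1 FROM %s LIMIT 1" % table_name)
        return True
    except sqlite3.OperationalError:
        # "no such table" lands here, so the table is missing.
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER)")
print(table_exists(conn, "staging"))   # True
print(table_exists(conn, "missing"))   # False
```

The same idea maps onto a DTS step: attempt the query, and on failure branch to the "create the table" step instead of aborting.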
{ "language": "en", "url": "https://stackoverflow.com/questions/26062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Model Based Testing Strategies What strategies have you used with Model Based Testing? * *Do you use it exclusively for integration testing, or branch it out to other areas (unit/functional/system/spec verification)? *Do you build focused "sealed" models or do you evolve complex omnibus models over time? *When in the product cycle do you invest in creating MBTs? *What sort of base test libraries do you exclusively create for MBTs? *What difference do you make in your functional base test libraries to better support MBTs? A: [There are several essays worth reading on this. Stack Overflow won't let me post more than one, so I've aggregated them in a blog post, linked at the end of this answer.] First, a quick note on terms. I tend to use James Bach’s definition of Testing as “Questioning a product in order to evaluate it”. All tests rely on /mental/ models of the application under test. The term Model-Based Testing, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transition between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results. There are real costs here: building a useful model, creating algorithms for exploring it, logging systems that allow one to weed through for interesting failures, etc. Whether or not the costs are reasonable has a lot to do with what questions you want to answer. In general, start with “What do I want to know? And how can I best learn about it?” rather than looking for a use for an interesting technique. All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests.
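The state-model idea described in this answer (named states, allowed transitions, and assertions checked on every semi-random step) can be sketched in a few lines of Python. The tiny login-workflow model below is entirely hypothetical, purely to show the mechanics:

```python
import random

# Hypothetical model: each state lists the actions allowed from it.
MODEL = {
    "logged_out": ["log_in"],
    "logged_in":  ["open_settings", "log_out"],
    "settings":   ["close_settings", "log_out"],
}

# Where each (state, action) pair is supposed to land.
TRANSITIONS = {
    ("logged_out", "log_in"):       "logged_in",
    ("logged_in", "open_settings"): "settings",
    ("logged_in", "log_out"):       "logged_out",
    ("settings", "close_settings"): "logged_in",
    ("settings", "log_out"):        "logged_out",
}

def random_walk(steps, seed=0):
    """Execute a semi-random permutation of transitions, checking an invariant."""
    rng = random.Random(seed)  # seeded, so interesting failures are replayable
    state = "logged_out"
    log = []
    for _ in range(steps):
        action = rng.choice(MODEL[state])
        state = TRANSITIONS[(state, action)]
        log.append((action, state))
        # Assertion on the model: every action must land in a known state.
        assert state in MODEL, "walked into an unmodeled state: %r" % state
    return log

trace = random_walk(100)
print(len(trace), trace[0])  # 100 ('log_in', 'logged_in')
```

In a real harness the action would drive the application under test (e.g. via a browser driver) and the assertions would compare the application's observed state against the model's prediction.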
Sometimes we have important questions about the application under test that are best explored by automated, high-volume semi-randomized tests. Harry Robinson (one of the leading theorists and proponents of model-based testing) describes one very colorful example where he discovered many interesting bugs in Google driving directions using a model-based test (written with Ruby’s Watir library).[1] Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of helpful essays.[2] Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing.[3] Finally, a few cautions: To make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of Model-Based Testing. This blog post of Bach’s links to his hour-long talk (and associated slides).[4] I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the pesticide paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. Remembering its limits — and starting with questions MBT addresses well — it has the potential to be a very powerful testing strategy.
Links to all essays mentioned above can be found here: http://testingjeff.wordpress.com/2009/06/03/question-about-model-based-testing/ A: We haven't done any/much I&T and use unit testing almost exclusively, seasoned with a bit of system testing. But our focus is clearly on unit testing. I'm pretty strict on the APIs we build/provide, so the assumption is, if it works by itself, it will work in conjunction, and there hasn't been much wrong in it yet. Our models are focused on a single purpose/module with as few dependencies as possible. The focus is always to start as early as possible (TDD-kinda), but unfortunately we don't always get to it.
MaTeLo is independent and user-friendly, offers to the validation activities to pass from test scripting to real test engineering and to focus on the real added value of testing: the test plans" You can ask an evaluation licence and try by yourself. You can find some exemples here : http://www.all4tec.net/wiki/index.php?title=Tutorials
{ "language": "en", "url": "https://stackoverflow.com/questions/26065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Reconnect logic with connectivity notifications Say I have an application that wants a persistent connection to a server. How do I implement connection/re-connection logic so that I'm not wasting resources (power/bandwidth) and I have fast reconnect time when connectivity appears/improves? If I only use connectivity notifications, I can get stuck on problems not related to the local network. Bonus if you could show me the C# version. ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ A: This is a very "huge" question. I can say that we use an O/R Mapper and each "query" to the database needs an object called PersistenceBroker. This class is in charge of all the DB Stuff related to connecting, authenticating etc. We've written a PersistenceBrokerFactory.GetCurrentBroker() which returns the "working" broker. If the DB suddenly fails (for whatever reason), the CONN object will "timeout()" after 30secs (or whatever you define). If that happens, we show the user that he/she is offline and display a reconnect button. On the other hand, to provide a visual indication that the user has connectivity, we have a thread running in the background, that checks for Internet connectivity every 15 seconds. We do 1 ping to google.com. ;) If that fails, we assume Internet is somehow broken, and we update a status bar. I could show you all that code for the network health monitor if you wanted. I took some bits from google and other I made myself :)
{ "language": "en", "url": "https://stackoverflow.com/questions/26072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How can I figure out how much memory a .Net Appdomain is consuming? I'm trying to programmatically restrict the memory consumption of plugins running in a .Net AppDomain, but I can't find any setup parameters for this, nor can I find a way to query an AppDomain to figure out how much memory it is consuming. Any ideas? A: Old question, but in the meantime (since .Net framework 4.0) a new solution is available. You will have to enable ARM (Application domain Resource Monitoring). From that point on, you can request information on total consumed processor time, memory usage etc. See Microsoft documentation over here A: Not sure programatically, but Process Explorer can tell you how much memory a .net AppDomain is using. Maybe they have some documentation out there about how they are querying that info. A: Here's the documentation for querying a process's memory usage. Not the same as the AppDomain, but it might be a place to start. http://msdn.microsoft.com/en-us/library/s80a75e5(VS.80).aspx You can ask the GC what it thinks is currently allocated also. http://msdn.microsoft.com/en-us/library/system.gc.gettotalmemory(VS.71).aspx A: Using the mscoree.CorRuntimeHostClass interop from C:\WINDOWS\Microsoft.NET\Framework\vXXXXXX\mscoree.tlb
{ "language": "en", "url": "https://stackoverflow.com/questions/26074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Resize Infragistics GanttChart task with the mouse I loaded a custom DataTable into an UltraChart of type GanttChart. The data loads successfully. Do you know if it possible to add support for mouse resize(drag) to the tasks that show up into the chart? I have not been able to figure out if this is supported by the Infragistics control. ­­­­­­­­­­­­­­­­­­­­­­­­­­­ A: In this forum post, an Infragistics employee states that this is not implemented (as of Feb '08), but may be doable handling FillSceneGraph.
{ "language": "en", "url": "https://stackoverflow.com/questions/26075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In a PHP5 class, when does a private constructor get called? Let's say I'm writing a PHP (>= 5.0) class that's meant to be a singleton. All of the docs I've read say to make the class constructor private so the class can't be directly instantiated. So if I have something like this: class SillyDB { private function __construct() { } public static function getConnection() { } } Are there any cases where __construct() is called other than if I'm doing a new SillyDB() call inside the class itself? And why am I allowed to instantiate SillyDB from inside itself at all? A: __construct() would only be called if you called it from within a method for the class containing the private constructor. So for your Singleton, you might have a method like so: class DBConnection { private static $Connection = null; public static function getConnection() { if(!isset(self::$Connection)) { self::$Connection = new DBConnection(); } return self::$Connection; } private function __construct() { } } $dbConnection = DBConnection::getConnection(); The reason you are able/would want to instantiate the class from within itself is so that you can check to make sure that only one instance exists at any given time. This is the whole point of a Singleton, after all. Using a Singleton for a database connection ensures that your application is not making a ton of DB connections at a time. Edit: Added $, as suggested by @emanuele-del-grande
{ "language": "en", "url": "https://stackoverflow.com/questions/26079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Most efficient implementation of a large number class When doing calculations on very large numbers where integral data types such as double or int64 falls short, a separate class to handle such large numbers may be needed. Does anyone care to offer an efficient algorithm on how best to do this? A: In C# 4.0 use the BigInteger type A: Using the built-in features of a language work for me. Java has BigInteger and BigDecimal, and Python automagicaly switches to an object similar to Java's if a number gets out of the range of an integer or whatnot. As for other languages though, I have no idea. I hate re-inventing the wheel. A: You're asking about arbitrary-precision arithmetic, a subject on which books have been written. If you just want a simple and fairly efficient BigNum library for C#, you might want to check out IntX. A: Doing your own BigNum library is complicated, so i'd say like jjnguy. Use whatever your language offers as libraries. In .net, reference the VisualJ dll as they contain the BigInteger and BigDecimal classes. You should however be aware of some limitations of these libraries, like the lack of a square root method, for example. A: There are 2 solutions to your problem: * *Easy way: Use an external library such as 'The GNU MP Bignum Library and forget about implementation details. *Hard way: Design your own class/structure containing multiple higher order datatypes like double or int64 variables and define basic math operations for them using operator overloading (in C++) or via methods named add, subtract, multiply, shift, etc. (in JAVA and other OO languages). Let me know if you need any further help. I have done this a couple of times in the past.
{ "language": "en", "url": "https://stackoverflow.com/questions/26094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Overloaded functions in C++ DLL def file I'm writing a C/C++ DLL and want to export certain functions which I've done before using a .def file like this LIBRARY "MyLib" EXPORTS Foo Bar with the code defined as this, for example: int Foo(int a); void Bar(int foo); However, what if I want to declare an overloaded method of Foo() like: int Foo(int a, int b); As the def file only has the function name and not the full prototype I can't see how it would handle the overloaded functions. Do you just use the one entry and then specify which overloaded version you want when passing in the properly prototyped function pointer to LoadLibrary() ? Edit: To be clear, this is on Windows using Visual Studio 2005 Edit: Marked the non-def (__declspec) method as the answer...I know this doesn't actually solve the problem using def files as I wanted, but it seems that there is likely no (official) solution using def files. Will leave the question open, however, in case someone knows something we don't have overloaded functions and def files. A: I had a similar issue so I wanted to post on this as well. * *Usually using extern "C" __declspec(dllexport) void Foo(); to export a function name is fine. It will usually export the name unmangled without the need for a .def file. There are, however, some exceptions like __stdcall functions and overloaded function names. *If you declare a function to use the __stdcall convention (as is done for many API functions) then extern "C" __declspec(dllexport) void __stdcall Foo(); will export a mangled name like _Foo@4. In this case you may need to explicitly map the exported name to an internal mangled name. A. How to export an unmangled name. In a .def file add ---- EXPORTS ; Explicit exports can go here Foo ----- This will try to find a "best match" for an internal function Foo and export it. 
In the case above where there is only one foo this will create the mapping Foo = _Foo@4 as can be see via dumpbin /EXPORTS If you have overloaded a function name then you may need to explicitly say which function you want in the .def file by specifying a mangled name using the entryname[=internalname] syntax. e.g. ---- EXPORTS ; Explicit exports can go here Foo=_Foo@4 ----- B. An alternative to .def files is that you can export names "in place" using a #pragma. #pragma comment(linker, "/export:Foo=_Foo@4") C. A third alternative is to declare just one version of Foo as extern "C" to be exported unmangled. See here for details. A: There is no official way of doing what you want, because the dll interface is a C api. The compiler itself uses mangled names as a workaround, so you should use name mangling when you don't want to change too much in your code. A: There isn't a language or version agnostic way of exporting an overloaded function since the mangling convention can change with each release of the compiler. This is one reason why most WinXX functions have funny names like *Ex or *2. A: Systax for EXPORTS definition is: entryname[=internalname] [@ordinal [NONAME]] [PRIVATE] [DATA] entryname is the function or variable name that you want to export. This is required. If the name you export is different from the name in the DLL, specify the export's name in the DLL with internalname. For example, if your DLL exports a function, func1() and you want it to be used as func2(), you would specify: EXPORTS func2=func1 Just see the mangled names (using Dependency walker) and specify your own functions name. Source: http://msdn.microsoft.com/en-us/library/hyx1zcd3(v=vs.71).aspx Edit: This works for dynamic DLLs, where we need to use GetProcAddress() to explicitly fetch a functions in Dll. A: Function overloading is a C++ feature that relies on name mangling (the cryptic function names in the linker error messages). 
By writing the mangled names into the def file, I can get my test project to link and run: LIBRARY "TestDLL" EXPORTS ?Foo@@YAXH@Z ?Foo@@YAXHH@Z seems to work for void Foo( int x ); void Foo( int x, int y ); So copy the C++ function names from the error message and write them into your def file. However, the real question is: Why do you want to use a def file and not go with __declspec(dllexport) ? The mangled names are non-portable, I tested with VC++ 2008. A: In the code itself, mark the functions you want to export using __declspec(dllexport). For example: #define DllExport __declspec(dllexport) int DllExport Foo( int a ) { // implementation } int DllExport Foo( int a, int b ) { // implementation } If you do this, you do not need to list the functions in the .def file. Alternatively, you may be able to use a default parameter value, like: int Foo( int a, int b = -1 ) This assumes that there exists a value for b that you can use to indicate that it is unused. If -1 is a legal value for b, or if there isn't or shouldn't be a default, this won't work. Edit (Adam Haile): Corrected to use __declspec as __dllspec was not correct so I could mark this as the official answer...it was close enough. Edit (Graeme): Oops - thanks for correcting my typo!
{ "language": "en", "url": "https://stackoverflow.com/questions/26098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Failed to load resources from resource file Get the following error periodically in an IIS application: Failed to load resources from resource file. The full error message in the Application Event Log is: Event Type: Error Event Source: .NET Runtime Event Category: None Event ID: 0 Date: 8/8/2008 Time: 8:8:8 AM User: N/A Computer: BLAH123 Description: The description for Event ID ( 0 ) in Source ( .NET Runtime ) cannot be found. The >local computer may not have the necessary registry information or message DLL files to >display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: .NET Runtime version 1.1.4322.2407- Setup Error: Failed to load resources from resource file Please check your Setup. Application is written in .NET 1.1 but the server runs ASP.NET 2.0. Thanx. Update: Meant to say ASP.NET 2.0 is installed but the default website folder, and the websites inside the folder, are set to ASP.NET 1.1. On the website folder the ISAPI Filter is set to ASP.NET 2.0. My first guess about the problem was having ASP.NET 1.1 and ASP.NET 2.0 running side by side. Update 2: ASP.NET 2.0 is installed but all the websites run only ASP.NET 1.1 (long story and happened before I started). A: Do you have .NET 1.1 and .NET 2.0 apps running on the web server? We had some instances of a similar nature around here and the resolution we found was to create two app pools, one for 1.1 apps and one for 2.0 apps and to assign each application accordingly. A: Do you get a different error without the ISAPI filter in place? A: This error sometimes occurs when a request happens at the exact time that you are rebuilding the application (after a web.config change or a plain recompilation) and IIS cannot find the file in the temporary folders. A: Question was asked before Server Fault, where it probably belongs. 
Anyway, I never did figure out the error so assume it's closed.
{ "language": "en", "url": "https://stackoverflow.com/questions/26111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is your best tool or techniques for getting the same display on IE6/7 and Firefox? I'm not talking about tools that let one view a page in combinations of operating systems and browsers like crossbrowsertesting.com but in creating or figuring out the actual CSS. A: Use a css reset to level the field across browsers. YUI and Eric Meyer have good ones. A: If you guys are still coding for IE6, you're making a mistake. I use IE7.js to get IE6 to render pages like IE7. IE7 is not perfect, but at least it has some semblance of standards. Since I only have to code for IE7 and FF it makes me 33% more efficient in terms of testing against browsers, something I think makes good business sense. Link: IE7.js A: I write to the standards and both Firefox and IE7 follow a pretty good set in common. IE6 is dead as far as I am concerned but if I get back into professional web dev I'll probably have to revise that ;) A: I try to make a standards-compliant page and do all my testing in Firefox (since it has some excellent development extensions such as Web Developer and Firebug). Then when I'm finished I test the site in IE, then make whatever small changes are necessary. I find that I need to make very few changes, since I don't do anything extraordinarily complex with CSS. I used to have more problems with Javascript differences, but after I started using Javascript libraries (such as jQuery) I stopped having any serious problems with that. A: Padding. IE6 can get a little hinky when using margin to place elements horizontally on a page. If you size your elements and space the content within using padding, you can make many layouts work great in IE6/7, FF, Safari, and Opera without any hacks. IE5.5 makes things a little stickier because of the broken box model, but I think we can pretty much count it out in most circumstances now. 
A: I try to make a standards-compliant page and do all my testing in Firefox (since it has some excellent development extensions such as Web Developer and Firebug). Then when I'm finished I test the site in IE, then make whatever small changes are necessary. I find that I need to make very few changes, since I don't do anything extraordinarily complex with CSS. The same here, except I don't tend to need to use Firebug and such. I've only had problems with IE6 recently, which are solved by simple CSS bypasses:
/* All browsers read: */
html body { margin: 10px; }
/* FF, IE7, Op etc. read: */
html > body { margin: 0; }
A: I'm with Eli. Writing against Firefox (with Firebug installed) makes you have to write "more compatible" code to begin with, and then it's less of a job later down the line when you come to make it compatible with IE. Use the site QuirksMode to help you find compatibility information. A: If it's a brand new project I make it a point to test all HTML+CSS changes on all browsers I'm targeting as I make the changes. In the past I tended to focus on my favorite browser and then test with the others after I was done, only to find that one or more small quirks were present, and it was very tedious to pinpoint the actual cause. Now I have all browsers open and just go through refreshing each one after each HTML/CSS change to make sure the display meets my expectation. When something goes wrong, I know exactly what caused it. It might seem time-consuming to test on all browsers at once, but in the long run it actually saves time as you catch the problems immediately.
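As a sketch of the reset idea suggested in the first answer, something like the following levels the default margins and paddings across browsers; this is an illustrative minimal version, not a copy of the YUI or Eric Meyer resets, which cover many more elements and edge cases:

```css
/* Minimal "leveling" reset (illustrative only) */
html, body, div, h1, h2, h3, h4, h5, h6, p, ul, ol, li, form, fieldset {
    margin: 0;
    padding: 0;
}
ul, ol {
    list-style: none;
}
img, fieldset {
    border: 0;
}
```

After a reset like this, most spacing differences between IE6/7 and Firefox come down to rules you add yourself rather than to each browser's defaults.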
{ "language": "en", "url": "https://stackoverflow.com/questions/26113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Accessing .NET components from PowerShell I want to use PowerShell to write some utilities, leveraging our own .NET components to handle the actual work. This is in place of writing a small console app to tie the calls together. My question is where I would find a good source of documentation or tutorial material to help me fast track this? A: The link that Steven posted is a good example. I don't know of any extensive tutorial. Both the Windows PowerShell Cookbook and Windows PowerShell in Action have good chapters on the subject. Also, look at the ::LoadFrom method of the System.Reflection.Assembly class in case your in-house assemblies are not loaded in the GAC. A: If you want to load an assembly into your PowerShell session, you can use reflection and load the assembly.
[void][System.Reflection.Assembly]::LoadFrom("PathToYourAssembly")
After you load your assembly, you can call static methods and create new instances of a class. A good tutorial can be found here. Both books mentioned by EBGreen are excellent. The PowerShell Cookbook is very task-oriented and PowerShell in Action is a great description of the language, its focus and usability. PowerShell in Action is one of my favorite books. :) A: You can also load an assembly with Add-Type, for example Add-Type -AssemblyName System.Drawing, and then reference its types with the bracket syntax, e.g. [System.Drawing.Color].
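To make the LoadFrom approach above concrete, here is a short sketch; the DLL path, type names and method names are hypothetical stand-ins for your own in-house component:

```powershell
# Load an in-house assembly that is not in the GAC
# (the path and type names below are made up for illustration)
[void][System.Reflection.Assembly]::LoadFrom("C:\Utils\MyCompany.Tools.dll")

# Call a static method on one of its types
[MyCompany.Tools.Cleanup]::PurgeTempFiles()

# Or create an instance and call an instance method
$builder = New-Object MyCompany.Tools.ReportBuilder
$builder.Build("weekly")
```

The same pattern works from a script or an interactive session; once the assembly is loaded, its public types behave like any other .NET type in PowerShell.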
{ "language": "en", "url": "https://stackoverflow.com/questions/26123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: VBScript/ASP Classic I have a couple of questions regarding VBScript and ASP Classic: * *What is the preferred way to access an MS SQL Server database in VBScript/ASP? *What are best practices in regards to separating model from view from controller? *Any other things I should know about either VBScript or ASP? If you haven't noticed, I'm new at VBScript coding. I realize numbers 2 & 3 are kind of giant "black hole" questions that are overly general, so don't think that I'm expecting to learn everything there is to know about those two questions from here. A: Remember to program into the language rather than program in it. Just because you're using a limited tool set doesn't mean you have to program like it's 1999. I agree with JasonS about classes. It's true you can't do things like inheritance, but you can easily fake it:
Class Dog
    Private Parent

    Private Sub Class_Initialize()
        Set Parent = New Animal
    End Sub

    Public Function Walk()
        Walk = Parent.Walk
    End Function

    Public Function Bark()
        Response.Write("Woof! Woof!")
    End Function
End Class
In my projects an ASP page will have the following: INC-APP-CommonIncludes.asp - This includes stuff like my general libraries (Database Access, file functions, etc) and sets up security and includes any configuration files (like connection strings, directory locations, etc) and common classes (User, Permission, etc) and is included in every page. Modules/ModuleName/page.vb.asp - Kind of like a code-behind page. Includes page-specific BO, BLL and DAL classes and sets up the data required for the page/receives submitted form data, etc. Modules/ModuleName/Display/INC-DIS-Page.asp - Displays the data set up in page.vb.asp. A: Echoing some ideas and adding a few of my own: 1) The best way to access the database would be to abstract that away into a COM component of some sort that you access from VBScript. 2) If you really wanted to you could write the controller in VBScript and then access that in the page.
It would resemble a Page Controller pattern and not a Front Controller that you would see in ASP.NET MVC or MonoRail. 3) Why are you doing this to yourself? Most of the tooling required to do this kind of work isn't even available anymore. A: AXE - Asp Xtreme Evolution is an MVC framework for ASP Classic. There are some attempts at making test frameworks for ASP: aspUnit is good, but no longer maintained. I saw a sample on how to make your own one a few months back. The example used nUnit to call functions against the website for automatic testing. I think I got it off here (my line is borked so I can't check). A: ADO is an excellent way to access a database in VBScript/Classic ASP.
Dim db: Set db = Server.CreateObject("ADODB.Connection")
db.Open "yourconnectionstring -> see connectionstrings.com"
Dim rs: Set rs = db.Execute("SELECT firstName from Employees")
While Not rs.EOF
    Response.Write rs("firstName")
    rs.MoveNext
Wend
rs.Close
More info here: http://www.technowledgebase.com/2007/06/12/vbscript-how-to-create-an-ado-connection-and-run-a-query/ One caveat is that if you are returning a MEMO field in a recordset, be sure you only select ONE MEMO field at a time, and make sure it is the LAST column in your query. Otherwise you will run into problems. (Reference: http://lists.evolt.org/archive/Week-of-Mon-20040329/157305.html ) A: On number 2, I think you have a few options... 1) You can use COM components developed in VB6 or the like to separate some of your business logic from your UI. 2) You can create classes in VBScript. There is no concept of inheritance and other more advanced features are missing from the implementation, but you can encapsulate logic in classes that help reduce the spaghetti-ness of your app. Check out this: https://web.archive.org/web/20210505200200/http://www.4guysfromrolla.com/webtech/092399-1.shtml A: I agree with @Cirieno that the selected answer would not be wise to use in production code, for all of the reasons he mentions.
That said, if you have just a little experience, this answer is a good starting point as to the basics. In my ASP experience, I preferred to write my database access layer using VB, compiling down to a DLL and referencing the DLL via VBScript. Tough to debug directly through ASP, but it was a nice way to encapsulate all data access code away from the ASP code. A: I had to walk away from my PC when I saw the first answer, and am still distressed that it has been approved by so many people. It's an appalling example of the very worst kind of ASP code, the kind that would ensure your site is SQL-injectable and, if you continue using this code across the site, hackable within an inch of its life. This is NOT the kind of code you should be giving to someone new to ASP coding as they will think it is the professional way of coding in the language! * *NEVER reveal a connection string in your code as it contains the username and password to your database. Use a UDL file instead, or at the very least a constant that can be declared elsewhere and used across the site. *There is no longer any good excuse for using inline SQL for any operation in a web environment. Use a stored procedure -- the security benefits cannot be stressed enough. If you really can't do that then look at inline parameters as a second-best option... Inline SQL will leave your site wide open to SQL injection, malware injection and the rest. *Late declaration of variables can lead to sloppy coding. Use "option explicit" and declare variables at the top of the function. This is best practice rather than a real WTF, but it's best to start as you mean to go on. *No hints to the database as to what type of connection this is -- is it for reading only, or will the user be updating records? The connection can be optimised and the database can handle locking very efficiently if effectively told what to expect. *The database connection is not closed after use, and the recordset object isn't fully destroyed. 
ASP is still a strong language, despite many folks suggesting moving to .NET -- with good coding practices an ASP site can be written that is easy to maintain, scalable and fast, but you HAVE to make sure you use every method available to make your code efficient, you HAVE to maintain good coding practices and apply a little forethought. A good editor will help too, my preference being for PrimalScript, which I find more helpful to an ASP coder than any of the latest MS products, which seem to be very .NET-centric. Also, where is a "MEMO" field from? Is this Access nomenclature, or maybe MySQL? I ask as such fields have been called TEXT or NTEXT fields in MS-SQL for a decade. A: Way, way back in the day, when VBScript/ASP were still OK, I worked in a utility company with a very mixed DB environment, and I used to swear by this website: http://www.connectionstrings.com/ @michealpryor got it right. A: I've been stuck building on ASP, and I feel your pain. 1) The best way to query against SQL Server is with parameterized queries; this will help protect against SQL injection attacks. Tutorial (not my blog): http://www.nomadpete.com/2007/03/23/classic-asp-which-is-still-alive-and-parametised-queries/ 2) I haven't seen anything regarding MVC specifically geared towards ASP, but I'm definitely interested because it's something I'm having a tough time wrapping my head around. I generally try to at least contain things which are view-like and things which are controller-like in separate functions. I suppose you could possibly write code in separate files and then use server-side includes to join them all back together. 3) You're probably coming from a language which has more functionality built in. At first, some things may appear to be missing, but it's often just a matter of writing a lot more lines of code than you're used to. A: Also, for database access I have a set of functions - GetSingleRecord, GetRecordset and UpdateDatabase - which have similar functionality to what Michael mentions above.
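To illustrate the parameterized-query and cleanup advice in the critique above, here is a rough sketch using ADODB.Command; the stored procedure name and its parameter are hypothetical, and in real code the connection string should live in a UDL file or a constant, as suggested earlier:

```vbscript
' Sketch of a parameterized stored-procedure call with ADODB.Command.
' "GetEmployeesByDept" and its @DeptId parameter are hypothetical names.
Dim cmd: Set cmd = Server.CreateObject("ADODB.Command")
Set cmd.ActiveConnection = db   ' an already-open ADODB.Connection
cmd.CommandText = "GetEmployeesByDept"
cmd.CommandType = 4             ' adCmdStoredProc
cmd.Parameters.Append cmd.CreateParameter("@DeptId", 3, 1, , 42) ' adInteger, adParamInput

Dim rs: Set rs = cmd.Execute()
While Not rs.EOF
    Response.Write rs("firstName")
    rs.MoveNext
Wend

' Tidy up explicitly, as the critique recommends
rs.Close: Set rs = Nothing
Set cmd = Nothing
db.Close: Set db = Nothing
```

Because the value travels as a typed parameter rather than being concatenated into the SQL string, this pattern closes off the injection route the inline-SQL version leaves open.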
{ "language": "en", "url": "https://stackoverflow.com/questions/26137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Cannot add a launch shortcut (Eclipse Plug-in) I'm making a simple extra Java app launcher for Eclipse 3.2 (JBuilder 2007-8) for internal use. So I looked up all the related documentation, including The Launching Framework from eclipse.org, and have managed to make everything else work with the exception of the launch shortcut. This is the part of my plugin.xml.
<extension point="org.eclipse.debug.ui.launchShortcuts">
    <shortcut
        category="mycompany.javalaunchext.launchConfig"
        class="mycompany.javalaunchext.LaunchShortcut"
        description="launchshortcutsdescription"
        icon="icons/k2mountain.png"
        id="mycompany.javalaunchext.launchShortcut"
        label="Java Application Ext."
        modes="run, debug">
        <perspective id="org.eclipse.jdt.ui.JavaPerspective">
        </perspective>
        <perspective id="org.eclipse.jdt.ui.JavaHierarchyPerspective">
        </perspective>
        <perspective id="org.eclipse.jdt.ui.JavaBrowsingPerspective">
        </perspective>
        <perspective id="org.eclipse.debug.ui.DebugPerspective">
        </perspective>
    </shortcut>
</extension>
The configuration name in the category section is correct and the class in the class section, I believe, is correctly implemented (basically copied from org.eclipse.jdt.debug.ui.launchConfigurations.JavaApplicationLaunchShortcut). I'm really not sure if I'm supposed to write a follow-up here, but let me clarify my question more. I've extended org.eclipse.jdt.debug.ui.launchConfigurations.JavaLaunchShortcut. Plus, I've added my own logger to constructors and methods, but the class seems like it's never even instantiated. A: I had to add contextualLaunch under org.eclipse.debug.ui.launchShortcuts. The old way seems to have been deprecated a long time ago. For other people who are working on the same subject, you might want to extend org.eclipse.ui.commands and bindings, too. I cannot choose this answer, but this is the answer that I (the questioner) was looking for. A: Your class should implement ILaunchShortcut. Check out the Javadoc. What exception are you getting?
Check the error log.
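A sketch of what the contextualLaunch fix from the accepted answer might look like in plugin.xml; the enablement expression here is modeled on the JDT launch shortcuts, and the element names and the instanceof value should be checked against the extension point schema for your Eclipse version:

```xml
<extension point="org.eclipse.debug.ui.launchShortcuts">
    <shortcut
        class="mycompany.javalaunchext.LaunchShortcut"
        icon="icons/k2mountain.png"
        id="mycompany.javalaunchext.launchShortcut"
        label="Java Application Ext."
        modes="run, debug">
        <contextualLaunch>
            <enablement>
                <with variable="selection">
                    <count value="1"/>
                    <iterate>
                        <instanceof value="org.eclipse.jdt.core.IJavaElement"/>
                    </iterate>
                </with>
            </enablement>
        </contextualLaunch>
    </shortcut>
</extension>
```

With contextualLaunch, the shortcut's enablement is driven by the current selection rather than by perspective ids, which is why the perspective-based declaration in the question never caused the class to be instantiated.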
{ "language": "en", "url": "https://stackoverflow.com/questions/26145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }