Q: Ruby / Rails pre-epoch dates on windows Working with dates in Ruby and Rails on Windows, I'm having problems with pre-epoch dates (before 1970) throwing out-of-range exceptions. I tried using both Time and DateTime objects, but still have the same problems. A: If you only need dates (no times), the Date class in Ruby should handle dates before 1970. But it only has a resolution of days. I don't know if there is a solution if you also need times before 1970 (source) A: You can also check out the section on dates on ruby-doc.org. I'm still learning Ruby, but it sounds like you could use either the Civil or Commercial date to go back before 1970. A: Ended up using the Date class. The problem then became making that work with the Rails select helper - which didn't happen; I just generated the HTML myself.
{ "language": "en", "url": "https://stackoverflow.com/questions/28011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get rid of VSMacros80 folder from project root? How can I have it so Visual Studio doesn't keep re-creating this folder that I never use? It's annoying to keep looking at, and unnecessary. A: Add a trailing slash to the default projects location: Get Rid of the Annoying VSMacros80 Folder A: Just add "/" on the end of your "Projects location". (Tested on 2010 SP1.) Remove VSMacros80 directory A: Tools -> Options -> Add-in/Macro Security and change the paths there.
{ "language": "en", "url": "https://stackoverflow.com/questions/28029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Server to MySQL I have a backup of an SQL Server DB in .bak format which I've successfully managed to restore to a local instance of SQL Server Express. I now want to export both the structure and data in a format that MySQL will accept. The tools that I use for MySQL management typically allow me to import/export .sql files, but unfortunately Microsoft didn't see fit to make my life this easy! I can't believe I'm the first to run into this, but Google hasn't been a great deal of help. Has anybody managed this before? A: There will be 2 issues: 1) Datatypes. There isn't always a direct analog between an MS SQL type and a MySQL type. For example, MySQL handles timestamps very differently and has the cut-off for when you need to switch between varchar(n) and varchar(max)/text at a different value of n. There are also some small differences in the numeric types. 2) Query syntax. There are a few differences in the query syntax that, again, don't always have a 1:1 analog replacement. The one that comes to the top of my mind is SELECT TOP N * FROM T in MS SQL becomes SELECT * FROM T LIMIT N in MySQL (MySQL makes paging loads easier).
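A sketch of the syntax gap the answer describes, using a hypothetical orders table:

```sql
-- SQL Server: first N rows
SELECT TOP 10 * FROM orders ORDER BY created_at;

-- MySQL: LIMIT syntax, with easy paging via OFFSET
SELECT * FROM orders ORDER BY created_at LIMIT 10;
SELECT * FROM orders ORDER BY created_at LIMIT 10 OFFSET 20;

-- Datatype mapping also needs care, e.g. a SQL Server column declared as
--   body varchar(max)
-- typically becomes TEXT or LONGTEXT in MySQL.
```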
{ "language": "en", "url": "https://stackoverflow.com/questions/28033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best way to share ASP.NET .ascx controls across different website applications? Suppose you have 2 different ASP.NET applications in IIS. Also, you have some ASCX controls that you want to share across these 2 applications. What's the best way to create a "user control library", so that you can use the same control implementation in the 2 applications, without having to duplicate code? Controls have ASCX with HTML + code behind. Composite controls will be difficult, because we work with designers who use the HTML syntax in the ASCX files to style the controls. Tundey, we use SVN here. Do you have an example of how to implement your suggestion? How can SVN share the ASP.NET controls? Thanks! A: You would need to create composite controls instead of .ASCX controls if you wanted to be able to use them in separate projects. A: In addition to what Tundey said, the NTFS Link shell extension is helpful when it comes to sharing a large chunk of content (e.g. a folder with .ascx/.aspx) between otherwise independent projects. In the case of code, I think making another working copy from VCS is preferable. A: Have a look at this: http://www.codeproject.com/KB/aspnet/ASP2UserControlLibrary.aspx?msg=1782921 A: Scott Guthrie gives some great advice here on how to set up a User Control Library project, then use pre-build events to copy the user controls into multiple projects. It works really well. http://webproject.scottgu.com/CSharp/usercontrols/usercontrols.aspx A: An alternative is to use your source control tool to "share" the ASCX controls between your webapps. This will allow you to make changes to the controls in either application and have source control ensure the changes are reflected in both webapps. A: The biggest problem I've noticed with controls in ASP.NET is that you can't easily get designer support for both building the control and using the control in a site once you've built it.
The only way I've been able to do that is to create an .ascx control with no code-behind (i.e. all the server-side code is in a script tag in the .ascx file with the runat="server" attribute). But even then, you still have to copy the .ascx file around, so if you ever need to make a change, that means updating the file at every location where you've used it. So yeah, make sure it's in source control. A: I managed to do this by sacrificing some of the ease of building the controls in the first place. You can create a Control Library project that will generate a control library DLL for you. The drawback is that you have to create the controls with code only. In my last project, this was fine. In more complicated controls, this may be a problem. Here's an example:

<DefaultProperty("Text"), ToolboxData("<{0}:BreadCrumb runat=server />")> _
Public Class BreadCrumb
    Inherits WebControl

    <Bindable(True)> _
    Property Text() As String
        '...'
    End Property

    Protected Overrides Sub RenderContents(output As HtmlTextWriter)
        output.Write(Text)
    End Sub

    Private Sub Page_Load(...) Handles MyBase.Load
        ' Setup your breadcrumb and store the HTML output '
        ' in the Text property '
    End Sub
End Class

Anything you put in that Text property will be rendered. Then, any controls you put in here can function just like any other control you use. Just import it into your Toolbox, make your registration reference, then plop it onto the ASP page. A: I use StarTeam here and it allows you to "share" objects (files, change requests, requirements etc.) across multiple folders. Not sure if Subversion (SVN) has that feature. If it doesn't, here's another trick you can use: create a junction from the primary location of the controls to a location in the other projects. A junction is just like a Unix symbolic link.
You can download the tool for creating junctions in Windows from here. A: I recently did a web application that just referenced the files (about 90 in total) from one web application (aspx, master and ascx) without too much of an issue. That said, I was using a heavily modified version of the MVP pattern, a lot of interfaces and conventions to keep the complexity down, the same middle tier, and one site was a subset of the other. Big issues:

* Master pages (and in turn designers and HTML view formatting) don't work on a referenced file, so you lose a lot of functionality. A pre-build step and a lot of svn:ignore entries were my hack around this. It was also a pain to get CruiseControl.NET to execute the pre-build task in the right folders.
* Shared pages/controls need to be extremely aware of what they touch and reference, so as to avoid bringing in extra dependencies.
* Both sites are locked together for deployment.
* I now have to pray that the maintainer reads my pokey little document about the mess I made.

It's so far outside of what I've seen in ASP.NET projects. I was under massive time pressure to get it to work; it does, and now both apps are in production. I wouldn't recommend it, but if you're interested, start at: Add Existing Item, select some files, click on the Add button's arrow and say Add as a Link. A: I have a suggestion. We can use a user control across multiple applications by creating the user control inside a website project as normal. Then change the website property to "Use fixed naming and single page assemblies". Then we can use the user control DLL in multiple applications.
{ "language": "en", "url": "https://stackoverflow.com/questions/28051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What are some 'good use' examples of dynamic casting? We often hear/read that one should avoid dynamic casting. I was wondering what would be 'good use' examples of it, according to you? Edit: Yes, I'm aware of that other thread: it is indeed when reading one of the first answers there that I asked my question! A: This recent thread gives an example of where it comes in handy. There is a base Shape class and classes Circle and Rectangle derived from it. In testing for equality, it is obvious that a Circle cannot be equal to a Rectangle and it would be a disaster to try to compare them. While iterating through a collection of pointers to Shapes, dynamic_cast does double duty, telling you if the shapes are comparable and giving you the proper objects to do the comparison on. Vector iterator not dereferencable A: Here's something I do often; it's not pretty, but it's simple and useful. I often work with template containers that implement an interface, imagine something like

template<class T>
class MyVector : public ContainerInterface
...

Where ContainerInterface has basic useful stuff, but that's all. If I want a specific algorithm on vectors of integers without exposing my template implementation, it is useful to accept the interface objects and dynamic_cast them down to MyVector in the implementation. Example:

// function prototype (public API, in the header file)
void ProcessVector( ContainerInterface& vecIfce );

// function implementation (private, in the .cpp file)
void ProcessVector( ContainerInterface& vecIfce )
{
    MyVector<int>& vecInt = dynamic_cast<MyVector<int>&>(vecIfce);
    // the cast throws bad_cast in case of error but you could use a
    // more complex method to choose which low-level implementation
    // to use, basically rolling by hand your own polymorphism.

    // Process a vector of integers
    ...
} I could add a Process() method to the ContainerInterface that would be polymorphically resolved; it would be a nicer OOP method, but I sometimes prefer to do it this way. When you have simple containers, a lot of algorithms, and you want to keep your implementation hidden, dynamic_cast offers an easy and ugly solution. You could also look at double-dispatch techniques. HTH A: My current toy project uses dynamic_cast twice; once to work around the lack of multiple dispatch in C++ (it's a visitor-style system that could use multiple dispatch instead of the dynamic_casts), and once to special-case a specific subtype. Both of these are acceptable, in my view, though the former at least stems from a language deficit. I think this may be a common situation, in fact; most dynamic_casts (and a great many "design patterns" in general) are workarounds for specific language flaws rather than something to aim for. A: It can be used for a bit of run-time type-safety when exposing handles to objects through a C interface. Have all the exposed classes inherit from a common base class. When accepting a handle in a function, first cast to the base class, then dynamic cast to the class you're expecting. If they passed in a nonsensical handle, you'll get an exception when the run-time can't find the RTTI. If they passed in a valid handle of the wrong type, you get a NULL pointer and can throw your own exception. If they passed in the correct pointer, you're good to go. This isn't fool-proof, but it is certainly better at catching mistaken calls to the libraries than a straight reinterpret cast from a handle, and waiting until some data gets mysteriously corrupted when you pass the wrong handle in. A: Well, it would really be nice with extension methods in C#. For example, let's say I have a list of objects and I want to get a list of all ids from them. I can step through them all and pull them out, but I would like to segment out that code for reuse.
so something like

List<myObject> myObjectList = getMyObjects();
List<string> ids = myObjectList.PropertyList("id");

would be cool, except in the extension method you won't know the type that is coming in. So

public static List<string> PropertyList(this object objList, string propName)
{
    var genList = (objList.GetType())objList; // wish: cast to the runtime type -- not valid C#
}

would be awesome. A: It is very useful; however, most of the time it is too useful: if the easiest way to get the job done is a dynamic_cast, it's more often than not a symptom of bad OO design, which in turn might lead to trouble in the future in unforeseen ways.
{ "language": "en", "url": "https://stackoverflow.com/questions/28080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: WPF Listbox style with a button I have a ListBox that has a style defined for ListBoxItems. Inside this style, I have some labels and a button. On that button, I want to define a click event that can be handled on my page (or any page that uses that style). How do I create an event handler on my WPF page to handle the event from my ListBoxItems style? Here is my style (affected code only):

<Style x:Key="UsersTimeOffList" TargetType="{x:Type ListBoxItem}">
    ...
    <Grid>
        <Button x:Name="btnRemove" Content="Remove" Margin="0,10,40,0" Click="btnRemove_Click" />
    </Grid>
</Style>

Thanks! A: As Arcturus posted, RoutedCommands are a great way to achieve this. However, if there's only the one button in your DataTemplate then this might be a bit simpler: you can actually handle any button's Click event from the host ListBox, like this: <ListBox Button.Click="removeButtonClick" ... /> Any buttons contained within the ListBox will fire that event when they're clicked on. From within the event handler you can use e.OriginalSource to get a reference back to the button that was clicked on. Obviously this is too simplistic if your ListBoxItems have more than one button, but in many cases it works just fine. A: Take a look at RoutedCommands. Define your command in myclass somewhere as follows: public static readonly RoutedCommand Login = new RoutedCommand(); Now define your button with this command: <Button Command="{x:Static myclass.Login}" /> You can use CommandParameter for extra information.
Now last but not least, start listening to your command: In the constructor of the class you wish to do some nice stuff, you place: CommandBindings.Add(new CommandBinding(myclass.Login, ExecuteLogin)); or in XAML: <UserControl.CommandBindings> <CommandBinding Command="{x:Static myclass.Login}" Executed="ExecuteLogin" /> </UserControl.CommandBindings> And you implement the delegate the CommandBinding needs: private void ExecuteLogin(object sender, ExecutedRoutedEventArgs e) { //Your code goes here... e has your parameter! } You can start listening to this command everywhere in your visual tree! Hope this helps PS You can also define the CommandBinding with a CanExecute delegate which will even disable your command if the CanExecute says so :) PPS Here is another example: RoutedCommands in WPF A: You could create a user control (.ascx) to house the listbox. Then add a public event for the page. Public Event btnRemove() Then on the button click event in the usercontrol RaiseEvent btnRemove() You can also pass objects through the event just like any other method. This will allow your user control to tell your page what to delete.
{ "language": "en", "url": "https://stackoverflow.com/questions/28092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: PHP equivalent of .NET/Java's toString() How do I convert the value of a PHP variable to string? I was looking for something better than concatenating with an empty string: $myText = $myVar . ''; Like the ToString() method in Java or .NET. A: You can use the casting operators: $myText = (string)$myVar; There are more details for string casting and conversion in the Strings section of the PHP manual, including special handling for booleans and nulls. A: Another option is to use the built-in settype function:

<?php
$foo = "5bar"; // string
$bar = true;   // boolean
settype($foo, "integer"); // $foo is now 5 (integer)
settype($bar, "string");  // $bar is now "1" (string)
?>

This actually performs a conversion on the variable, unlike typecasting, and gives you a general way of converting to multiple types. A: In addition to the answer given by Thomas G. Mayfield: If you follow the link to the string casting manual, there is a special case which is quite important to understand: the (string) cast is preferable especially if your variable $a is an object, because PHP will follow the casting protocol according to its object model by calling the __toString() magic method (if such is defined in the class of which $a is instantiated from). PHP does something similar to

function castToString($instance)
{
    if (is_object($instance) && method_exists($instance, '__toString')) {
        return call_user_func(array($instance, '__toString'));
    }
}

The (string) casting operation is a recommended technique for PHP5+ programming, making code more object-oriented. IMO this is a nice example of design similarity (difference) to other OOP languages like Java/C#/etc., i.e. in its own special PHP way (whether for the good or for the worse). A: As others have mentioned, objects need a __toString method to be cast to a string. An object that doesn't define that method can still produce a string representation using the spl_object_hash function.
This function returns a unique identifier for the object. This id can be used as a hash key for storing objects, or for identifying an object, as long as the object is not destroyed. Once the object is destroyed, its hash may be reused for other objects. I have a base Object class with a __toString method that defaults to calling md5(spl_object_hash($this)) to make the output clearly unique, since the output from spl_object_hash can look very similar between objects. This is particularly helpful for debugging code where a variable initializes as an Object and later in the code it is suspected to have changed to a different Object. Simply echoing the variables to the log can reveal the change from the object hash (or not). A: I think this question is a bit misleading, since toString() in Java isn't just a way to cast something to a String. That is what casting via (string) does, and it works as well in PHP.

// Java
String myText = (String) myVar;

// PHP
$myText = (string) $myVar;

Note that this can be problematic, as Java is type-safe (see here for more details). But as I said, this is casting and therefore not the equivalent of Java's toString(). toString in Java doesn't just cast an object to a String. It instead will give you the String representation. And that's what __toString() in PHP does.

// Java
class SomeClass {
    public String toString() {
        return "some string representation";
    }
}

// PHP
class SomeClass {
    public function __toString() {
        return "some string representation";
    }
}

And from the other side:

// Java
new SomeClass().toString(); // "some string representation"

// PHP
strval(new SomeClass); // "some string representation"

What do I mean by "giving the String representation"? Imagine a class for a library with millions of books.

* Casting that class to a String would (by default) convert the data, here all books, into a string, so the String would be very long and most of the time not very useful.
* toString() instead will give you the String representation, i.e., only the library's name. This is shorter and therefore gives you less, but more important, information. These are both valid approaches, but with very different goals; neither is a perfect solution for every case, and you have to choose wisely which fits your needs better. Sure, there are even more options:

$no = 421337;                  // A number in PHP
$str = "$no";                  // In PHP, variables inside "" are interpolated
$str = print_r($no, true);     // Similar to String.format()
settype($no, 'string');        // Sets $no itself to the string type
$str = strval($no);            // Get the string value of $no
$str = $no . '';               // As you said, concatenating an empty string works too

All of these methods will return a String, some of them using __toString internally and some others failing on Objects. Take a look at the PHP documentation for more details. A: Some, if not all, of the methods in the previous answers fail when the intended string variable has a leading zero, for example, 077543. An attempt to convert such a variable fails to get the intended string, because the variable is converted to base 8 (octal). All these will make $str have a value of 32611:

$no = 077543;
$str = (string)$no;
$str = "$no";
$str = print_r($no, true);
$str = strval($no);
settype($no, "string"); // $no itself becomes "32611" (settype returns a bool)

A: You can either use typecasting: $var = (string)$varname; or strval: $var = strval($varname); or settype: $success = settype($varname, 'string'); // $varname itself becomes a string They all work for the same thing in terms of type juggling. A: How do I convert the value of a PHP variable to string? A value can be converted to a string using the (string) cast or the strval() function. (Edit: As Thomas also stated.) It also should be automatically cast for you when you use it as a string.
A: This is done with typecasting:

$strvar = (string) $var; // Casts to string
echo $var;               // Will cast to string implicitly
var_dump($var);          // Will show the true type of the variable

In a class you can define what is output by using the magic method __toString. An example is below:

class Bottles
{
    public function __toString()
    {
        return 'Ninety nine green bottles';
    }
}

$ex = new Bottles;
var_dump($ex, (string) $ex);
// Returns: instance of Bottles and "Ninety nine green bottles"

Some more type casting examples:

$i = 1;
var_dump((int) $i);    // int 1
var_dump((bool) $i);   // bool true
var_dump((string) 1);  // string "1"

A: The documentation says that you can also do: $str = "$foo"; It's the same as a cast, but I think it looks prettier. Source: Russian, English. A: You are looking for strval: string strval ( mixed $var ) Get the string value of a variable. See the documentation on string for more information on converting to string. This function performs no formatting on the returned value. If you are looking for a way to format a numeric value as a string, please see sprintf() or number_format(). A: For primitives, just use (string)$var or print the variable straight away. PHP is a dynamically typed language and the variable will be cast to a string on the fly. If you want to convert objects to strings, you will need to define a __toString() method that returns a string. This method is forbidden to throw exceptions.
A: Putting it in double quotes should work: $myText = "$myVar"; A: Use print_r: $myText = print_r($myVar, true); You can also use it like: $myText = print_r($myVar, true)."foo bar"; This will set $myText to a string, like: array ( 0 => '11', )foo bar Use var_export to get a little bit more info (with the types of variables, ...): $myText = var_export($myVar, true); A: I think it is worth mentioning that you can catch any output (like print_r, var_dump) in a variable by using output buffering:

<?php
ob_start();
var_dump($someVar);
$result = ob_get_clean();
?>

Thanks to: How can I capture the result of var_dump to a string? A: You can always create a method named .ToString($in) that returns $in . ''; A: If you're converting anything other than simple types like integers or booleans, you'd need to write your own function/method for the type that you're trying to convert; otherwise PHP will just print the type (such as array, GoogleSniffer, or Bidet). A: PHP is dynamically typed, so like Chris Fournier said, "If you use it like a string it becomes a string". If you're looking for more control over the format of the string, then printf is your answer. A: Double quotes should work too... it should create a string, then it should APPEND/INSERT the cast STRING value of $myVar in between 2 empty strings. A: You can also use the var_export PHP function. A:

$parent_category_name = "new clothes & shoes";

// To make it a string, option one
$parent_category = strval($parent_category_name);

// Or make it a string by concatenating it with quotes
// This is useful for database queries
$parent_category = "'" . strval($parent_category_name) . "'";

A: For objects, you may not be able to use the cast operator. Instead, I use the json_encode() method.
For example, the following will output contents to the error log: error_log(json_encode($args)); A: Try this little strange, but working, approach to convert the textual part of a stdClass to string type:

$my_std_obj_result = $SomeResponse->return->data; // Specific to object/implementation
$my_string_result = implode((array)$my_std_obj_result); // Do conversion

A: Use the __toString method or a (string) cast: $string = (string)$variable; // force to string You can treat an object as a string:

class Foo
{
    public function __toString()
    {
        return "foo";
    }
}

echo new Foo(); // foo

Also, another trick: assume I have an int variable and I want to make it a string: $string = '' . $intvariable; A: This can be difficult in PHP because of the way data types are handled internally. Assuming that you don't mean complex types such as objects or resources, generic casting to strings may still result in incorrect conversion. In some cases pack/unpack may even be required, and then you still have the possibility of problems with string encoding. I know this might sound like a stretch, but these are the types of cases where standard type juggling such as $myText = $my_var . ''; and $myText = (string)$my_var; (and similar) may not work. Otherwise I would suggest a generic cast, or using serialize() or json_encode(), but again it depends on what you plan on doing with the string. The primary difference is that Java and .NET have better facilities for handling binary data and primitive types, and for converting to/from specific types and then to string from there, even if a specific case is abstracted away from the user. It's a different story with PHP, where even handling hex can leave you scratching your head until you get the hang of it. I can't think of a better way to answer this that is comparable to Java/.NET, where toString() and such methods are usually implemented in a way that's specific to the object or data type.
In that way the magic methods __toString() and __serialize()/__unserialize() may be the best comparison. Also keep in mind that PHP doesn't have the same concept of primitive data types. In essence every data type in PHP can be considered an object, and their internal handlers try to make them somewhat universal, even if it means losing accuracy, such as when converting a float to an int. You can't deal with types as you can in Java unless you're working with their zvals within a native extension. While PHP userspace doesn't define int, char, bool, or float as objects, everything is stored in a zval structure which is as close to an object as you can find in C, with generic functions for handling the data within the zval. Every possible way to access data within PHP goes down to the zval structure and the way the Zend VM allows you to handle them without converting them to native types and structures. With Java types you have finer-grained access to their data and more ways to manipulate them, but also greater complexity, hence the strong type vs weak type argument. These links may be helpful: https://www.php.net/manual/en/language.types.type-juggling.php https://www.php.net/manual/en/language.oop5.magic.php A: I use variableToString. It handles every PHP type and is flexible (you can extend it if you want).
{ "language": "en", "url": "https://stackoverflow.com/questions/28098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "566" }
Q: SQL Server DateTime conversion failure I have a large table with 1 million+ records. Unfortunately, the person who created the table decided to put dates in a varchar(50) field. I need to do a simple date comparison - datediff(dd, convert(datetime, lastUpdate, 100), getDate()) < 31 But it fails on the convert(): Conversion failed when converting datetime from character string. Apparently there is something in that field it doesn't like, and since there are so many records, I can't tell just by looking at it. How can I properly sanitize the entire date field so it does not fail on the convert()? Here is what I have now: select count(*) from MyTable where isdate(lastUpdate) > 0 and datediff(dd, convert(datetime, lastUpdate, 100), getDate()) < 31 @SQLMenace I'm not concerned about performance in this case. This is going to be a one-time query. Changing the table to a datetime field is not an option. @Jon Limjap I've tried adding the third argument, and it makes no difference. @SQLMenace The problem is most likely how the data is stored; there are only two safe formats: ISO YYYYMMDD; ISO 8601 yyyy-mm-ddThh:mm:ss.mmm (no spaces). Wouldn't the isdate() check take care of this? I don't have a need for 100% accuracy. I just want to get most of the records that are from the last 30 days. @SQLMenace select isdate('20080131') -- returns 1 select isdate('01312008') -- returns 0 @Brian Schkerke Place the CASE and ISDATE inside the CONVERT() function. Thanks! That did it. A: Place the CASE and ISDATE inside the CONVERT() function:

SELECT COUNT(*) FROM MyTable
WHERE DATEDIFF(dd, CONVERT(DATETIME, CASE ISDATE(lastUpdate)
    WHEN 1 THEN lastUpdate
    ELSE '12-30-1899' END), GETDATE()) < 31

Replace '12-30-1899' with the default date of your choice. A: How about writing a cursor to loop through the contents, attempting the cast for each entry? When an error occurs, output the primary key or other identifying details for the problem record. I can't think of a set-based way to do this.
Not totally set-based, but if only 3 rows out of 1 million are bad it will save you a lot of time:

select * into BadDates from Yourtable where isdate(lastUpdate) = 0

select * into GoodDates from Yourtable where isdate(lastUpdate) = 1

Then just look at the BadDates table and fix those. A: The ISDATE() would take care of the rows which were not formatted properly if it were indeed being executed first. However, if you look at the execution plan, you'll probably find that the DATEDIFF predicate is being applied first - thus the cause of your pain. If you're using SQL Server Management Studio, hit CTRL+L to view the estimated execution plan for a particular query. Remember, SQL isn't a procedural language, and short-circuiting logic may work, but only if you're careful in how you apply it. A: How about writing a cursor to loop through the contents, attempting the cast for each entry? When an error occurs, output the primary key or other identifying details for the problem record. I can't think of a set-based way to do this. Edit - ah yes, I forgot about ISDATE(). Definitely a better approach than using a cursor. +1 to SQLMenace. A: I would suggest cleaning up the mess and changing the column to a datetime, because doing stuff like this: WHERE datediff(dd, convert(datetime, lastUpdate), getDate()) < 31 cannot use an index, and it will be many times slower than if you had a datetime column and did: where lastUpdate > getDate() - 31 You also need to take into account hours and seconds, of course. A: In your convert call, you need to specify a third style parameter, e.g. the format of the datetimes that are stored as varchar, as specified in this document: CAST and CONVERT (T-SQL) A: Print out the records. Give the hardcopy to the idiot who decided to use a varchar(50) and ask them to find the problem record. Next time they might just see the point of choosing an appropriate data type.
A: The problem is most likely how the data is stored, there are only two safe formats ISO YYYYMMDD ISO 8601 yyyy-mm-dd Thh:mm:ss:mmm (no spaces) these will work no matter what your language is. You might need to do a SET DATEFORMAT YMD (or whatever the data is stored as) to make it work A: Wouldn't the isdate() check take care of this? Run this to see what happens select isdate('20080131') select isdate('01312008') A: I am sure that changing the table/column might not be an option due to legacy system requirements, but have you thought about creating a view which has the date conversion logic built in? If you are using a more recent version of SQL Server, then you can possibly even use an indexed view.
{ "language": "en", "url": "https://stackoverflow.com/questions/28110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: calculating user defined formulas (with c++) We would like to have user defined formulas in our c++ program. e.g. The value v = x + ( y - (z - 2)) / 2. Later in the program the user would define x, y and z; the program should then return the result of the calculation. Sometime later the formula may get changed, so the next time the program should parse the new formula and apply the new values. Any ideas/hints on how to do something like this? So far the only solution I've come up with is to write a parser to calculate these formulas - any ideas about that? A: If it will be used frequently and if it will be extended in the future, I would almost recommend adding either Python or Lua into your code. Lua is a very lightweight scripting language which you can hook into and provide new functions, operators etc. If you want to do more robust and complicated things, use Python instead. A: You can represent your formula as a tree of operations and sub-expressions. You may want to define types or constants for Operation types and Variables. You can then easily enough write a method that recurses through the tree, applying the appropriate operations to whatever values you pass in. A: Building your own parser for this should be a straight-forward operation: 1) convert the equation from infix to postfix notation (a typical compsci assignment) (I'd use a stack) 2) wait to get the values you want 3) pop the stack of postfix items, dropping the value for the variable in where needed 4) display results A: Using Spirit (for example) to parse (and the 'semantic actions' it provides to construct an expression tree that you can then manipulate, e.g., evaluate) seems like quite a simple solution. You can find a grammar for arithmetic expressions there for example, if needed... (it's quite simple to come up with your own). Note: Spirit is very simple to learn, and quite adapted for such tasks.
A: There's generally two ways of doing it, with three possible implementations: * *as you've touched on yourself, a library to evaluate formulas *compiling the formula into code The second option here is usually done either by compiling something that can be loaded in as a kind of plugin, or it can be compiled into a separate program that is then invoked and produces the necessary output. For C++ I would guess that a library for evaluation would probably exist somewhere so that's where I would start. A: If you want to write your own, search for "formal automata" and/or "finite state machine grammar" In general what you will do is parse the string, pushing characters on a stack as you go. Then start popping the characters off and perform tasks based on what is popped. It's easier to code if you force equations to reverse-polish notation. A: To make your life easier, I think getting this kind of input is best done through a GUI where users are restricted in what they can type in. If you plan on doing it from the command line (that is the impression I get from your post), then you should probably define a strict set of allowable inputs (e.g. only single letter variables, no whitespace, and only certain mathematical symbols: ()+-*/ etc.). Then, you will need to: Read in the input char array Parse it in order to build up a list of variables and actions Carry out those actions - in BOMDAS order A: With ANTLR you can create a parser/compiler that will interpret the user input, then execute the calculations using the Visitor pattern. A good example is here, but it is in C#. You should be able to adapt it quickly to your needs and remain using C++ as your development platform.
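None of the answers above include working code, so here is a minimal C++ sketch of the stack-based evaluation they describe. Everything about it is illustrative: the function name evalFormula, the single-letter-variable convention, and the restriction to +, -, *, / and parentheses (no unary minus, no error recovery) are my own simplifying assumptions, not anything taken from the answers.

```cpp
#include <cctype>
#include <cstddef>
#include <map>
#include <stack>
#include <stdexcept>
#include <string>

// Operator precedence: * and / bind tighter than + and -.
int prec(char op) { return (op == '+' || op == '-') ? 1 : 2; }

double apply(double a, double b, char op) {
    switch (op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        case '/': return a / b;
    }
    throw std::runtime_error(std::string("unknown operator: ") + op);
}

// Evaluate an infix formula; single-letter variables are looked up in 'vars'.
double evalFormula(const std::string& expr, const std::map<char, double>& vars) {
    std::stack<double> values;
    std::stack<char> ops;
    auto popApply = [&] {            // apply the top operator to the top two values
        double b = values.top(); values.pop();
        double a = values.top(); values.pop();
        char op = ops.top(); ops.pop();
        values.push(apply(a, b, op));
    };
    for (std::size_t i = 0; i < expr.size(); ++i) {
        char c = expr[i];
        if (std::isspace(static_cast<unsigned char>(c))) continue;
        if (std::isdigit(static_cast<unsigned char>(c))) {       // numeric literal
            std::size_t len = 0;
            values.push(std::stod(expr.substr(i), &len));
            i += len - 1;
        } else if (std::isalpha(static_cast<unsigned char>(c))) { // variable
            values.push(vars.at(c));
        } else if (c == '(') {
            ops.push(c);
        } else if (c == ')') {
            while (ops.top() != '(') popApply();
            ops.pop();                                            // discard '('
        } else {                                                  // binary operator
            while (!ops.empty() && ops.top() != '(' && prec(ops.top()) >= prec(c))
                popApply();
            ops.push(c);
        }
    }
    while (!ops.empty()) popApply();
    return values.top();
}
```

For anything beyond this (functions, unary minus, meaningful error messages), a real grammar such as the Spirit one mentioned above, or an embedded Lua/Python interpreter, quickly pays off.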
{ "language": "en", "url": "https://stackoverflow.com/questions/28124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Feasibility of GPU as a CPU? What do you think the future of GPU-as-a-CPU initiatives like CUDA is? Do you think they are going to become mainstream and be the next adopted fad in the industry? Apple is building a new framework for using the GPU to do CPU tasks and there has been a lot of success in Nvidia's CUDA project in the sciences. Would you suggest that a student commit time to this field? A: Long-term I think that the GPU will cease to exist, as general purpose processors evolve to take over those functions. Intel's Larrabee is the first step. History has shown that betting against x86 is a bad idea. Study of massively parallel architectures and vector processing will still be useful. A: First of all I don't think this question really belongs on SO. In my opinion the GPU is a very interesting alternative whenever you do vector-based float mathematics. However this translates to: It will not become mainstream. Most mainstream (Desktop) applications do very few floating-point calculations. It has already gained traction in games (physics-engines) and in scientific calculations. If you consider any of those two as "mainstream", then yes, the GPU will become mainstream. I would not consider these two as mainstream and I therefore think the GPU will rise to be the next adopted fad in the mainstream industry. If you, as a student, have any interest in heavily physics-based scientific calculations, you should absolutely commit some time to it (GPUs are very interesting pieces of hardware anyway). A: GPUs will never supplant CPUs. A CPU executes a set of sequential instructions, and a GPU does a very specific type of calculation in parallel. These GPUs have great utility in numerical computing and graphics; however, most programs can in no way utilize this flavor of computing. You will soon begin seeing new processors from Intel and AMD that include GPU-esque floating point vector computations as well as standard CPU computations.
A: I think it's the right way to go. Considering that GPUs have been tapped to create cheap supercomputers, it appears to be the natural evolution of things. With so much computing power and R&D already done for you, why not exploit the available technology? So go ahead and do it. It will make for some cool research, as well as a legit reason to buy that high-end graphic card so you can play Crysis and Assassin's Creed on full graphic detail ;) A: It's one of those things that you see 1 or 2 applications for, but soon enough someone will come up with a 'killer app' that figures out how to do something more generally useful with it, at superfast speeds. Pixel shaders to apply routines to large arrays of float values, maybe we'll see some GIS coverage applications or well, I don't know. If you don't devote more time to it than I have then you'll have the same level of insight as me - i.e. little! I have a feeling it could be a really big thing, as do Intel and S3, maybe it just needs 1 little tweak added to the hardware, or someone with a lightbulb above their head. A: Commit time if you are interested in scientific and parallel computing. Don't think of CUDA and making a GPU appear as a CPU. It only allows a more direct method of programming GPUs than older GPGPU programming techniques. General purpose CPUs derive their ability to work well on a wide variety of tasks from all the work that has gone into branch prediction, pipelining, superscalar execution, etc. This makes it possible for them to achieve good performance on a wide variety of workloads, while making them suck at high-throughput memory intensive floating point operations. GPUs were originally designed to do one thing, and do it very, very well. Graphics operations are inherently parallel. You can calculate the colour of all pixels on the screen at the same time, because there are no data dependencies between the results.
Additionally, the algorithms needed did not have to deal with branches, since nearly any branch that would be required could be achieved by setting a coefficient to zero or one. The hardware could therefore be very simple. It is not necessary to worry about branch prediction, and instead of making a processor superscalar, you can simply add as many ALUs as you can cram on the chip. With programmable texture and vertex shaders, GPUs gained a path to general programmability, but they are still limited by the hardware, which is still designed for high throughput floating point operations. Some additional circuitry will probably be added to enable more general purpose computation, but only up to a point. Anything that compromises the ability of a GPU to do graphics won't make it in. After all, GPU companies are still in the graphics business and the target market is still gamers and people who need high end visualization. The GPGPU market is still a drop in the bucket, and to a certain extent will remain so. After all, "it looks pretty" is a much lower standard to meet than "100% guaranteed and reproducible results, every time." So in short, GPUs will never be feasible as CPUs. They are simply designed for different kinds of workloads. I expect GPUs will gain features that make them useful for quickly solving a wider variety of problems, but they will always be graphics processing units first and foremost. It will always be important to match the problem you have with the most appropriate tool you have to solve it.
I'm not sure what exactly to make of that, but the Khronos Group (the guys working on the OpenGL standard) is working on the OpenCL standard, and is trying to make it highly interoperable with OpenGL. This might lead to a technology which is better suited for normal software development. It's an interesting subject and, incidentally, I'm about to start my master's thesis on the subject of how best to make the GPU power available to average developers (if possible) with CUDA as the main focus. A: A long time ago, it was really hard to do floating point calculations (thousands/millions of cycles of emulation per instruction on terribly performing (by today's standards) CPUs like the 80386). People that needed floating point performance could get an FPU (for example, the 80387). The old FPUs were fairly tightly integrated into the CPU's operation, but they were external. Later on they became integrated, with the 80486 having an FPU built-in. The old-time FPU is analogous to GPU computation. We can already get it with AMD's APUs. An APU is a CPU with a GPU built into it. So, I think the actual answer to your question is, GPUs won't become CPUs; instead CPUs will have a GPU built in.
{ "language": "en", "url": "https://stackoverflow.com/questions/28147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Create an Attribute to Break the Build OK, this kind of follows on from my previous question. What I would really like to do is create some sort of attribute which allows me to decorate a method that will break the build. Much like the Obsolete("reason", true) attribute, but without falsely identifying obsolete code. To clarify: I don't want it to break the build on ANY F6 (Build) press, I only want it to break the build if a method decorated with the attribute is called somewhere else in the code. Like I said, similar to obsolete, but not the same. I know I am not alone in this, since other users want to use it for other reasons. I have never created custom attributes before so it is all new to me! A: I think this would be an excellent feature request for Microsoft: Create an abstract base class attribute CompilerExecutedAttribute that the compiler processes in some manner or that can influence the compiling process. Then we could inherit from this attribute and implement different operations, e.g. emit an error or a warning. A: If this is for XML serialization and NHibernate, where you want the parameterless constructor to be accessible (as is the case in the example you referenced), then use a private or protected parameterless constructor for serialization, or a protected constructor for NHibernate. With the protected version, you are opening yourself up to inherited classes being able to call that code. If you don't want code calling a method, don't make it accessible. EDIT: To perhaps answer the deeper question, AFAIK the compiler only knows about three attributes: Obsolete, Conditional, and AttributeUsage. To add special handling for other attributes would require modifying the compiler. A: If you consider a warning (which is what [Obsolete] throws up) build-breaking, then just use the #warning compiler directive. Edit: I've never used it, but #error is also available.
A: I think the only foolproof way would be to extend Visual Studio (through VSIP) and subscribe to the correct event (maybe in the EnvDTE.BuildEvents class), and check your code for usage of the constructor, and cancel the build if you detect it. A: This is all starting to sound a bit like Yesterday's TDWTF. :-) A: I will have to agree with Greg: make up an attribute for it. And if you're really serious, maybe find a way to figure out if the constructor is being accessed by anything other than XMLSerializer and throw an exception if it is. A: I'd suggest you use the #error directive. Another pretty unknown attribute that might do the work is the conditional attribute (depending on what you're trying to achieve) [Conditional("CONDITION")] public static void MiMethod(int a, string msg) which will remove the method invocation from the calling IL code itself unless "CONDITION" is defined. A: Create an FxCop rule, and add FxCop to your integration build in order to check for this. You'll get warnings, rather than a failing build. Attributes 'run' at reflection time rather than build time. Alternatively (and this is rather nasty) put a compiler directive around the method you don't want to be called. Then your code will break if you call it, but you can set up a build that passes the right compiler directive and doesn't. A: Throw a custom exception and unit test for it as a post-build step A: Responding 4 years later :) I had the same question about whether there were an alternative to Obsolete. From what I recall (channel9 videos) a little while ago Microsoft stated that it is working on giving devs access to something like a compiler API at some point, so in the future it's conceivable that you could write a compiler "plugin" that would allow you to decorate methods with your own custom attribute and tell the compiler to cancel if it looks like the decorated code could be getting called somewhere else in the code etc. Which would actually be pretty cool when you think about it.
It also reminds me that I should also try and read up on the progress of that compiler api thing MS is working on ... A: Why not just make something up? An unknown attribute would surely break the build. [MyMadeUpAttributeThatBreaksTheBuildForSure] public class NotDoneYet {}
{ "language": "en", "url": "https://stackoverflow.com/questions/28150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Multiple classes in a header file vs. a single header file per class For whatever reason, our company has a coding guideline that states: Each class shall have its own header and implementation file. So if we wrote a class called MyString we would need an associated MyString.h and MyString.cxx. Does anyone else do this? Has anyone seen any compiling performance repercussions as a result? Does 5000 classes in 10000 files compile just as quickly as 5000 classes in 2500 files? If not, is the difference noticeable? [We code C++ and use GCC 3.4.4 as our everyday compiler] A: The term here is translation unit and you really want to (if possible) have one class per translation unit, i.e., one class implementation per .cpp file, with a corresponding .h file of the same name. It's usually more efficient (from a compile/link standpoint) to do things this way, especially if you're doing things like incremental link and so forth. The idea being, translation units are isolated such that, when one translation unit changes, you don't have to rebuild a lot of stuff, as you would have to if you started lumping many abstractions into a single translation unit. Also you'll find many errors/diagnostics are reported via file name ("Error in Myclass.cpp, line 22") and it helps if there's a one-to-one correspondence between files and classes. (Or I suppose you could call it a 2 to 1 correspondence). A: Overwhelmed by thousands of lines of code? Having one set of header/source files per class in a directory can seem overkill. And if the number of classes goes toward 100 or 1000, it can even be frightening. But having played with sources following the philosophy "let's put everything together", the conclusion is that only the one who wrote the file has any hope of not being lost inside. Even with an IDE, it is easy to miss things because when you're playing with a source of 20,000 lines, you just close your mind to anything not exactly referring to your problem.
Real life example: the class hierarchy defined in those thousand-line sources closed itself into a diamond inheritance, and some methods were overridden in child classes by methods with exactly the same code. This was easily overlooked (who wants to explore/check 20,000 lines of source code?), and when the original method was changed (bug correction), the effect was not as universal as expected. Dependencies becoming circular? I had this problem with templated code, but I saw similar problems with regular C++ and C code. Breaking down your sources into 1 header per struct/class lets you: * *Speed up compilation because you can use symbol forward-declaration instead of including whole objects *Have circular dependencies between classes (§) (i.e. class A has a pointer to B, and B has a pointer to A) In source-controlled code, class dependencies could lead to regular moving of classes up and down the file, just to make the header compile. You don't want to study the evolution of such moves when comparing the same file in different versions. Having separate headers makes the code more modular, faster to compile, and makes it easier to study its evolution through diffs of different versions For my template program, I had to divide my headers into two files: The .HPP file containing the template class declaration/definition, and the .INL file containing the definitions of said class methods. Putting all this code inside one and only one unique header would mean putting class definitions at the beginning of this file, and the method definitions at the end. And then, if someone needed only a small part of the code, with the one-header-only solution, they still would have to pay for the slower compilation. (§) Note that you can have circular dependencies between classes if you know which class owns which. This is a discussion about classes having knowledge of the existence of other classes, not the shared_ptr circular-dependency antipattern.
One last word: Headers should be self-sufficient One thing, though, that must be respected by a solution of multiple headers and multiple sources. When you include one header, no matter which header, your source must compile cleanly. Each header should be self-sufficient. You're supposed to develop code, not go treasure-hunting by grepping your 10,000+ source-file project to find which header defines the symbol in the 1,000-line header you need to include just because of one enum. This means that either each header defines or forward-declares all the symbols it uses, or includes all the needed headers (and only the needed headers). Question about circular dependencies: underscore_d asks: Can you explain how using separate headers makes any difference to circular dependencies? I don't think it does. We can trivially create a circular dependency even if both classes are fully declared in the same header, simply by forward-declaring one in advance before we declare a handle to it in the other. Everything else seems to be great points, but the idea that separate headers facilitate circular dependencies seems way off - underscore_d, Nov 13 at 23:20 Let's say you have 2 class templates, A and B. Let's say the definition of class A (resp. B) has a pointer to B (resp. A). Let's also say the methods of class A (resp. B) actually call methods from B (resp. A). You have a circular dependency both in the definition of the classes, and the implementations of their methods. If A and B were normal classes, and A and B's methods were in .CPP files, there would be no problem: You would use a forward declaration, have a header for each class definition, then each CPP would include both HPPs. But as you have templates, you actually have to reproduce that pattern above, but with headers only.
This means: * *a definition header A.def.hpp and B.def.hpp *an implementation header A.inl.hpp and B.inl.hpp *for convenience, a "naive" header A.hpp and B.hpp Each header will have the following traits: * *In A.def.hpp (resp. B.def.hpp), you have a forward declaration of class B (resp. A), which will enable you to declare a pointer/reference to that class *A.inl.hpp (resp. B.inl.hpp) will include both A.def.hpp and B.def.hpp, which will enable methods from A (resp. B) to use the class B (resp. A). *A.hpp (resp. B.hpp) will directly include both A.def.hpp and A.inl.hpp (resp. B.def.hpp and B.inl.hpp) *Of course, all headers need to be self sufficient, and protected by header guards The naive user will include A.hpp and/or B.hpp, thus ignoring the whole mess. And having that organization means the library writer can solve the circular dependencies between A and B while keeping both classes in separate files, easy to navigate once you understand the scheme. Please note that it was an edge case (two templates knowing each other). I expect most code to not need that trick. A: Most places where I have worked have followed this practice. I've actually written coding standards for BAE (Aust.) along with the reasons why instead of just carving something in stone with no real justification. Concerning your question about source files, it's not so much time to compile but more an issue of being able to find the relevant code snippet in the first place. Not everyone is using an IDE. And knowing that you just look for MyClass.h and MyClass.cpp really saves time compared to running "grep MyClass *.(h|cpp)" over a bunch of files and then filtering out the #include MyClass.h statements... Mind you there are work-arounds for the impact of large numbers of source files on compile times. See Large Scale C++ Software Design by John Lakos for an interesting discussion. You might also like to read Code Complete by Steve McConnell for an excellent chapter on coding guidelines. 
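To make the def/inl layout above concrete, here is the same idea collapsed into a single translation unit. This is only a sketch: plain structs stand in for the class templates (with templates the bodies must stay in headers anyway, which is the whole point of the split), and the file-name comments merely indicate where each piece would live in the scheme described above.

```cpp
#include <string>

// --- A.def.hpp: class A's definition only needs a *forward declaration* of B,
// because holding a pointer to an incomplete type is allowed.
struct B;
struct A {
    B* peer = nullptr;
    std::string greetPeer() const;  // body deferred until B is fully defined
};

// --- B.def.hpp: the mirror image (re-declaring A after its definition is harmless).
struct A;
struct B {
    A* peer = nullptr;
    std::string name() const { return "B"; }
};

// --- A.inl.hpp: by now both definitions are visible, so A's methods may
// safely dereference a B*.
inline std::string A::greetPeer() const {
    return peer ? "hello " + peer->name() : "no peer";
}
```

Each of the real headers would additionally carry its own include guard and include its own dependencies, so that any one of them compiles cleanly on its own, as the "self-sufficient headers" rule above demands.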
Actually, this book is a great read that I keep coming back to regularly. N.B. You need the first edition of Code Complete, a copy of which is easily available online. The interesting section on coding and naming guidelines didn't make it into Code Complete 2. A: It's common practice to do this, especially to be able to include .h in the files that need it. Of course the performance is affected but try not to think about this problem until it arises :). It's better to start with the files separated and after that try to merge the .h's that are commonly used together to improve performance if you really need to. It all comes down to dependencies between files and this is very specific to each project. A: The best practice, as others have said, is to place each class in its own translation unit from a code maintenance and understandability perspective. However on large scale systems this is sometimes not advisable - see the section entitled "Make Those Source Files Bigger" in this article by Bruce Dawson for a discussion of the tradeoffs. A: We do that at work, it's just easier to find stuff if the class and files have the same name. As for performance, you really shouldn't have 5000 classes in a single project. If you do, some refactoring might be in order. That said, there are instances when we have multiple classes in one file. And that is when it's just a private helper class for the main class of the file. A: +1 for separation. I just came onto a project where some classes are in files with a different name, or lumped in with another class, and it is impossible to find these in a quick and efficient manner. You can throw more resources at a build - you can't make up lost programmer time because (s)he can't find the right file to edit. A: In addition to simply being "clearer", separating classes into separate files makes it easier for multiple developers not to step on each other's toes.
There will be less merging when it comes time to commit changes to your version control tool. A: The same rule applies here, but it notes a few exceptions where it is allowed. Like so: * *Inheritance trees *Classes that are only used within a very limited scope *Some Utilities are simply placed in a general 'utils.h' A: It is very helpful to have only one class per file, but if you do your building via bulkbuild files which include all the individual C++ files, it makes for faster compilations since startup time is relatively large for many compilers. A: I found these guidelines particularly useful when it comes to header files: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Header_Files A: Two words: Ockham's Razor. Keep one class per file with the corresponding header in a separate file. If you do otherwise, like keeping a piece of functionality per file, then you have to create all kinds of rules about what constitutes a piece of functionality. There is much more to gain by keeping one class per file. And, even half-decent tools can handle large quantities of files. Keep it simple, my friend. A: I'm surprised that almost everyone is in favor of having one file per class. The problem with that is that in the age of 'refactoring' one may have a hard time keeping the file and class names in sync. Every time you change a class name, you then have to change the file name too, which means that you have to also make a change everywhere the file is included. I personally group related classes into a single file and then give such a file a meaningful name that won't have to change even if a class name changes. Having fewer files also makes scrolling through a file tree easier. I use Visual Studio on Windows and Eclipse CDT on Linux, and both have shortcut keys that take you straight to a class declaration, so finding a class declaration is easy and quick.
Having said that, I think once a project is completed, or its structure has 'solidified', and name changes become rare, it may make sense to have one class per file. I wish there were a tool that could extract classes and place them in distinct .h and .cpp files. But I don't see this as essential. The choice also depends on the type of project one works on. In my opinion the issue doesn't deserve a black-and-white answer since either choice has pros and cons.
{ "language": "en", "url": "https://stackoverflow.com/questions/28160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: Does PHP have an equivalent to this type of Python string substitution? Python has this wonderful way of handling string substitutions using dictionaries: >>> 'The %(site)s site %(adj)s because it %(adj)s' % {'site':'Stackoverflow', 'adj':'rocks'} 'The Stackoverflow site rocks because it rocks' I love this because you can specify a value once in the dictionary and then replace it all over the place in the string. I've tried to achieve something similar in PHP using various string replace functions but everything I've come up with feels awkward. Does anybody have a nice clean way to do this kind of string substitution in PHP? Edit Here's the code from the sprintf page that I liked best. <?php function sprintf3($str, $vars, $char = '%') { $tmp = array(); foreach($vars as $k => $v) { $tmp[$char . $k . $char] = $v; } return str_replace(array_keys($tmp), array_values($tmp), $str); } echo sprintf3( 'The %site% site %adj% because it %adj%', array('site'=>'Stackoverflow', 'adj'=>'rocks')); ?> A: function subst($str, $dict){ return preg_replace(array_map(create_function('$a', 'return "/%\\($a\\)s/";'), array_keys($dict)), array_values($dict), $str); } You call it like so: echo subst('The %(site)s site %(adj)s because it %(adj)s', array('site'=>'Stackoverflow', 'adj'=>'rocks')); A: @Marius I don't know if it's faster, but you can do it without regexes: function subst($str, $dict) { foreach ($dict as $key => $value) { $str = str_replace($key, $value, $str); } return $str; } A: Some of the user-contributed notes and functions in PHP's documentation for sprintf come quite close. Note: search the page for "sprintf2".
{ "language": "en", "url": "https://stackoverflow.com/questions/28165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Why does Visual Studio create a new .vsmdi file? If I open a solution in Visual Studio 2008 and run a unit test then VS creates a new .vsmdi file in the Solution Items folder and gives it the next number available e.g. My Solution2.vsmdi. Any idea why VS is doing this and how I can get it to stop doing this? A: It appears that the VSMDI problem is a known bug and has been around since VS2005 Team System but it has no clear fix as yet. Another reason to NOT use MS Test. An MSDN blog details how to run unit tests without VSMDI files. A: Assuming that the VSMDI file is under source control, here's a Microsoft Support article about this issue: Multiple vsmdi Files after Running Team Test with VSMDI file under Source Control Which says: Someone ran a test while someone else was modifying the vsmdi file. Team Test detects that the VSMDI file is out of sync; therefore, Team Test makes a new one, and thus you see the incrementing vsmdi files. And: Going forward you want to make sure the file is not marked for auto checkout when it is modified. When the current tester has the VSMDI file checked out, you do not want other users to be able to check it out. You want your developers to check out the file, run a test, and check it back in. A: I work around this by always checking out the .vsmdi. It seems that this only happens when the .vsmdi file is read-only, e.g. not checked-out in a version control system that uses that kind of lock-local-files behavior (Perforce etc). A: An old post, but vsmdi is a metadata file created by the test system.
{ "language": "en", "url": "https://stackoverflow.com/questions/28171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: How to select posts with specific tags/categories in WordPress This is a very specific question regarding MySQL as implemented in WordPress. I'm trying to develop a plugin that will show (select) posts that have specific 'tags' and belong to specific 'categories' (both multiple). I was told it's impossible because of the way categories and tags are stored:
* wp_posts contains a list of posts; each post has an "ID"
* wp_terms contains a list of terms (both categories and tags); each term has a TERM_ID
* wp_term_taxonomy has a list of terms with their TERM_IDs and has a taxonomy definition for each one of those (either a category or a tag)
* wp_term_relationships has associations between terms and posts
How can I join the tables to get all posts with tags "Nuclear" and "Deals" that also belong to the category "Category1"?
A: I misunderstood you. I thought you wanted Nuclear or Deals. The below should give you only Nuclear and Deals.
select p.*
from wp_posts p,
     wp_terms t, wp_term_taxonomy tt, wp_term_relationship tr,
     wp_terms t2, wp_term_taxonomy tt2, wp_term_relationship tr2
     wp_terms t2, wp_term_taxonomy tt2, wp_term_relationship tr2
where p.id = tr.object_id
  and t.term_id = tt.term_id
  and tr.term_taxonomy_id = tt.term_taxonomy_id
  and p.id = tr2.object_id
  and t2.term_id = tt2.term_id
  and tr2.term_taxonomy_id = tt2.term_taxonomy_id
  and p.id = tr3.object_id
  and t3.term_id = tt3.term_id
  and tr3.term_taxonomy_id = tt3.term_taxonomy_id
  and (tt.taxonomy = 'category' and tt.term_id = t.term_id and t.name = 'Category1')
  and (tt2.taxonomy = 'post_tag' and tt2.term_id = t2.term_id and t2.name = 'Nuclear')
  and (tt3.taxonomy = 'post_tag' and tt3.term_id = t3.term_id and t3.name = 'Deals')
A: Try this:
select p.*
from wp_posts p,
     wp_terms t, wp_term_taxonomy tt, wp_term_relationship tr
     wp_terms t2, wp_term_taxonomy tt2, wp_term_relationship tr2
where p.id = tr.object_id
  and t.term_id = tt.term_id
  and tr.term_taxonomy_id = tt.term_taxonomy_id
  and p.id = tr2.object_id
  and t2.term_id = tt2.term_id
  and tr2.term_taxonomy_id = tt2.term_taxonomy_id
  and (tt.taxonomy = 'category' and tt.term_id = t.term_id and t.name = 'Category1')
  and (tt2.taxonomy = 'post_tag' and tt2.term_id = t2.term_id and t2.name in ('Nuclear', 'Deals'))
Essentially I'm employing 2 copies of the pertinent child tables - terms, term_taxonomy, and term_relationship. One copy applies the 'Category1' restriction, the other the 'Nuclear' or 'Deals' restriction. BTW, what kind of project is this with posts all about nuclear deals? You trying to get us on some government list? ;)
A: What a gross DB structure. Anyway, I'd do something like this (note I prefer EXISTS to joins, but you can re-write them as joins if you like; most query analyzers will collapse them to the same query plan anyway). You may have to do some additional juggling one way or another to make it work...
SELECT *
FROM wp_posts p
WHERE EXISTS(
    SELECT * FROM wp_term_relationship tr
    WHERE tr.object_id = p.id
    AND EXISTS(
        SELECT * FROM wp_term_taxonomy tt
        WHERE tt.term_taxonomy_id = tr.term_taxonomy_id
        AND tt.taxonomy = 'category'
        AND EXISTS(
            SELECT * FROM wp_terms t
            WHERE t.term_id = tt.term_id AND t.name = "Category1"
        )
    )
    AND EXISTS(
        SELECT * FROM wp_term_taxonomy tt
        WHERE tt.term_taxonomy_id = tr.term_taxonomy_id
        AND tt.taxonomy = 'post_tag'
        AND EXISTS(
            SELECT * FROM wp_terms t
            WHERE t.term_id = tt.term_id AND t.name = "Nuclear"
        )
        AND EXISTS(
            SELECT * FROM wp_terms t
            WHERE t.term_id = tt.term_id AND t.name = "Deals"
        )
    )
)
A: So I tried both options on my WordPress db. I looked for the category "Tech" in my posts with the tags "Perl" AND "Programming". Eric's worked once I added a missing comma in the initial select statement. It returned 3 records. The problem is that the section that is looking for the "post_tag" is actually working as an OR option. One of my posts only had one tag not both. Also it would be good to make the SELECT DISTINCT. I tried Matt's version, but it kept returning an empty set.
I may try to "juggle" with it.
A: Thanks @Eric, it works! Just a few code corrections for future reference:
* the first select statement misses a comma after wp_term_relationship tr2
* In the same select statement the following must be changed: wp_terms t2, wp_term_taxonomy tt2, wp_term_relationship tr2 should be wp_terms t3, wp_term_taxonomy tt3, wp_term_relationship tr3
A: Really great answer - it helped me a lot, and it gave me the basic approach to build my complex query! One small correction for future readers like me :) "wp_term_relationship" will give a "table doesn't exist" error; use wp_term_relationships, as that is the correct table name. Thanks Eric
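Since the answers above are hard to verify by eye, here is a small runnable sketch of the same "category AND both tags" selection. SQLite via Python stands in for MySQL purely for illustration; the table and column names mirror the WordPress schema described in the question, the sample data is invented, and the query uses a GROUP BY/HAVING count rather than the self-joins shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# Minimal stand-ins for the four WordPress tables described in the question.
c.executescript("""
CREATE TABLE wp_posts (ID INTEGER PRIMARY KEY, post_title TEXT);
CREATE TABLE wp_terms (term_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE wp_term_taxonomy (term_taxonomy_id INTEGER PRIMARY KEY,
                               term_id INTEGER, taxonomy TEXT);
CREATE TABLE wp_term_relationships (object_id INTEGER,
                                    term_taxonomy_id INTEGER);
""")
c.executemany("INSERT INTO wp_posts VALUES (?, ?)",
              [(1, "matching post"), (2, "non-matching post")])
c.executemany("INSERT INTO wp_terms VALUES (?, ?)",
              [(10, "Category1"), (11, "Nuclear"), (12, "Deals")])
c.executemany("INSERT INTO wp_term_taxonomy VALUES (?, ?, ?)",
              [(100, 10, "category"), (101, 11, "post_tag"),
               (102, 12, "post_tag")])
c.executemany("INSERT INTO wp_term_relationships VALUES (?, ?)",
              [(1, 100), (1, 101), (1, 102),  # post 1: category + both tags
               (2, 100), (2, 101)])           # post 2: missing the 'Deals' tag
rows = c.execute("""
    SELECT p.ID, p.post_title
    FROM wp_posts p
    JOIN wp_term_relationships tr ON tr.object_id = p.ID
    JOIN wp_term_taxonomy tt ON tt.term_taxonomy_id = tr.term_taxonomy_id
    JOIN wp_terms t ON t.term_id = tt.term_id
    WHERE (tt.taxonomy = 'category' AND t.name = 'Category1')
       OR (tt.taxonomy = 'post_tag' AND t.name IN ('Nuclear', 'Deals'))
    GROUP BY p.ID
    HAVING COUNT(DISTINCT t.term_id) = 3
""").fetchall()
print(rows)  # only post 1 carries the category and both tags
```

The HAVING count requires all three terms to match, which avoids the OR behaviour reported in the test above.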
{ "language": "en", "url": "https://stackoverflow.com/questions/28196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Do you follow the Personal Software Process? Does your organization/team follow the Team Software Process? For more information - Personal Software Process on Wikipedia and Team Software Process on Wikipedia. I have two questions:
* What benefits have you seen from these processes?
* What tools and/or methods do you use to follow these processes?
A: I got into this once, even tried using PSP Dashboard. It's just too hard to keep up with. Who wants to use a stopwatch for all their activities? Follow Joel's advice on Painless Scheduling and Evidence Based Scheduling. +1 this question, -1 to PSP.
A: I used the PSP and TSP process faithfully for 4 years (though it was at the beginning of my software career). As an idealist you will love what you are doing, and of course, yes, there are amazing results as well. Though PSP advocates recording your defects down to the smallest detail (such as a missing ; or a typo), I was in a conversation with Mr. Watts Humphrey where a lot of people asked him about the advancements of compilers and the lack of object orientation (which I found odd - how is it missing? I was an OO programmer and was using it successfully). There was a very good answer provided by him. It went like this: "PSP, or as a matter of fact any process methodology, is not a concept that's stuck on a single idea. The core idea is to introduce people to quality methods and analysis. "It's always adaptive. You can tailor it to fit your needs. If you feel like you will go with the Function Point methodology, you are alright to go ahead with it. Same for any estimation technique. But you should do it constantly and repetitively. "Same with the advancement of compilers. If you feel the WBS in the structure of PSP won't fit your development, do modify it and use it, but again, do it continuously.
"As you do it continuously, you will have collected your own historical data and will statistically be able to make predictable and accurate estimates on all the parameters."
Maybe I am giving this answer late, but when I read all the replies, I felt I wanted to share this. As for the tools, we have Process Dashboard, the PSP Excel sheet and so on.
A: For the PSP, I have seen the Software Process Dashboard, but it seems awfully difficult to use.
A: I learned it just this last semester in college and it worked great for me. I know that by following it to the letter I can be confident that when I hit Compile I won't have any errors, and that after hitting Run I won't have to spend time fixing and re-compiling the program again and again until the mess is fixed. People complain about having to record the "missing semicolons" and such, but by the time you're on program 7, you're no longer making such trivial mistakes; instead your defects are found in the important bits of your program. I have not had the opportunity to apply it to a real scenario yet, but I'm really looking forward to it!
A: I try to follow the PSP 2.1 process when possible. It really helps me keep a focus on not skipping important, but less exciting, portions of a project. Usually this is design and design review for small projects. To keep track of time you can use the PSP Dashboard, which has a bunch of built-in features and scripts that help you follow the process. If you are just looking for a time-tracking tool, I also like http://slimtimer.com. It can also do some decent reports.
A: I've been using PSP for the last six months. It is time consuming. For my estimations I had to spend 7% of my time filling in forms. It is frustrating to have to record the mistake "missing semicolon" over and over again. But on the other hand, as I got used to the process, it became valuable, as I started to see which errors I was mostly making and I started "naturally" avoiding them.
It also makes you "review" your code so you can see if there's any problem before hitting the compile button. For tools I recommend using Timetracker: http://0xff.net/ I recommend at least trying PSP for a couple of months, because you will create some habits that help reduce the time you spend compiling and correcting minor bugs.
A: I have completed the PSP course; the next one is supposed to be TSP, which is meant for team dynamics, as others say. I have mixed feelings about PSP (mostly negative, but the results were interesting), and I arrived at the following conclusions:
* First of all, my main source of frustration is that the design templates are way too tedious and impractical. Change them for UML and BPMN, tell your instructors from the start, IMPOSE IF NECESSARY. The book itself says that the design templates are for people who don't know or want to learn UML.
* Secondly, estimations were the only valuable part for me. The book itself says that you can use other measures apart from lines of code, and it even tells you how to know how relevant they are statistically. My take on this (counting lines of code) is that a tool/plugin that connects with your VCS (git, mercurial) must exist to automate the building of your personal database; otherwise it is too tedious to track base/added/reused parts.
* The process itself is nice, but not applicable to big projects. Why? Because it just doesn't cope with iterations. In the real world, due to requirement changes, you will always have to reiterate on a project. You can still apply the discipline to small programmatic tasks, that is: plan, design, review your design (have design standards and a small checklist you can memorize), code, review your code (have clear coding standards and a small mental checklist you can memorize), test, ponder on your mistakes. Any experienced programmer will know these eventually become intuitive steps to follow. My recommendation in real practice: follow the process, but don't document anything other than your design, and if you do implement unit tests, document them well.
* This process might actually be worth following, and practical... for real-time system programming where there is absolutely no room for mistakes; otherwise it doesn't feel worth it.
* If you are looking for a methodology to organize yourself and improve focus, try GTD (Getting Things Done) and Pomodoro first.
* If you have obsessive-compulsive disorder you might actually enjoy PSP =).
My final recommendation: learn from it as a reference; it might lead to better and more practical approaches. This thing is just too academic.
P.S.: R.I.P. Watts Humphrey
A: I went through the training and then my company paid for me to go to Carnegie Mellon and go through the PSP instructor training course to get certified as an instructor. I think the goal was to use this as part of our company's CMM/CMMI effort. I met Watts Humphrey and found him to be a kind, gentle soul with some deeply held ideas about process. I read several of his books as well. Here's my take on it in a nutshell - it is too structured for most people to follow, assuming you follow things to the letter. The idea of estimation based on historic info is OK, particularly in the classroom setting, but in the real world where estimates are undone in a day due to the changing tide of requirements and direction, it is far less useful. I've also done Wide Band Delphi estimation and that was OK, but honestly wasn't necessarily any better than the 'best guess' I'd make. My team was less than enthusiastic about PSP and that is part of the problem - developer buy-in. My company was doing it for the wrong reason - simply to say "hey, look, we use PSP and have some certified instructors!". In the end, I've found using an 'agile' approach to be better. I have a backlog of work to do and can generally estimate it pretty well.
I've been doing it long enough that I can make pretty good rough estimates on time and frankly don't think that the time tracking really improves things much. Perhaps in some environments it would work well, but at my place, we'll keep pumping out quality software without all the process hoops that yield questionable benefits. Just my two cents.
A: I followed the PSP for a few weeks some years ago, because my group wanted to experiment with it. I found it very disappointing and even irritating to work with. It exhausted my patience. My main negative points are:
* Ridiculous emphasis on things like typos or missing semicolons.
* Impractical forms that you have to fill in by hand.
* Focus on procedural programming instead of OO.
* Estimating involves counting the number of loops, functions, etc.
I found it a massive waste of time. I'd rather choose to leave this profession than be forced to follow the PSP. Related material: My answer about a PSP book in the "What programming book would you NOT recommend to developers" question.
A: I used it during university, but at work we really don't have a process at all. Only recently have we started using version control. My experience with it was that it seemed far too tedious to be useful. If it's not automated, then it can go away.
{ "language": "en", "url": "https://stackoverflow.com/questions/28197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Best Apache Ant Template Every time I create a new project I copy the last project's ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
A: An alternative to making a template is to evolve one by gradually generalising your current project's Ant script so that there are fewer changes to make the next time you copy it for use on a new project. There are several things you can do.
Use ${ant.project.name} in file names, so you only have to mention your application name in the project element. For example, if you generate myapp.jar:
<project name="myapp">
  ...
  <target name="jar">
    ...
    <jar jarfile="${ant.project.name}.jar" ...
Structure your source directory structure so that you can package your build by copying whole directories, rather than naming individual files. For example, if you are copying JAR files to a web application archive, do something like:
<copy todir="${war}/WEB-INF/lib" flatten="true">
  <fileset dir="lib" includes="**/*.jar"/>
</copy>
Use properties files for machine-specific and project-specific build file properties:
<!-- Machine-specific property over-rides -->
<property file="/etc/ant/build.properties" />
<!-- Project-specific property over-rides -->
<property file="build.properties" />
<!-- Default property values, used if not specified in properties files -->
<property name="jboss.home" value="/usr/share/jboss" />
...
Note that Ant properties cannot be changed once set, so you override a value by defining a new value before the default value.
A: You can give http://import-ant.sourceforge.net/ a try. It is a set of build file snippets that can be used to create simple custom build files.
A: If you are working on several projects with similar directory structures and want to stick with Ant instead of going to Maven, use the Import task. It allows you to have the project build files just import the template and define any variables (classpath, dependencies, ...) and keep all the real build script off in the imported template. It even allows overriding of the tasks in the template, which lets you put in project-specific pre or post target hooks.
A: I had the same problem and generalized my templates and grew them into their own project: Antiplate. Maybe it's also useful for you.
A: I used to do exactly the same thing.... then I switched to maven. Maven relies on a simple xml file to configure your build and a simple repository to manage your build's dependencies (rather than checking these dependencies into your source control system with your code). One feature I really like is how easy it is to version your jars - easily keeping previous versions available for legacy users of your library. This also works to your benefit when you want to upgrade a library you use - like junit. These dependencies are stored as separate files (with their version info) in your maven repository so old versions of your code always have their specific dependencies available. It's a better Ant.
A: I used to do exactly the same thing.... then I switched to maven. Oh, it's Maven 2. I was afraid that someone was still seriously using Maven nowadays. Leaving the jokes aside: if you decide to switch to Maven 2, you have to take care while looking for information, because Maven 2 is a complete reimplementation of Maven, with some fundamental design decisions changed. Unfortunately, they didn't change the name, which has been a great source of confusion in the past (and still sometimes is, given the "memory" nature of the web). Another thing you can do if you want to stay in the Ant spirit is to use Ivy to manage your dependencies.
A: One thing to look at -- if you're using Eclipse, check out the ant4eclipse tasks. I use a single build script that asks for the details set up in Eclipse (source dirs, build path including dependency projects, build order, etc). This allows you to manage dependencies in one place (Eclipse) and still be able to use a command-line build for automation.
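Pulling the suggestions above together (generalising with ${ant.project.name}, externalising properties, and sharing targets via the import task), a minimal template might look like the following sketch. All file names, paths, and target names here are illustrative, not taken from the answers:

```xml
<!-- common-build.xml: shared, generalised targets kept in one place -->
<project name="common-template">
  <!-- machine- and project-specific overrides first, then defaults -->
  <property file="/etc/ant/build.properties"/>
  <property file="build.properties"/>
  <property name="src" value="src"/>
  <property name="build" value="build"/>

  <target name="compile">
    <mkdir dir="${build}/classes"/>
    <javac srcdir="${src}" destdir="${build}/classes" includeantruntime="false"/>
  </target>

  <!-- the jar is named after the *importing* project, not this template -->
  <target name="jar" depends="compile">
    <jar jarfile="${build}/${ant.project.name}.jar" basedir="${build}/classes"/>
  </target>
</project>
```

Each new project then needs only a stub build.xml such as `<project name="myapp" default="jar"><import file="../common-build.xml"/></project>`; since ${ant.project.name} resolves to the importing project's name, the project name is the only thing that changes per project.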
{ "language": "en", "url": "https://stackoverflow.com/questions/28202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I convert types in C++? I'm using two different libraries in my project, and both of them supply a basic rectangle struct. The problem with this is that there seems to be no way to insert a conversion between the types, so I can't call a function in one library with the result from a function in the other. If I was the author of either of these, I could create conversions; from the outside, I can't.
library a:
typedef struct rectangle {
    sint16 x;
    sint16 y;
    uint16 w;
    uint16 h;
} rectangle;
library b:
class Rect {
    int x;
    int y;
    int width;
    int height;
    /* ... */
};
Now, I can't make a converter class, because C++ will only look for a conversion in one step. This is probably a good thing, because there would be a lot of possibilities involving creating new objects of all kinds of types. I can't make an operator that takes the struct from a and supplies an object of the class from b:
foo.cpp:123 error: ‘operator b::Rect(const rectangle&)’ must be a nonstatic member function
So, is there a sensible way around this?
edit: I should perhaps also point out that I'd really like some solution that makes working with the result seamless, since I don't expect to be that coder. (Though I agree, old-school, explicit conversion would have been a good choice. The other branch, reinterpret_cast, has the same problem..)
edit2: Actually, none of the suggestions really answer my actual question; Konrad Rudolph seems to be correct. C++ actually can't do this. Sucks, but true. (If it makes any difference, I'm going to try subclassing as suggested by CodingTheWheel.)
A: Create an intermediate shim type "RectangleEx", and define custom conversions to/from the 3rd-party rectangle types. Whenever you speak to either API, do so through the shim class. Another way would be to derive a class from either rect or Rectangle, and insert conversions/constructors there.
A: Not sure how sensible this is, but how about something like this:
class R {
public:
    R(const rectangle& r) { ... };
    R(const Rect& r) { ... };
    operator rectangle() const { return ...; }
    operator Rect() const { return ...; }
private:
    ...
};
Then you can just wrap every rectangle in R() and the "right thing" will happen.
A: If you can't modify the structures then you have no alternative to writing a manual conversion function, because overloading conversion operators only works within the class body. There's no other way.
A: It may not be feasible in your case, but I've seen people employ a little preprocessor-foo to massage the types into compatibility. Even this assumes that you are building one or both libraries. It is also possible that you don't want to do this at all, but want to re-evaluate some early decision. Or not.
A: If the structs were the same internally, you could do a reinterpret_cast; however, since it looks like you have 16-bit vs 32-bit fields, you're probably stuck converting on each call, or writing wrappers for all functions of one of the libraries.
A: Why not something simple like this: (note this may/probably won't compile)
Rect* convert(const rectangle& src) {
    return new Rect(src.x, src.y, src.w, src.h);
}
int main() {
    rectangle r;
    r.x = 1;
    r.y = 2;
    r.w = 3;
    r.h = 4;
    Rect* foo = convert(r);
    ...
    delete foo;
}
EDIT: Looks like koko's and I have the same idea.
A: Maybe you could try it with operator overloading? (Maybe a = operator which is not a method of your class)
Rect operator= (const Rect&, const rectangle&)
More about this in The C++ Programming Language by Bjarne Stroustrup or maybe on this page: http://www.cs.caltech.edu/courses/cs11/material/cpp/donnie/cpp-ops.html
{ "language": "en", "url": "https://stackoverflow.com/questions/28212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In ASP.NET, what are the different ways to inline code in the .aspx? Can I get a 'when to use' for these and others? <% %> <%# EVAL() %> Thanks
A: You can also use <%= Class.Method() %> and it will print the result, just as you can do in Ruby on Rails.
A: Just want to add, there's also the resources expression <%$ Resources:resource, welcome %>, and ASP.NET will look for a localized version of "welcome" in satellite assemblies automatically.
A: Check out the Web Forms Syntax Reference on MSDN. For basics,
* <% %> is used for pure code blocks. I generally only use this for if statements, e.g. to emit either <div class="authenticated"> or <div class="unauthenticated">.
* <%= %> is used to add text into your markup; that is, it equates to <div class='<%= IsLoggedIn ? "authenticated" : "unauthenticated" %>'>
* <%# Expression %> is very similar to the above, but it is evaluated in a DataBinding scenario. One thing that this means is that you can use these expressions to set values of runat="server" controls, which you can't do with the <%= %> syntax. Typically this is used inside of a template for a databound control, but you can also use it in your page, and then call Page.DataBind() (or Control.DataBind()) to cause that code to evaluate.
The others mentioned in the linked article are less common, though they certainly have their uses, too.
A: In ASP.NET 4.0 comes the <%: %> syntax for writing something HTML-encoded. <%: "<script>alert('Hello XSS')</script>" %> The above can be used instead of the below. <%= Html.Encode("<script>alert('Hello XSS')</script>") %>
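Putting the syntaxes above side by side in one illustrative .aspx fragment (the identifiers ShowGreeting, UserName, Title, and the resource keys are made up for the example, not from the answers):

```aspx
<%-- <% %>: pure code block --%>
<% if (ShowGreeting) { %>
  <%-- <%= %>: writes the expression result into the markup --%>
  <p>Hello, <%= UserName %></p>
<% } %>

<%-- <%# %>: evaluated during data binding, so it can set server-control values --%>
<asp:Label runat="server" Text='<%# Eval("Title") %>' />

<%-- <%$ %>: expression syntax, e.g. looking up a localized resource --%>
<asp:Label runat="server" Text='<%$ Resources:Strings, Welcome %>' />

<%-- <%: %> (ASP.NET 4.0): like <%= %> but HTML-encoded --%>
<%: Request.QueryString["q"] %>
```

Note the <%# %> lines only produce output once Page.DataBind() (or the containing control's DataBind()) runs, as described above.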
{ "language": "en", "url": "https://stackoverflow.com/questions/28219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Bash reg-exp substitution Is there a way to run a regexp-string replace on the current line in the bash? I find myself rather often in the situation where I have typed a long commandline and then realize that I would like to change a word somewhere in the line. My current approach is to finish the line, press Ctrl+A (to get to the start of the line), insert a # (to comment out the line), press enter and then use the ^oldword^newword syntax (^oldword^newword executes the previous command after substituting oldword by newword). But there has to be a better (faster) way to achieve this. (The mouse is not possible, since I am in an ssh session most of the time). Probably there is some emacs-like key-command for this that I don't know about. Edit: I have tried using vi-mode. Something strange happened. Although I am a loving vim-user, I had serious trouble using my beloved bash. All those finger-movements that have been burned into my subconscious suddenly stopped working. I quickly returned to emacs-mode and considered giving emacs a try as my favorite editor (although I guess the same thing might happen again). A: G'day, What about using vi mode instead? Just enter set -o vi Then you can go to the word you want to change and just do a cw or cW depending on what's in the word? Oops, forgot to add: you enter ESC k to go to the previous line in the command history. What do you normally use for an editor? cheers, Rob Edit: What I forgot to say in my original reply was that you need to think of the vi command line in bash using the commands you enter when you are in "ex" mode in vi, i.e. after you've entered the colon. Worst thing is that you need to move around through your command history using the ancient vi commands of h (to the left) and l (to the right). You can use w (or W) to bounce across words though. Once you get used to it though, you have all sorts of commands available, e.g.
entering ESC / my_command will look back through your history, most recent first, to find the first occurrence of a command line containing the text my_command. Once it has found that, you can then use n to find the next occurrence, etc. And N to reverse the direction of the search. I'd go have a read of the man page for bash to see what's available under vi mode. Once you get over the fact that up-arrow and down-arrow are replaced by ESC k, and then j, you'll see that vi mode offers more than emacs mode for command line editing in bash. IMHO natchurly! (-: Emacs? Eighty megs and constantly swapping! cheers, Rob
A: in ksh, in vi mode, if you hit 'v' while in command mode it will spawn a full vi session on the contents of your current command line. You can then edit using the full range of vi commands (global search and replace in your case). When you :wq from vi, the edited command is executed. I'm sure something similar exists for bash. Since bash tends to extend its predecessors, there's probably something similar.
A: Unfortunately, no, there's not really a better way. If you're just tired of making the keystrokes, you can use macros to trim them down. Add the following to your ~/.inputrc:
"\C-x6": "\C-a#\C-m^"
"\C-x7": "\C-m\C-P\C-a\C-d\C-m"
Now, in a new bash instance (or after reloading .inputrc in your current shell by pressing C-x C-r), you can do the following:
* Type a bogus command (e.g., ls abcxyz).
* Press Ctrl-x, then 6. The macro inserts a # at the beginning of the line, executes the commented line, and types your first ^.
* Type your correction (e.g., xyz^def).
* Press Ctrl-x, then 7. The macro completes your substitution, then goes up to the previous (commented) line, removes the comment character, and executes it again.
It's not exactly elegant, but I think it's the best you're going to get with readline.
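As a scriptable aside (not from the answers above): the same replace-one-word idea behind ^oldword^newword is available on shell variables via bash parameter expansion, which helps when the command text is held in a variable inside a script. The command line and words below are made up for illustration:

```shell
#!/usr/bin/env bash
# ^oldword^newword rewrites the previous history entry interactively;
# ${var/old/new} performs the analogous replacement on a string.
cmd='grep oldword /var/log/syslog'   # hypothetical long command line
fixed="${cmd/oldword/newword}"       # first match of 'oldword' replaced
echo "$fixed"                        # -> grep newword /var/log/syslog
```

Using ${var//old/new} (double slash) would replace every occurrence instead of just the first.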
{ "language": "en", "url": "https://stackoverflow.com/questions/28224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Should I be doing JSPX instead of JSP? Using JDeveloper, I started developing a set of web pages for a project at work. Since I didn't know much about JDev at the time, I ran over to Oracle to follow some tutorials. The JDev tutorials recommended doing JSPX instead of JSP, but didn't really explain why. Are you developing JSPX pages? Why did you decide to do so? What are the pros/cons of going the JSPX route?
A: Hello fellow JDeveloper developer! I have been working with JSPX pages for over two years and I never had any problems with them being JSPX as opposed to JSP. The choice for me to go with JSPX was kinda forced since I use JHeadstart to automatically generate ADF Faces pages and by default, JHeadstart generates everything in JSPX. JSPX specifies that the document has to be a well-formed XML document. This allows tools to parse it properly and efficiently. I have heard developers say that this helps your pages be more 'future proof' compared to JSP.
A: As stated in the Spring 3.1 official documentation, "Spring provides a couple of out-of-the-box solutions for JSP and JSTL views." Also you have to think about the fact that JSPX aims to produce pure XML-compliant output. So if your target is HTML5 (which can be XML-compliant, but that increases complexity - see my next comments), you will have some pain achieving your goal if you are using the Eclipse IDE... If your goal is to produce XHTML then go for JSPX and JDeveloper will support you... In one of our company projects we made a POC with both JSP and JSPX and weighed the pros and cons, and my personal recommendation was to use JSP, because we found it much less restrictive and more natural for producing HTML5 in a non-XML way, which is also less restrictive and has a more compact syntax. We prefer to pick something less restrictive and add "best practices" recommendations like "do not put Java scriptlets inside jsp files". (BTW JSPX also allows you to put scriptlets with jsp:scriptlet instead of <% ... %>)
A: The main difference is that a JSPX file (officially called a 'JSP document') may be easier to work with because the requirement for well-formed XML may allow your editor to identify more typos and syntax errors as you type. However, there are also disadvantages. For example, well-formed XML must escape things like less-than signs, so your file could end up with content like:
<script type="text/javascript">
if (number &lt; 0) {
The XML syntax may also be more verbose.
A: JSPX has a few drawbacks, off the top of my head:
* It's hard to generate some kinds of dynamic content, especially an HTML tag with optional attributes (i.e. one form of the tag or another depending on a condition). The standard JSP tags which should solve this problem didn't work properly back in the day I started doing JSPX.
* No more &nbsp; :-p
* You'll really want to put all your JavaScript in separate files (or use CDATA sections, etc.). IMHO, you should be using jQuery anyway, so you really don't need to have onclick, etc. attributes...
* Tools might not work properly; maybe your IDE does not support anything above plain JSP.
* On Tomcat 6.x, at least in the versions/config I tried, the generated output does not have any formatting; just a small annoyance, though.
On the other hand:
* It forces you to write correct XML, which can be manipulated more easily than JSP
* Tools might perform instant validation, catching mistakes sooner
* Simpler syntax, in my humble opinion
A: @Matthew - ADF! The application I'm presently working on has 90% of the presentation layer generated by mod PL/SQL. I started working on a few new screens and wanted to investigate other options that might fit into our architecture, without being too much of a learning burden (increasing the complexity of the system / crashing developers' mental models of the system) on fellow developers on the team. So ADF is how I came across JSPX, too. I saw a "future proof" observation as well... but didn't know how well founded that was.
A: JSPX is also the recommended view technology in Spring MVC / Spring Web Flow.
A: A totally different line of reasoning why you should use JSPX instead of JSP: JSPX and EL make including JavaScript and embedded Java code much harder and much less natural than JSP does. EL is a language specifically tailored for presentation logic. All this pushes you towards a cleaner separation of UI rendering and other logic. The disadvantage of lots of embedded code within a JSP(X) page is that it's virtually impossible to test easily, whereas practicing this separation of concerns makes most of your logic fully unit-testable.
A: Also, another problem I have found with JSPX is when you want to use scriptlets. I agree that clean code is generally good and Java logic in the JSP is generally bad, but there are certain instances where you want to use a utility function to return a string value or something where a TagLib or the model (request attributes) would be overkill. What are everyone's thoughts regarding scriptlets in JSP?
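To make the well-formedness requirement discussed above concrete, a trivial JSP document (.jspx) might look like this sketch. The page content is invented for illustration; the jsp namespace and jsp:directive.page element follow standard JSP document syntax:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns:jsp="http://java.sun.com/JSP/Page">
  <jsp:directive.page contentType="text/html"/>
  <body>
    <!-- EL expressions work as usual -->
    <p>1 + 1 = ${1 + 1}</p>
    <!-- a bare '<' (e.g. in inline JavaScript) would break the XML;
         it must be escaped as &lt; or wrapped in a CDATA section -->
  </body>
</html>
```

An unclosed tag or stray `<` here is a parse error at translation time, which is exactly the early error detection the answers describe, and also the source of the escaping annoyances.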
{ "language": "en", "url": "https://stackoverflow.com/questions/28235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Java Singleton vs static - is there a real performance benefit? I am merging a CVS branch and one of the larger changes is the replacement, wherever it occurs, of a Singleton pattern with abstract classes that have a static initialisation block and all static methods. Is this something that's worth keeping, since it will require merging a lot of conflicts? What sort of situation would I be looking at for this refactoring to be worthwhile? We are running this app under Weblogic 8.1 (so JDK 1.4.2). Sorry Thomas, let me clarify: the HEAD version has the traditional singleton pattern (private constructor, getInstance() etc.); the branch version has no constructor, is a 'public abstract class', and modified all the methods on the object to be 'static'. The code that used to exist in the private constructor is moved into a static block. Then all usages of the class are changed, which causes multiple conflicts in the merge. There are a few cases where this change was made.
A: If my original post was the correct understanding and the discussion from Sun that was linked to is accurate (which I think it might be), then I think you have to make a trade-off between clarity and performance. Ask yourself these questions:
* Does the Singleton object make what I'm doing more clear?
* Do I need an object to do this task or is it more suited to static methods?
* Do I need the performance that I can gain by not using a Singleton?
A: From my experience, the only thing that matters is which one is easier to mock in unit tests. I always felt the Singleton is easier and more natural to mock out. If your organization lets you use JMockit, it doesn't matter, since you can overcome these concerns.
A: From a strict runtime performance point of view, the difference is really negligible. The main difference between the two lies in the fact that the "static" lifecycle is linked to the classloader, whereas for the singleton it's a regular instance lifecycle.
Usually it's better to stay away from the ClassLoader business; you avoid some tricky problems, especially when you try to reload the web application. A: I would use a singleton if it needed to store any state, and static classes otherwise. There's no point in instantiating something, even a single instance, unless it needs to store something. A: Static is bad for extensibility since static methods and fields cannot be extended or overridden by subclasses. It's also bad for unit tests. Within a unit test you cannot keep the side effects of different tests from spilling over, since you cannot control the classloader. Static fields initialized in one unit test will be visible in another, or worse, running tests concurrently will yield unpredictable results. Singleton is generally an OK pattern when used sparingly. I prefer to use a DI framework and let that manage my instances for me (possibly within different scopes, as in Guice). A: Does this discussion help? (I don't know if it's taboo to link to another programming forum, but I'd rather not just quote the whole discussion =) ) Sun Discussion on this subject The verdict seems to be that it doesn't make enough of a difference to matter in most cases, though technically the static methods are more efficient. A: Write some code to measure the performance. The answer is going to be dependent on the JVM (Sun's JDK might perform differently than JRockit) and the VM flags your application uses.
{ "language": "en", "url": "https://stackoverflow.com/questions/28241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: cannot install ruby gems - zlib error I'm trying to install some Ruby gems so I can use Ruby to notify me when I get Twitter messages. However, after doing a gem update --system, I now get a zlib error every time I try a gem install of anything. Below is the console output I get when trying to install Ruby gems (along with the output from gem environment):

C:\data\ruby>gem install twitter
ERROR: While executing gem ... (Zlib::BufError) buffer error

C:\data\ruby>gem update --system
Updating RubyGems
ERROR: While executing gem ... (Zlib::BufError) buffer error

C:\data\ruby>gem environment
RubyGems Environment:
- RUBYGEMS VERSION: 1.2.0
- RUBY VERSION: 1.8.6 (2007-03-13 patchlevel 0) [i386-mswin32]
- INSTALLATION DIRECTORY: c:/ruby/lib/ruby/gems/1.8
- RUBY EXECUTABLE: c:/ruby/bin/ruby.exe
- EXECUTABLE DIRECTORY: c:/ruby/bin
- RUBYGEMS PLATFORMS:
  - ruby
  - x86-mswin32-60
- GEM PATHS:
  - c:/ruby/lib/ruby/gems/1.8
- GEM CONFIGURATION:
  - :update_sources => true
  - :verbose => true
  - :benchmark => false
  - :backtrace => false
  - :bulk_threshold => 1000
- REMOTE SOURCES:
  - http://gems.rubyforge.org/

A: Found it! I had the same problem on Windows (it appeared suddenly without me doing an update, but whatever): it has something to do with multiple conflicting zlib versions (I think). In ruby/lib/ruby/1.8/i386-msvcrt, make sure that there exists a zlib.so file. In my case, it was already there. If not, you may try to install ruby-zlib. Then go to ruby/lib/ruby/site_ruby/1.8/i386-msvcrt and delete the zlib.so file there. In ruby/bin, there should be a zlib1.dll. For some reason my Ruby version did not use this DLL. I downloaded the most recent version (1.2.3) and installed it there. I had to rename it to zlib.dll for it to be used. And tada! Rubygems worked again. Hope this helps. A: Firstly, I thank the person who came up with the solution to the missing zlib problem. (It wasn't me.
:-) Unfortunately I lost the link to the original posting, but the essence of the solution on Linux is to compile Ruby while the zlib header files are available to the Ruby configure script. On Debian that means the zlib development packages have to be installed before one starts to compile Ruby. The rest of my text here does not contain anything new, and you are encouraged to skip it if you feel comfortable customizing your execution environment on UNIX-like operating systems. ------The-start-of-the-HOW-TO------------------------- If one wants to execute a program, let's say irb, from a console, then the file named irb is searched for in folders in an order that is described by an environment variable called PATH. It's possible to see the value of the PATH by typing into a bash shell (and pressing the Enter key):

echo $PATH

For example, if there are 2 versions of irb on the system, one installed by the "official" package management system (let's say yum or apt-get) to /usr/bin/irb, and the other one compiled by the user named scoobydoo and residing in /home/scoobydoo/ourcompiledruby/bin, then the question arises which one of the two irbs gets executed. If one writes to /home/scoobydoo/.bashrc a line like:

export PATH="/home/scoobydoo/ourcompiledruby/bin:/usr/bin"

and restarts the bash shell by closing the terminal window and opening a new one, then typing irb at the console executes /home/scoobydoo/ourcompiledruby/bin/irb. If one wrote

export PATH="/usr/bin:/home/scoobydoo/ourcompiledruby/bin"

to /home/scoobydoo/.bashrc, then /usr/bin/irb would get executed. In practice one wants to write

export PATH="/home/scoobydoo/ourcompiledruby/bin:$PATH"

because this prepends /home/scoobydoo/ourcompiledruby/bin to all of the values that the PATH had prior to this assignment.
Otherwise there will be problems, because not all common tools reside in /usr/bin, and one probably wants to have multiple custom-built applications in use. The same logic applies to libraries, except that the name of the environment variable is LD_LIBRARY_PATH. The use of LD_LIBRARY_PATH and PATH allows ordinary users, who do not have root access or who want to experiment with not-that-trusted software, to build and use programs without needing any root privileges. The rest of this mini-HOWTO assumes that we'll be building our own version of Ruby and using our own version of it almost regardless of what is installed on the system by the distribution's official package management software. 1)============================= First, one creates a few folders and sets the environment variables so that the folders are "useful".

mkdir /home/scoobydoo/ourcompiledruby
mkdir -p /home/scoobydoo/lib/our_gems

One adds the following 2 lines to /home/scoobydoo/.bashrc:

export PATH="/home/scoobydoo/ourcompiledruby/bin:$PATH"
export GEM_HOME="/home/scoobydoo/lib/our_gems"

Restart the bash shell by closing the current terminal window and opening a new one, or by typing bash on the command line of the currently open window. The changes to /home/scoobydoo/.bashrc do not have any effect on terminal windows/sessions that were started prior to the saving of the modified version of /home/scoobydoo/.bashrc. The idea is that /home/scoobydoo/.bashrc is executed automatically at the start of a session, even if one logs on over ssh. 2)============================= Now one makes sure that the zlib development packages are available on the system. As of April 2011 I haven't sorted out the details, but

apt-get install zlibc zlib1g-dev zlib1g

seems to be sufficient on a Debian system. The idea is that both the library file and the header files are available in the system's "official" search path.
Usually apt-get and the like place the header files in /usr/include and library files in /usr/lib. 3)============================= Download and unpack the source tar.gz from http://www.ruby-lang.org and run:

./configure --prefix=/home/scoobydoo/ourcompiledruby
make
make install

4)============================= If a console command like which ruby prints /home/scoobydoo/ourcompiledruby/bin/ruby to the console, then the newly compiled version is the one that gets executed on the command ruby --help. 5)============================= The rest of the programs, gem, irb, etc., can be properly executed by using commands like:

ruby `which gem` install rake
ruby `which irb`

It shouldn't be like that, but as of April 2011 I haven't figured out any more elegant way of doing it. If ruby `which gem` install rake gives the zlib missing error again, then one should just try to figure out how to make the zlib include files and library available to the Ruby configure script and recompile. (Sorry, currently I don't have a better solution to offer.) Maybe a dirty solution might be to add the following lines to /home/scoobydoo/.bashrc:

alias gem="`which ruby` `which gem` "
alias irb="`which ruby` `which irb` "

Actually, I usually use

alias irb="`which ruby` -KU "

but gem should be executed without giving ruby the "-KU" args, because otherwise there will be errors. ------The-end-of-the-HOW-TO------------------------ A: I just started getting this tonight as well. Googling turned up a bunch of suggestions that didn't deliver results: gem update --system, and some pasted-in code from jamis that is supposed to replace a function in package.rb, but the original it is supposed to replace is nowhere to be found. Reinstalling rubygems didn't help. I'm reinstalling ruby right now.........and it is fixed. Pain though. A: How about cd into rubysrc/ext/zlib, then ruby extconf.rb, then make, make install. After doing that, reinstall ruby. I did this on ubuntu 10.04 and was successful.
A: A reinstall of Ruby sorted this issue out. It's not what I wanted; I wanted to know why I was getting the issue, but it's all sorted out. A: It most often shows up when your download failed -- i.e. you have a corrupt gem, due to a network timeout, a faulty manual download, or whatever. Just try again, or download gems manually and point gem at the files. A: If gem update --system does not work and renaming ruby/bin/zlib1.dll to zlib.dll does not help, try this: open the file RUBY_DIR\lib\ruby\site_ruby\1.8\rubygems.rb and replace the existing def self.gunzip(data) with this:

def self.gunzip(data)
  require 'stringio'
  require 'zlib'
  data = StringIO.new data
  # Zlib::GzipReader.new(data).read
  data.read(10) # skip the gzip header
  zis = Zlib::Inflate.new(-Zlib::MAX_WBITS)
  is = StringIO.new(zis.inflate(data.read))
end

A: Try updating ZLib before you do anything else. I had a similar problem on OS X and updating Compress::Zlib (a Perl interface to ZLib) cured it - so I think an old version of ZLib (it's now 1.2.3) may be where your problem lies... A: Install pure-Ruby zlib if all else fails.
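As an aside on the patch above: it works by skipping the fixed 10-byte gzip header and then running a raw (headerless) inflate on the rest of the stream. The same mechanism can be sketched in Python for illustration (the variable names and sample payload here are invented, not from the thread):

```python
# Hypothetical illustration of the trick used by the rubygems patch above:
# skip the fixed 10-byte gzip header, then inflate the remainder as a raw
# deflate stream (negative wbits = no header expected by the inflater).
import gzip
import zlib

payload = b"some gem index data"    # stand-in for real gem data
gzipped = gzip.compress(payload)    # default gzip header is exactly 10 bytes

inflater = zlib.decompressobj(-zlib.MAX_WBITS)
recovered = inflater.decompress(gzipped[10:])  # 8-byte gzip trailer is ignored

print(recovered == payload)   # True
```

This only holds when the gzip header really is the minimal 10 bytes (no filename or extra fields), which is why the shortcut works for the gem index but is not a general-purpose gunzip.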
{ "language": "en", "url": "https://stackoverflow.com/questions/28243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Equation (expression) parser with precedence? I've developed an equation parser using a simple stack algorithm that will handle binary (+, -, |, &, *, /, etc) operators, unary (!) operators, and parenthesis. Using this method, however, leaves me with everything having the same precedence - it's evaluated left to right regardless of operator, although precedence can be enforced using parenthesis. So right now "1+11*5" returns 60, not 56 as one might expect. While this is suitable for the current project, I want to have a general purpose routine I can use for later projects. Edited for clarity: What is a good algorithm for parsing equations with precedence? I'm interested in something simple to implement and understand that I can code myself to avoid licensing issues with available code. Grammar: I don't understand the grammar question - I've written this by hand. It's simple enough that I don't see the need for YACC or Bison. I merely need to calculate strings with equations such as "2+3 * (42/13)". Language: I'm doing this in C, but I'm interested in an algorithm, not a language specific solution. C is low level enough that it'll be easy to convert to another language should the need arise. Code Example I posted the test code for the simple expression parser I was talking about above. The project requirements altered and so I never needed to optimize the code for performance or space as it wasn't incorporated into the project. It's in the original verbose form, and should be readily understandable. If I do anything further with it in terms of operator precedence, I'll probably choose the macro hack because it matches the rest of the program in simplicity. If I ever use this in a real project, though, I'll be going for a more compact/speedy parser. Related question Smart design of a math parser? -Adam A: Have you thought about using Boost Spirit? 
It allows you to write EBNF-like grammars in C++ like this:

group      = '(' >> expression >> ')';
factor     = integer | group;
term       = factor >> *(('*' >> factor) | ('/' >> factor));
expression = term >> *(('+' >> term) | ('-' >> term));

A: The hard way You want a recursive descent parser. To get precedence you need to think recursively; for example, using your sample string, 1+11*5, to do this manually you would have to read the 1, then see the plus and start a whole new recursive parse "session" starting with 11... and make sure to parse the 11 * 5 into its own factor, yielding a parse tree with 1 + (11 * 5). This all feels so painful even to attempt to explain, especially with the added powerlessness of C. See, after parsing the 11, if the * was actually a + instead, you would have to abandon the attempt at making a term and instead parse the 11 itself as a factor. My head is already exploding. It's possible with the recursive descent strategy, but there is a better way... The easy (right) way If you use a GPL tool like Bison, you probably don't need to worry about licensing issues, since the C code generated by bison is not covered by the GPL (IANAL, but I'm pretty sure GPL tools don't force the GPL on generated code/binaries; for example, Apple compiles code like, say, Aperture with GCC and they sell it without having to GPL said code). Download Bison (or something equivalent, ANTLR, etc.). There is usually some sample code that you can just run bison on and get your desired C code that demonstrates this four-function calculator: http://www.gnu.org/software/bison/manual/html_node/Infix-Calc.html Look at the generated code, and see that this is not as easy as it sounds. Also, the advantages of using a tool like Bison are 1) you learn something (especially if you read the Dragon book and learn about grammars), 2) you avoid NIH trying to reinvent the wheel.
With a real parser-generator tool, you actually have a hope of scaling up later, showing other people you know that parsers are the domain of parsing tools. Update: People here have offered much sound advice. My only warning against skipping the parsing tools or just using the Shunting Yard algorithm or a hand-rolled recursive descent parser is that little toy languages [1] may someday turn into big actual languages with functions (sin, cos, log) and variables, conditions and for loops. Flex/Bison may very well be overkill for a small, simple interpreter, but a one-off parser+evaluator may cause trouble down the line when changes need to be made or features need to be added. Your situation will vary and you will need to use your judgement; just don't punish other people for your sins [2] and build a less than adequate tool. My favorite tool for parsing The best tool in the world for the job is the Parsec library (for recursive descent parsers), which comes with the programming language Haskell. It looks a lot like BNF, or like some specialized tool or domain-specific language for parsing (sample code [3]), but it is in fact just a regular library in Haskell, meaning that it compiles in the same build step as the rest of your Haskell code, and you can write arbitrary Haskell code and call that within your parser, and you can mix and match other libraries all in the same code. (Embedding a parsing language like this in a language other than Haskell results in loads of syntactic cruft, by the way. I did this in C# and it works quite well but it is not so pretty and succinct.) Notes: [1] Richard Stallman says, in Why you should not use Tcl: The principal lesson of Emacs is that a language for extensions should not be a mere "extension language". It should be a real programming language, designed for writing and maintaining substantial programs. Because people will want to do that! [2] Yes, I am forever scarred from using that "language".
Also note that when I submitted this entry, the preview was correct, but SO's less than adequate parser ate my close anchor tag on the first paragraph, proving that parsers are not something to be trifled with, because if you use regexes and one-off hacks you will probably get something subtle and small wrong. [3] Snippet of a Haskell parser using Parsec: a four-function calculator extended with exponents, parentheses, whitespace for multiplication, and constants (like pi and e).

aexpr  = expr `chainl1` toOp
expr   = optChainl1 term addop (toScalar 0)
term   = factor `chainl1` mulop
factor = sexpr `chainr1` powop
sexpr  = parens aexpr
     <|> scalar
     <|> ident
powop  = sym "^"  >>= return . (B Pow)
     <|> sym "^-" >>= return . (\x y -> B Pow x (B Sub (toScalar 0) y))
toOp   = sym "->" >>= return . (B To)
mulop  = sym "*"  >>= return . (B Mul)
     <|> sym "/"  >>= return . (B Div)
     <|> sym "%"  >>= return . (B Mod)
     <|> return . (B Mul)
addop  = sym "+"  >>= return . (B Add)
     <|> sym "-"  >>= return . (B Sub)
scalar = number >>= return . toScalar
ident  = literal >>= return . Lit
parens p = do
  lparen
  result <- p
  rparen
  return result

A: As you put your question, there is no need for recursion whatsoever. The answer is three things: postfix notation plus the shunting yard algorithm plus postfix expression evaluation: 1). Postfix notation = invented to eliminate the need for explicit precedence specification. Read more on the net, but here is the gist of it: the infix expression ( 1 + 2 ) * 3, while easy for humans to read and process, is not very efficient for computing via machine. What is? A simple rule that says "rewrite the expression by caching in precedence, then always process it left-to-right". So infix ( 1 + 2 ) * 3 becomes the postfix 12+3*. POSTfix because the operator is placed always AFTER its operands. 2). Evaluating a postfix expression. Easy. Read numbers off the postfix string. Push them on a stack until an operator is seen. Check the operator type - unary? binary? ternary?
Pop as many operands off the stack as needed to evaluate this operator. Evaluate. Push the result back on the stack! And you are almost done. Keep doing so until the stack has only one entry = the value you are looking for. Let's do ( 1 + 2 ) * 3, which in postfix is "12+3*". Read the first number = 1. Push it on the stack. Read next. Number = 2. Push it on the stack. Read next. Operator. Which one? +. What kind? Binary = needs two operands. Pop the stack twice = argright is 2 and argleft is 1. 1 + 2 is 3. Push 3 back on the stack. Read next from the postfix string. It's a number, 3. Push. Read next. Operator. Which one? *. What kind? Binary = needs two numbers -> pop the stack twice. First pop into argright, second time into argleft. Evaluate the operation - 3 times 3 is 9. Push 9 on the stack. Read the next postfix char. It's null. End of input. Pop the stack once = that's your answer. 3). The shunting yard is used to transform a human (easily) readable infix expression into a postfix expression (also human easily readable after some practice). Easy to code manually. See the comments above and the net. A: I would suggest cheating and using the Shunting Yard Algorithm. It's an easy means of writing a simple calculator-type parser and takes precedence into account. If you want to properly tokenise things and have variables, etc. involved, then I would go ahead and write a recursive descent parser as suggested by others here; however, if you simply require a calculator-style parser, then this algorithm should be sufficient :-) A: Another resource for precedence parsing is the Operator-precedence parser entry on Wikipedia.
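The two steps walked through in the postfix answer above (shunting yard to turn infix into postfix, then a single stack to evaluate the postfix form) fit in a few lines. Here is a hypothetical sketch in Python (the thread is language-agnostic; all names are invented), handling only left-associative binary + - * / and parentheses, with integer division standing in for /:

```python
# A hypothetical sketch (not from the thread): infix -> postfix via the
# shunting yard algorithm, then stack-based evaluation of the postfix form.
import re

PREC = {'+': 1, '-': 1, '*': 2, '/': 2}   # higher number binds tighter

def to_postfix(s):
    out, ops = [], []
    for tok in re.findall(r'\d+|[()+\-*/]', s):
        if tok.isdigit():
            out.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':          # flush until the matching paren
                out.append(ops.pop())
            ops.pop()                      # discard the '(' itself
        else:
            # left-associative: pop operators of >= precedence first
            while ops and ops[-1] != '(' and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
    while ops:
        out.append(ops.pop())
    return out

def eval_postfix(rpn):
    stack = []
    for tok in rpn:
        if tok.isdigit():
            stack.append(int(tok))
        else:
            b, a = stack.pop(), stack.pop()   # right operand comes off first
            if tok == '+':   stack.append(a + b)
            elif tok == '-': stack.append(a - b)
            elif tok == '*': stack.append(a * b)
            else:            stack.append(a // b)   # integer division for the sketch
    return stack[0]

print(eval_postfix(to_postfix('1+11*5')))   # 56, not 60
```

With this split, precedence lives entirely in the conversion step; the evaluator itself stays a dumb left-to-right loop, which is exactly the appeal of postfix.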
Covers Dijkstra's shunting yard algorithm, and a tree alternate algorithm, but more notably covers a really simple macro replacement algorithm that can be trivially implemented in front of any precedence-ignorant parser:

#include <stdio.h>

int main(int argc, char *argv[]){
    printf("((((");
    for(int i=1;i!=argc;i++){
        if(argv[i][0] && !argv[i][1]){
            switch(argv[i][0]){
                case '^': printf(")^(");     continue;
                case '*': printf("))*((");   continue;
                case '/': printf("))/((");   continue;
                case '+': printf(")))+((("); continue;
                case '-': printf(")))-((("); continue;
            }
        }
        printf("%s", argv[i]);
    }
    printf("))))\n");
    return 0;
}

Invoke it as:

$ cc -o parenthesise parenthesise.c
$ ./parenthesise a \* b + c ^ d / e
((((a))*((b)))+(((c)^(d))/((e))))

Which is awesome in its simplicity, and very understandable. A: Is there a language you want to use? ANTLR will let you do this from a Java perspective. Adrian Kuhn has an excellent writeup on how to write an executable grammar in Ruby; in fact, his example is almost exactly your arithmetic expression example. A: It depends on how "general" you want it to be. If you want it to be really, really general, such as being able to parse mathematical functions as well, like sin(4+5)*cos(7^3), you will probably need a parse tree, in which case I do not think that a complete implementation is appropriate to paste here. I'd suggest that you check out one of the infamous "Dragon books". But if you just want precedence support, then you could do that by first converting the expression to postfix form, for which an algorithm that you can copy-and-paste should be available from Google, or I think you can code it up yourself with a binary tree. When you have it in postfix form, then it's a piece of cake from then on, since you already understand how the stack helps. A: I found this on the PIClist about the Shunting Yard algorithm: Harold writes: I remember reading, a long time ago, of an algorithm that converted algebraic expressions to RPN for easy evaluation.
Each infix value or operator or parenthesis was represented by a railroad car on a track. One type of car split off to another track and the other continued straight ahead. I don't recall the details (obviously!), but always thought it would be interesting to code. This is back when I was writing 6800 (not 68000) assembly code. This is the "shunting yard algorithm" and it is what most machine parsers use. See the article on parsing in Wikipedia. An easy way to code the shunting yard algorithm is to use two stacks. One is the "push" stack and the other the "reduce" or "result" stack. Example:

pstack = ()   // empty
rstack = ()
input: 1+2*3
precedence = 10   // lowest
reduce = 0        // don't reduce

start:
token '1': isnumber, put in pstack (push)
token '+': isoperator
    set precedence=2
    if precedence < previous_operator_precedence
        then reduce()   // see below
    put '+' in pstack (push)
token '2': isnumber, put in pstack (push)
token '*': isoperator, set precedence=1, put in pstack (push)
    // check precedence as above
token '3': isnumber, put in pstack (push)
end of input, need to reduce (goal is empty pstack)
reduce()
// done

To reduce, pop elements from the push stack and put them into the result stack, always swapping the top 2 items on pstack if they are of the form 'operator' 'number':

pstack: '1' '+' '2' '*' '3'   rstack: ()
...
pstack: ()                    rstack: '3' '2' '*' '1' '+'

If the expression would have been 1*2+3, then the reduce trigger would have been the reading of the token '+', which has lower precedence than the '*' already pushed, so it would have done:

pstack: '1' '*' '2'   rstack: ()
...
pstack: ()            rstack: '1' '2' '*'

and then pushed '+' and then '3' and then finally reduced:

pstack: '+' '3'   rstack: '1' '2' '*'
...
pstack: ()        rstack: '1' '2' '*' '3' '+'

So the short version is: push numbers; when pushing operators, check the precedence of the previous operator. If it was higher than the operator that is to be pushed now, first reduce, then push the current operator.
To handle parens, simply save the precedence of the 'previous' operator, and put a mark on the pstack that tells the reduce algorithm to stop reducing when solving the inside of a paren pair. The closing paren triggers a reduction, as does the end of input, and also removes the open-paren mark from the pstack, and restores the 'previous operation' precedence so parsing can continue after the close paren where it left off. This can be done with recursion or without (hint: use a stack to store the previous precedence when encountering a '(' ...). The generalized version of this is to use a parser-generator-implemented shunting yard algorithm, e.g. using yacc or bison or taccle (a tcl analog of yacc). Peter -Adam A: I have posted source for an ultra-compact (1 class, < 10 KiB) Java Math Evaluator on my web site. This is a recursive descent parser of the type that caused the cranial explosion for the poster of the accepted answer. It supports full precedence, parentheses, named variables and single-argument functions. A: I released an expression parser based on Dijkstra's Shunting Yard algorithm, under the terms of the Apache License 2.0: http://projects.congrace.de/exp4j/index.html A: http://www.engr.mun.ca/~theo/Misc/exp_parsing.htm Very good explanation of different approaches:
* Recursive-descent recognition
* The shunting yard algorithm
* The classic solution
* Precedence climbing
Written in simple language and pseudo-code. I like the 'precedence climbing' one. A: I've implemented a recursive descent parser in Java in the MathEclipse Parser project. It could also be used as a Google Web Toolkit module. A: I'm currently working on a series of articles building a regular expression parser as a learning tool for design patterns and readable programming. You can take a look at readablecode. The article presents a clear use of the shunting yard algorithm. A: I wrote an expression parser in F# and blogged about it here.
It uses the shunting yard algorithm, but instead of converting from infix to RPN, I added a second stack to accumulate the results of calculations. It correctly handles operator precedence, but doesn't support unary operators. I wrote this to learn F#, not to learn expression parsing, though. A: A Python solution using pyparsing can be found here. Parsing infix notation with various operators with precedence is fairly common, and so pyparsing also includes the infixNotation (formerly operatorPrecedence) expression builder. With it you can easily define boolean expressions using "AND", "OR", "NOT", for example. Or you can expand your four-function arithmetic to use other operators, such as ! for factorial, or '%' for modulus, or add P and C operators to compute permutations and combinations. You could write an infix parser for matrix notation that includes handling of '-1' or 'T' operators (for inversion and transpose). The operatorPrecedence example of a 4-function parser (with '!' thrown in for fun) is here and a more fully featured parser and evaluator is here. A: The shunting yard algorithm is the right tool for this. Wikipedia is really confusing about this, but basically the algorithm works like this: Say you want to evaluate 1 + 2 * 3 + 4. Intuitively, you "know" you have to do the 2 * 3 first, but how do you get this result? The key is to realize that when you're scanning the string from left to right, you will evaluate an operator when the operator that follows it has a lower (or equal) precedence. In the context of the example, here's what you want to do:
* Look at: 1 + 2, don't do anything.
* Now look at 1 + 2 * 3, still don't do anything.
* Now look at 1 + 2 * 3 + 4; now you know that 2 * 3 has to be evaluated because the next operator has lower precedence.
How do you implement this? You want to have two stacks, one for numbers, and another for operators. You push numbers onto the stack all the time.
You compare each new operator with the one at the top of the stack; if the one on top of the stack has higher priority, you pop it off the operator stack, pop the operands off the number stack, apply the operator and push the result onto the number stack. Now you repeat the comparison with the top-of-stack operator. Coming back to the example, it works like this: N = [ ] Ops = [ ]
* Read 1. N = [1], Ops = [ ]
* Read +. N = [1], Ops = [+]
* Read 2. N = [1 2], Ops = [+]
* Read *. N = [1 2], Ops = [+ *]
* Read 3. N = [1 2 3], Ops = [+ *]
* Read +. N = [1 2 3], Ops = [+ *]
  * Pop 3, 2 and execute 2*3, and push the result onto N. N = [1 6], Ops = [+]
  * + is left-associative, so you want to pop 1, 6 off as well and execute the +. N = [7], Ops = [].
  * Finally push the + onto the operator stack. N = [7], Ops = [+].
* Read 4. N = [7 4]. Ops = [+].
* You've run out of input, so you want to empty the stacks now. Upon which you will get the result 11.
There, that's not so difficult, is it? And it makes no invocations to any grammars or parser generators. A: There's a nice article here about combining a simple recursive-descent parser with operator-precedence parsing. If you've been recently writing parsers, it should be very interesting and instructive to read. A: A long time ago, I made up my own parsing algorithm that I couldn't find in any books on parsing (like the Dragon Book). Looking at the pointers to the Shunting Yard algorithm, I do see the resemblance. About 2 years ago, I made a post about it, complete with Perl source code, on http://www.perlmonks.org/?node_id=554516. It's easy to port to other languages: the first implementation I did was in Z80 assembler. It's ideal for direct calculation with numbers, but you can use it to produce a parse tree if you must. Update Because more people can read (or run) Javascript, I've reimplemented my parser in Javascript, after the code has been reorganized.
The whole parser is under 5k of Javascript code (about 100 lines for the parser, 15 lines for a wrapper function) including error reporting, and comments. You can find a live demo at http://users.telenet.be/bartl/expressionParser/expressionParser.html.

// operator table
var ops = {
  '+'  : {op: '+',  precedence: 10, assoc: 'L', exec: function(l,r) { return l+r; } },
  '-'  : {op: '-',  precedence: 10, assoc: 'L', exec: function(l,r) { return l-r; } },
  '*'  : {op: '*',  precedence: 20, assoc: 'L', exec: function(l,r) { return l*r; } },
  '/'  : {op: '/',  precedence: 20, assoc: 'L', exec: function(l,r) { return l/r; } },
  '**' : {op: '**', precedence: 30, assoc: 'R', exec: function(l,r) { return Math.pow(l,r); } }
};

// constants or variables
var vars = { e: Math.exp(1), pi: Math.atan2(1,1)*4 };

// input for parsing
// var r = { string: '123.45+33*8', offset: 0 };
// r is passed by reference: any change in r.offset is returned to the caller
// functions return the parsed/calculated value
function parseVal(r) {
  var startOffset = r.offset;
  var value;
  var m;
  // floating point number
  // example of parsing ("lexing") without aid of regular expressions
  value = 0;
  while("0123456789".indexOf(r.string.substr(r.offset, 1)) >= 0 && r.offset < r.string.length)
    r.offset++;
  if(r.string.substr(r.offset, 1) == ".") {
    r.offset++;
    while("0123456789".indexOf(r.string.substr(r.offset, 1)) >= 0 && r.offset < r.string.length)
      r.offset++;
  }
  if(r.offset > startOffset) { // did that work?
    // OK, so I'm lazy...
    return parseFloat(r.string.substr(startOffset, r.offset-startOffset));
  } else if(r.string.substr(r.offset, 1) == "+") { // unary plus
    r.offset++;
    return parseVal(r);
  } else if(r.string.substr(r.offset, 1) == "-") { // unary minus
    r.offset++;
    return negate(parseVal(r));
  } else if(r.string.substr(r.offset, 1) == "(") { // expression in parens
    r.offset++; // eat "("
    value = parseExpr(r);
    if(r.string.substr(r.offset, 1) == ")") {
      r.offset++;
      return value;
    }
    r.error = "Parsing error: ')' expected";
    throw 'parseError';
  } else if(m = /^[a-z_][a-z0-9_]*/i.exec(r.string.substr(r.offset))) { // variable/constant name
    // sorry for the regular expression, but I'm too lazy to manually build a varname lexer
    var name = m[0]; // matched string
    r.offset += name.length;
    if(name in vars) return vars[name]; // I know that thing!
    r.error = "Semantic error: unknown variable '" + name + "'";
    throw 'unknownVar';
  } else {
    if(r.string.length == r.offset) {
      r.error = 'Parsing error at end of string: value expected';
      throw 'valueMissing';
    } else {
      r.error = "Parsing error: unrecognized value";
      throw 'valueNotParsed';
    }
  }
}

function negate(value) {
  return -value;
}

function parseOp(r) {
  if(r.string.substr(r.offset,2) == '**') {
    r.offset += 2;
    return ops['**'];
  }
  if("+-*/".indexOf(r.string.substr(r.offset,1)) >= 0)
    return ops[r.string.substr(r.offset++, 1)];
  return null;
}

function parseExpr(r) {
  var stack = [{precedence: 0, assoc: 'L'}];
  var op;
  var value = parseVal(r); // first value on the left
  for(;;){
    op = parseOp(r) || {precedence: 0, assoc: 'L'};
    while(op.precedence < stack[stack.length-1].precedence ||
          (op.precedence == stack[stack.length-1].precedence && op.assoc == 'L')) {
      // precedence of op is too low, calculate with what we've got on the left, first
      var tos = stack.pop();
      if(!tos.exec) return value; // end reached
      // do the calculation ("reduce"), producing a new value
      value = tos.exec(tos.value, value);
    }
    // store on stack and continue parsing ("shift")
    stack.push({op: op.op, precedence: op.precedence, assoc: op.assoc, exec: op.exec, value: value});
    value = parseVal(r); // value on the right
  }
}

function parse(string) { // wrapper
  var r = {string: string, offset: 0};
  try {
    var value = parseExpr(r);
    if(r.offset < r.string.length){
      r.error = 'Syntax error: junk found at offset ' + r.offset;
      throw 'trailingJunk';
    }
    return value;
  } catch(e) {
    alert(r.error + ' (' + e + '):\n' + r.string.substr(0, r.offset) + '<*>' + r.string.substr(r.offset));
    return;
  }
}

A: It would help if you could describe the grammar you are currently using to parse. Sounds like the problem might lie there! Edit: The fact that you don't understand the grammar question and that 'you've written this by hand' very likely explains why you're having problems with expressions of the form '1+11*5' (i.e., with operator precedence). Googling for 'grammar for arithmetic expressions', for example, should yield some good pointers. Such a grammar need not be complicated:

<Exp> ::= <Exp> + <Term> | <Exp> - <Term> | <Term>
<Term> ::= <Term> * <Factor> | <Term> / <Factor> | <Factor>
<Factor> ::= x | y | ... | ( <Exp> ) | - <Factor> | <Number>

would do the trick for example, and can be trivially augmented to take care of some more complicated expressions (including functions for example, or powers, ...). I suggest you have a look at this thread, for example. Almost all introductions to grammars/parsing treat arithmetic expressions as an example. Note that using a grammar does not at all imply using a specific tool (a la Yacc, Bison, ...). Indeed, you most certainly are already using the following grammar:

<Exp> ::= <Leaf> | <Exp> <Op> <Leaf>
<Op> ::= + | - | * | /
<Leaf> ::= <Number> | (<Exp>)

(or something of the kind) without knowing it! A: I know this is a late answer, but I've just written a tiny parser that allows all operators (prefix, postfix and infix-left, infix-right and nonassociative) to have arbitrary precedence.
I'm going to expand this for a language with arbitrary DSL support, but I just wanted to point out that one doesn't need custom parsers for operator precedence, one can use a generalized parser that doesn't need tables at all, and just looks up the precedence of each operator as it appears. People have been mentioning custom Pratt parsers or shunting yard parsers that can accept illegal inputs - this one doesn't need to be customized and (unless there's a bug) won't accept bad input. It isn't complete in a sense, it was written to test the algorithm and its input is in a form that will need some preprocessing, but there are comments that make it clear. Note some common kinds of operators are missing for instance the sort of operator used for indexing ie table[index] or calling a function function(parameter-expression, ...) I'm going to add those, but think of both as postfix operators where what comes between the delimeters '[' and ']' or '(' and ')' is parsed with a different instance of the expression parser. Sorry to have left that out, but the postfix part is in - adding the rest will probably almost double the size of the code. Since the parser is just 100 lines of racket code, perhaps I should just paste it here, I hope this isn't longer than stackoverflow allows. A few details on arbitrary decisions: If a low precedence postfix operator is competing for the same infix blocks as a low precedence prefix operator the prefix operator wins. This doesn't come up in most languages since most don't have low precedence postfix operators. - for instance: ((data a) (left 1 +) (pre 2 not)(data b)(post 3 !) (left 1 +) (data c)) is a+not b!+c where not is a prefix operator and ! 
is postfix operator and both have lower precedence than + so they want to group in incompatible ways either as (a+not b!)+c or as a+(not b!+c) in these cases the prefix operator always wins, so the second is the way it parses Nonassociative infix operators are really there so that you don't have to pretend that operators that return different types than they take make sense together, but without having different expression types for each it's a kludge. As such, in this algorithm, non-associative operators refuse to associate not just with themselves but with any operator with the same precedence. That's a common case as < <= == >= etc don't associate with each other in most languages. The question of how different kinds of operators (left, prefix etc) break ties on precedence is one that shouldn't come up, because it doesn't really make sense to give operators of different types the same precedence. This algorithm does something in those cases, but I'm not even bothering to figure out exactly what because such a grammar is a bad idea in the first place. #lang racket ;cool the algorithm fits in 100 lines! (define MIN-PREC -10000) ;format (pre prec name) (left prec name) (right prec name) (nonassoc prec name) (post prec name) (data name) (grouped exp) ;for example "not a*-7+5 < b*b or c >= 4" ;which groups as: not ((((a*(-7))+5) < (b*b)) or (c >= 4))" ;is represented as '((pre 0 not)(data a)(left 4 *)(pre 5 -)(data 7)(left 3 +)(data 5)(nonassoc 2 <)(data b)(left 4 *)(data b)(right 1 or)(data c)(nonassoc 2 >=)(data 4)) ;higher numbers are higher precedence ;"(a+b)*c" is represented as ((grouped (data a)(left 3 +)(data b))(left 4 *)(data c)) (struct prec-parse ([data-stack #:mutable #:auto] [op-stack #:mutable #:auto]) #:auto-value '()) (define (pop-data stacks) (let [(data (car (prec-parse-data-stack stacks)))] (set-prec-parse-data-stack! 
stacks (cdr (prec-parse-data-stack stacks))) data)) (define (pop-op stacks) (let [(op (car (prec-parse-op-stack stacks)))] (set-prec-parse-op-stack! stacks (cdr (prec-parse-op-stack stacks))) op)) (define (push-data! stacks data) (set-prec-parse-data-stack! stacks (cons data (prec-parse-data-stack stacks)))) (define (push-op! stacks op) (set-prec-parse-op-stack! stacks (cons op (prec-parse-op-stack stacks)))) (define (process-prec min-prec stacks) (let [(op-stack (prec-parse-op-stack stacks))] (cond ((not (null? op-stack)) (let [(op (car op-stack))] (cond ((>= (cadr op) min-prec) (apply-op op stacks) (set-prec-parse-op-stack! stacks (cdr op-stack)) (process-prec min-prec stacks)))))))) (define (process-nonassoc min-prec stacks) (let [(op-stack (prec-parse-op-stack stacks))] (cond ((not (null? op-stack)) (let [(op (car op-stack))] (cond ((> (cadr op) min-prec) (apply-op op stacks) (set-prec-parse-op-stack! stacks (cdr op-stack)) (process-nonassoc min-prec stacks)) ((= (cadr op) min-prec) (error "multiply applied non-associative operator")) )))))) (define (apply-op op stacks) (let [(op-type (car op))] (cond ((eq? op-type 'post) (push-data! stacks `(,op ,(pop-data stacks) ))) (else ;assume infix (let [(tos (pop-data stacks))] (push-data! stacks `(,op ,(pop-data stacks) ,tos))))))) (define (finish input min-prec stacks) (process-prec min-prec stacks) input ) (define (post input min-prec stacks) (if (null? input) (finish input min-prec stacks) (let* [(cur (car input)) (input-type (car cur))] (cond ((eq? input-type 'post) (cond ((< (cadr cur) min-prec) (finish input min-prec stacks)) (else (process-prec (cadr cur)stacks) (push-data! stacks (cons cur (list (pop-data stacks)))) (post (cdr input) min-prec stacks)))) (else (let [(handle-infix (lambda (proc-fn inc) (cond ((< (cadr cur) min-prec) (finish input min-prec stacks)) (else (proc-fn (+ inc (cadr cur)) stacks) (push-op! stacks cur) (start (cdr input) min-prec stacks)))))] (cond ((eq? 
input-type 'left) (handle-infix process-prec 0)) ((eq? input-type 'right) (handle-infix process-prec 1)) ((eq? input-type 'nonassoc) (handle-infix process-nonassoc 0)) (else error "post op, infix op or end of expression expected here")))))))) ;alters the stacks and returns the input (define (start input min-prec stacks) (if (null? input) (error "expression expected") (let* [(cur (car input)) (input-type (car cur))] (set! input (cdr input)) ;pre could clearly work with new stacks, but could it reuse the current one? (cond ((eq? input-type 'pre) (let [(new-stack (prec-parse))] (set! input (start input (cadr cur) new-stack)) (push-data! stacks (cons cur (list (pop-data new-stack)))) ;we might want to assert here that the cdr of the new stack is null (post input min-prec stacks))) ((eq? input-type 'data) (push-data! stacks cur) (post input min-prec stacks)) ((eq? input-type 'grouped) (let [(new-stack (prec-parse))] (start (cdr cur) MIN-PREC new-stack) (push-data! stacks (pop-data new-stack))) ;we might want to assert here that the cdr of the new stack is null (post input min-prec stacks)) (else (error "bad input")))))) (define (op-parse input) (let [(stacks (prec-parse))] (start input MIN-PREC stacks) (pop-data stacks))) (define (main) (op-parse (read))) (main) A: Here is a simple case recursive solution written in Java. 
Note it does not handle negative numbers, but you can add that if you want to:

public class ExpressionParser {
    public double eval(String exp){
        int bracketCounter = 0;
        int operatorIndex = -1;
        for(int i=0; i<exp.length(); i++){
            char c = exp.charAt(i);
            if(c == '(') bracketCounter++;
            else if(c == ')') bracketCounter--;
            else if((c == '+' || c == '-') && bracketCounter == 0){
                operatorIndex = i;
                break;
            }
            else if((c == '*' || c == '/') && bracketCounter == 0 && operatorIndex < 0){
                operatorIndex = i;
            }
        }
        if(operatorIndex < 0){
            exp = exp.trim();
            if(exp.charAt(0) == '(' && exp.charAt(exp.length()-1) == ')')
                return eval(exp.substring(1, exp.length()-1));
            else
                return Double.parseDouble(exp);
        }
        else{
            switch(exp.charAt(operatorIndex)){
                case '+':
                    return eval(exp.substring(0, operatorIndex)) + eval(exp.substring(operatorIndex+1));
                case '-':
                    return eval(exp.substring(0, operatorIndex)) - eval(exp.substring(operatorIndex+1));
                case '*':
                    return eval(exp.substring(0, operatorIndex)) * eval(exp.substring(operatorIndex+1));
                case '/':
                    return eval(exp.substring(0, operatorIndex)) / eval(exp.substring(operatorIndex+1));
            }
        }
        return 0;
    }
}

A: The algorithm can easily be encoded in C as a recursive descent parser.

#include <stdio.h>
#include <ctype.h>

/*
 * expression -> sum
 * sum        -> product | product "+" sum
 * product    -> term | term "*" product
 * term       -> number | expression
 * number     -> [0..9]+
 */

typedef struct {
    int value;
    const char* context;
} expression_t;

expression_t expression(int value, const char* context) {
    return (expression_t) { value, context };
}

/* begin: parsers */
expression_t eval_expression(const char* symbols);

expression_t eval_number(const char* symbols) {
    // number -> [0..9]+
    double number = 0;
    while (isdigit(*symbols)) {
        number = 10 * number + (*symbols - '0');
        symbols++;
    }
    return expression(number, symbols);
}

expression_t eval_term(const char* symbols) {
    // term -> number | expression
    expression_t number = eval_number(symbols);
    return number.context != symbols ? number : eval_expression(symbols);
}

expression_t eval_product(const char* symbols) {
    // product -> term | term "*" product
    expression_t term = eval_term(symbols);
    if (*term.context != '*') return term;
    expression_t product = eval_product(term.context + 1);
    return expression(term.value * product.value, product.context);
}

expression_t eval_sum(const char* symbols) {
    // sum -> product | product "+" sum
    expression_t product = eval_product(symbols);
    if (*product.context != '+') return product;
    expression_t sum = eval_sum(product.context + 1);
    return expression(product.value + sum.value, sum.context);
}

expression_t eval_expression(const char* symbols) {
    // expression -> sum
    return eval_sum(symbols);
}
/* end: parsers */

int main() {
    const char* expression = "1+11*5";
    printf("eval(\"%s\") == %d\n", expression, eval_expression(expression).value);
    return 0;
}

The following libraries might also be useful:
yupana - strictly arithmetic operations;
tinyexpr - arithmetic operations + C math functions + one provided by user;
mpc - parser combinators.

Explanation

Let's capture a sequence of symbols that represents an algebraic expression. The first one is a number: a decimal digit repeated one or more times. We will refer to such notation as a production rule.

number -> [0..9]+

The addition operator with its operands is another rule. It is either a number or any sequence of symbols of the form sum "+" sum.

sum -> number | sum "+" sum

Try substituting number into sum "+" sum: that gives number "+" number, which in turn can be expanded into [0..9]+ "+" [0..9]+, which finally can be reduced to 1+8, a correct addition expression. Other substitutions will also produce correct expressions:

sum "+" sum -> number "+" sum -> number "+" sum "+" sum -> number "+" sum "+" number -> number "+" number "+" number -> 12+3+5

Bit by bit we can assemble a set of production rules (a grammar) that expresses all possible algebraic expressions.

expression -> sum
sum        -> difference | difference "+" sum
difference -> product | difference "-" product
product    -> fraction | fraction "*" product
fraction   -> term | fraction "/" term
term       -> "(" expression ")" | number
number     -> digit+

To control operator precedence, alter the position of an operator's production rule relative to the others. Look at the grammar above and note that the production rule for * is placed below the one for +; this forces product to be evaluated before sum. The implementation just combines pattern recognition with evaluation and thus closely mirrors the production rules.

expression_t eval_product(const char* symbols) {
    // product -> term | term "*" product
    expression_t term = eval_term(symbols);
    if (*term.context != '*') return term;
    expression_t product = eval_product(term.context + 1);
    return expression(term.value * product.value, product.context);
}

Here we evaluate term first and return it if there is no * character after it (the left choice in our production rule); otherwise we evaluate the symbols after it and return term.value * product.value (the right choice, i.e. term "*" product).

A: Actually there's a way to do this without recursion, which allows you to go through the entire expression once, character by character. This is O(n) for time and space. It takes all of 5 milliseconds to run even for a medium-sized expression.

First, you'd want to do a check to ensure that your parens are balanced. I'm not doing it here for simplicity. Also, I'm acting as if this were a calculator. Calculators do not apply precedence unless you wrap an expression in parens.

I'm using two stacks, one for the operands and another for the operators. I increase the priority of the operation whenever I reach an opening '(' paren and decrease the priority whenever I reach a closing ')' paren. I've even revised the code to add in numbers with decimals. This is in C#.

NOTE: This doesn't work for signed numbers like negative numbers. It's probably just a simple revision.
internal double Compute(string sequence)
{
    int priority = 0;
    int sequenceCount = sequence.Length;
    for (int i = 0; i < sequenceCount; i++)
    {
        char s = sequence[i];
        if (Char.IsDigit(s))
        {
            double value = ParseNextNumber(sequence, i);
            numberStack.Push(value);
            i = i + value.ToString().Length - 1;
        }
        else if (s == '+' || s == '-' || s == '*' || s == '/')
        {
            Operator op = ParseNextOperator(sequence, i, priority);
            CollapseTop(op, numberStack, operatorStack);
            operatorStack.Push(op);
        }
        if (s == '(')
        {
            priority++;
            continue;
        }
        else if (s == ')')
        {
            priority--;
            continue;
        }
    }
    if (priority != 0)
    {
        throw new ApplicationException("Parens not balanced");
    }
    CollapseTop(new Operator(' ', 0), numberStack, operatorStack);
    if (numberStack.Count == 1 && operatorStack.Count == 0)
    {
        return numberStack.Pop();
    }
    return 0;
}

Then to test this out:

Calculator c = new Calculator();
double value = c.Compute("89.8+((9*3)+8)+(9*2)+1");
Console.WriteLine(string.Format("The sum of the expression is: {0}", (float)value));
//prints out The sum of the expression is: 143.8

A: Pure javascript, no dependencies needed

I really like bart's answer, and I made some modifications to make it easier to read; it also adds support for some functions (and is easily extended).

function Parse(str) {
    try {
        return parseExpr(str.replaceAll(" ", "")) // Implement? See full code.
} catch (e) { alert(e.message) } } Parse("123.45+3*22*4") It can support as below const testArray = [ // Basic Test ["(3+5)*4", ""], ["123.45+3*22*4", ""], ["8%2", ""], ["8%3", ""], ["7/3", ""], ["2*pi*e", 2 * Math.atan2(0, -1) * Math.exp(1)], ["2**3", ""], // unary Test ["3+(-5)", ""], ["3+(+5)", ""], // Function Test ["pow{2,3}*2", 16], ["4*sqrt{16}", 16], ["round{3.4}", 3], ["round{3.5}", 4], ["((1+e)*3/round{3.5})%2", ((1 + Math.exp(1)) * 3 / Math.round(3.5)) % 2], ["round{3.5}+pow{2,3}", Math.round(3.5)+Math.pow(2,3)], ] Full code // Main (() => { window.onload = () => { const nativeConsoleLogFunc = window.console.error window.console.error = (...data) => { // Override native function, just for test. const range = document.createRange() const frag = range.createContextualFragment(`<div>${data}</div>`) document.querySelector("body").append(frag) nativeConsoleLogFunc(...data) } // Add Enter event document.querySelector(`input`).onkeyup = (keyboardEvent) => { if (keyboardEvent.key === "Enter") { const result = Parse(document.getElementById('expr').value) if (result !== undefined) { alert(result) } } } const testArray = [ // Basic Test ["(3+5)*4", ""], ["123.45+3*22*4", ""], ["8%2", ""], ["8%3", ""], ["7/3", ""], ["2*pi*e", 2 * Math.atan2(0, -1) * Math.exp(1)], ["2**3", ""], // unary ["3+(-5)", ""], ["3+(+5)", ""], // Function Test ["pow{2,3}*2", 16], ["4*sqrt{16}", 16], ["round{3.4}", 3], ["round{3.5}", 4], ["((1+e)*3/round{3.5})%2", ((1 + Math.exp(1)) * 3 / Math.round(3.5)) % 2], ["round{3.5}+pow{2,3}", Math.round(3.5) + Math.pow(2, 3)], // error test ["21+", ValueMissingError], ["21+*", ParseError], ["(1+2", ParseError], // miss ")" ["round(3.12)", MissingParaError], // should be round{3.12} ["help", UnknownVarError], ] for (let [testString, expected] of testArray) { if (expected === "") { expected = eval(testString) // Why don't you use eval instead of writing the function yourself? Because the browser may disable eval due to policy considerations. 
[CSP](https://content-security-policy.com/) } const actual = Parse(testString, false) if (actual !== expected) { if (actual instanceof Error && actual instanceof expected) { continue } console.error(`${testString} = ${actual}, value <code>${expected}</code> expected`) } } } })() // Script class UnknownVarError extends Error { } class ValueMissingError extends Error { } class ParseError extends Error { } class MissingParaError extends Error { } /** * @description Operator * @param {string} sign "+", "-", "*", "/", ... * @param {number} precedence * @param {"L"|"R"} assoc associativity left or right * @param {function} exec * */ function Op(sign, precedence, assoc, exec = undefined) { this.sign = sign this.precedence = precedence this.assoc = assoc this.exec = exec } const OpArray = [ new Op("+", 10, "L", (l, r) => l + r), new Op("-", 10, "L", (l, r) => l - r), new Op("*", 20, "L", (l, r) => l * r), new Op("/", 20, "L", (l, r) => l / r), new Op("%", 20, "L", (l, r) => l % r), new Op("**", 30, "R", (l, r) => Math.pow(l, r)) ] const VarTable = { e: Math.exp(1), pi: Math.atan2(0, -1), // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/atan2 pow: (x, y) => Math.pow(x, y), sqrt: (x) => Math.sqrt(x), round: (x) => Math.round(x), } /** * @param {Op} op * @param {Number} value * */ function Item(op, value = undefined) { this.op = op this.value = value } class Stack extends Array { constructor(...items) { super(...items) this.push(new Item(new Op("", 0, "L"))) } GetLastItem() { return this[this.length - 1] // fast then pop // https://stackoverflow.com/a/61839489/9935654 } } function Cursor(str, pos) { this.str = str this.pos = pos this.MoveRight = (step = 1) => { this.pos += step } this.PeekRightChar = (step = 1) => { return this.str.substring(this.pos, this.pos + step) } /** * @return {Op} * */ this.MoveToNextOp = () => { const opArray = OpArray.sort((a, b) => b.precedence - a.precedence) for (const op of opArray) { const sign = 
this.PeekRightChar(op.sign.length) if (op.sign === sign) { this.MoveRight(op.sign.length) return op } } return null } } /** * @param {Cursor} cursor * */ function parseVal(cursor) { let startOffset = cursor.pos const regex = /^(?<OpOrVar>[^\d.])?(?<Num>[\d.]*)/g const m = regex.exec(cursor.str.substr(startOffset)) if (m) { const {groups: {OpOrVar, Num}} = m if (OpOrVar === undefined && Num) { cursor.pos = startOffset + Num.length if (cursor.pos > startOffset) { return parseFloat(cursor.str.substring(startOffset, startOffset + cursor.pos - startOffset)) // do not use string.substr() // It will be removed in the future. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Deprecated_and_obsolete_features#string_methods } } if ("+-(".indexOf(OpOrVar) !== -1) { cursor.pos++ switch (OpOrVar) { case "+": // unary plus, for example: (+5) return parseVal(cursor) case "-": return -(parseVal(cursor)) case "(": const value = parseExpr(cursor) if (cursor.PeekRightChar() === ")") { cursor.MoveRight() return value } throw new ParseError("Parsing error: ')' expected") } } } // below is for Variable or Function const match = cursor.str.substring(cursor.pos).match(/^[a-z_][a-z0-9_]*/i) // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/match if (match) { // Variable const varName = match[0] cursor.MoveRight(varName.length) const bracket = cursor.PeekRightChar(1) if (bracket !== "{") { if (varName in VarTable) { const val = VarTable[varName] if (typeof val === "function") { throw new MissingParaError(`${varName} is a function, it needs big curly brackets`) } return val } } // is function const regex = /{(?<Para>[^{]*)}/gm const m = regex.exec(cursor.str.substring(cursor.pos)) if (m && m.groups.Para !== undefined) { const paraString = m.groups.Para const para = paraString.split(',') cursor.MoveRight(paraString.length + 2) // 2 = { + } return VarTable[varName](...para) } throw new UnknownVarError(`unknown variable ${varName}`) } // 
Handle Error if (cursor.str.length === cursor.pos) { // example: 1+2+ throw new ValueMissingError(`Parsing error at end of string: value expected.`) } else { // example: 1+2+* throw new ParseError("Parsing error: unrecognized value") } } /** * @param {string|Cursor} expr * */ function parseExpr(expr) { const stack = new Stack() const cursor = (expr instanceof Cursor) ? expr : new Cursor(expr, 0) while (1) { let rightValue = parseVal(cursor) const op = cursor.MoveToNextOp() ?? new Op("", 0, "L") while ( op.precedence < stack.GetLastItem().op.precedence || (op.precedence === stack.GetLastItem().op.precedence && op.assoc === 'L')) { const lastItem = stack.pop() if (!lastItem.op.exec) { // end reached return rightValue } rightValue = lastItem.op.exec(lastItem.value, rightValue) } stack.push(new Item(op, rightValue)) } } function Parse(str, alertError = true) { try { return parseExpr(str.replaceAll(" ", "")) } catch (e) { if (alertError) { alert(e.message) return undefined } return e } } <input type="text" id="expr" name="expr" placeholder="123.45+3*22*4"> <button onclick="const x = Parse(document.getElementById('expr').value); if(x != null) alert(x);"> Calculate! </button>
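For reference, the precedence-climbing idea running through the answers above can be boiled down to a very small sketch. This is not any one answer's code, just a minimal illustration (integer-only, supporting + - * / and parentheses, all left-associative):

```javascript
// Minimal precedence-climbing parser: integers, + - * / and parentheses.
// Precedence comes from the table; parseExpr(minPrec) only consumes
// operators whose precedence is at least minPrec.
function evaluate(input) {
  let pos = 0;
  const prec = { '+': 10, '-': 10, '*': 20, '/': 20 };

  function parsePrimary() {
    if (input[pos] === '(') {
      pos++;                                  // eat '('
      const v = parseExpr(0);
      if (input[pos] !== ')') throw new Error("')' expected");
      pos++;                                  // eat ')'
      return v;
    }
    const start = pos;
    while (pos < input.length && input[pos] >= '0' && input[pos] <= '9') pos++;
    if (pos === start) throw new Error('number expected at offset ' + pos);
    return parseInt(input.slice(start, pos), 10);
  }

  function parseExpr(minPrec) {
    let left = parsePrimary();
    while (pos < input.length && prec[input[pos]] >= minPrec) {
      const op = input[pos++];
      const right = parseExpr(prec[op] + 1);  // +1 makes the operator left-associative
      if (op === '+') left = left + right;
      else if (op === '-') left = left - right;
      else if (op === '*') left = left * right;
      else left = left / right;
    }
    return left;
  }

  const result = parseExpr(0);
  if (pos !== input.length) throw new Error('junk found at offset ' + pos);
  return result;
}
```

With this sketch, evaluate('1+11*5') returns 56 and evaluate('(1+11)*5') returns 60, so precedence and grouping behave the same way as in the longer answers above.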
{ "language": "en", "url": "https://stackoverflow.com/questions/28256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121" }
Q: Exception analysis tool for C++

I've been looking for a tool to extract exception information from a C++ program. The most wanted feature I'm looking for: I want to know all the exceptions that can be thrown from a function (which would include all the functions that are called from that function, recursively). I've always thought that documenting errors, and exceptions in particular, is very hard (and takes a lot of effort to keep up to date). But if there is some way to automate that, it would be very helpful. A tool for Windows and Visual Studio would be preferred but not necessary; I could always work around that.

A: PC-Lint claims to do static exception analysis of C++ code. Coverity is another static C++ code analysis tool that apparently informs you of unhandled exceptions. AQtime claims to have exception tracing as part of its code analysis. Plus, they advertise Visual Studio integration. Here is a list of several static code analysis tools.

A: DISCLAIMER: Working on this tool is my day job. It's hard for me to write this without it looking like a sales pitch, so I apologise in advance. As well as other analysis, QA C++ outputs the list of types that are thrown from the body of functions and the call tree. I believe that this would be all the information that you require.

A: EDoC++ (http://edoc.sourceforge.net) is another alternative.
{ "language": "en", "url": "https://stackoverflow.com/questions/28261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Any pitfalls developing C#/.NET code in a VM running on a Mac?

I am considering buying an Apple MacBook Pro. Are there any pitfalls developing C#/.NET code in a virtual machine running on a Mac? Also, is it better to run Vista or XP Pro for this purpose?

A: I'm developing in a Parallels VM running Windows Server 2008, and overall it is terrific. I'd highly recommend the server OS over Vista or XP if you are doing web development. Other than the keyboard issue, the one pitfall with the MacBook Pro is that the fan is extremely loud and annoying, and running a VM has in my experience tended to heat up the laptop enough to kick it on relatively frequently. However, there are utilities out there such as Coolbook to keep it from kicking on.

A: I can't tell you any specific experiences since I don't have a Mac, but I did want to point out that there was an awesome episode of the DeepFriedBytes podcast that discussed this very topic. It made me want to give it a try. They discuss the pros and cons of going this route - well worth the listen IMO if this is something you're considering: Episode 5: Developing .NET Software on a Mac

A: XP Pro is definitely better, unless you have a really beefy Mac. Regarding your other question, no there are no pitfalls, other than performance. I prefer to use a real PC to do actual coding, using VMs for testing. Clearly, that's not an option for you within OSX. However, you do have the option of Boot Camp if the VM performance becomes an issue for you. That will also let you run Vista with no performance degradation. Bear in mind that the two virtual machine solutions for the Mac are fairly immature. I've used both, and while they are perfectly adequate for development, I've found both to be flaky, to varying degrees.
Parallels seems mostly stable, but does crash and seems to have memory leaks; VMWare is beefer, and sucks more of the system's performance away by default (also seems to perform somewhat better than Parallels), but can have serious graphical problems depending on your setup, particularly if you try to use Unity mode. A: I'm developing .NET apps in a Vista VM under VMWare Fusion. Obviously you need a lot of memory, but other than not having Aero, I haven't run into any problems yet. A: I develop on my Macbook (not pro) using VMWare Fusion and WinXP. For the most part, it is a very good experience. I assign 1GB of memory, out of my 4GB, to the VM and its pretty speedy. The one major pitfall I've encountered is disk space. If you install a full VS2008 install and other tools, you can quickly eat up 30-40GB of disk. If you start using the snapshot feature or running multiple VMs, you'll eat up even more. Since I use my laptop as a primary machine and have lots of data and applications on the OSX side, I have run low on disk space with the standard 120GB drive. So, if you keep in mind the disk space issue, I think you'll find the experience quite satisfactory. A: You'd have the least problems running windows not in a VM, but for development your experience should be close to perfect with a VM. Both will give you less issues than MonoDevelop presumably, which is an entirely different CLR, compiler and a reimplementation of the framework. A: * *I use Parallels. I used Vista for 4 months then switched to XP. I prefer XP as it is faster. *Key bindings are quirky. Using function keys while debugging in the hosted XP will trigger events in OS X, effectively popping you out. *I have 3 "spaces" set up. One for OS X, one for XP VM, and the last for a RDC to my desktop. THIS IS BRILLIANTLY USEFUL. I can't live without spaces now. This technique actually killed my desire for a second monitor. 
*Like Jason said, any files stored on the OS X partition will be seen as a network resource to the XP/Vista VM. So trying to run EXEs or storing web roots there cause trust issues. Studio doesn't like project web roots to be on network shares. peace|dewde http://dewde.com A: I would look into the VMWare Fusion 2 Beta to get around the quirks with the key bindings experienced by those using Parallels. Fusion will capture all key events inside the virtual machine unless you hit a special key sequence to escape from the VM. You will, however, still have to get used to some of the oddities having an Apple based keyboard layout (no backspace, etc.). Those things aside, it really is quite seamless. A: Probably better not to run vista in a VM. Especially if you want the Aero UI turned on. VMs aren't very good with advanced graphics, so you'll probably want to run XP, or Vista in classic mode. A: Not really, it should run just fine. Your dev environment will just be a little bit slower...but in my experience, it's not really all that bad. I wouldn't want it as my main machine, but it's perfectly usable. A: I don't think Kibbee advice is correct. VMware Fusion (for the mac) currently supports up to DirectX9. The Vista integration is very good. If you have any trouble, you can natively boot into your Virtual Machine (If you have set it up as a BootCamp partition on the mac). I don't see any trouble with this setup, although I would not do it myself. The only thing, that my be a problem to you, is the keyboard-layout. The mac-keyboard has a different layout to pc-keyboards. (Especially on a german mac running a german windows, some characters might be a bit harder to type). You will have to relearn some parts of the keyboard! A: I do asp.net development on a MacBook Pro, running VMWare Fusion and Vista x64. It works great for me. As someone else mentioned, the keybindings are a little weird. I usually use a full size external keyboard, which helps a lot. 
A: I am developing .net applications using XP Pro in VMWare Fusion and I am not finding any issues. I am not even seeing any performance issues as the hardware in the MacBook Pro is much better than the hardware I had in my previous laptop. I found that there were a few things that I had to fiddle with to make the experience the same as working on my previous laptop. I had to install Sharp Keys to be able to access the right-click/context menu key on the keyboard, which I use often when in VS. I also made sure that some of the Mac OS keyboard and mouse shortcuts were not registered in VMWare Fusion, to stop strange things happening. I just noticed that I am only allowed my VM to use 1GB of memory, maybe I should up this just a little. There are posts out there that warn about assigning too much memory to a VM. One thing that is suggested for improving performance is to run the VM on another spindle. I haven't found a suitably priced 7200rpm portable drive yet, so I can't comment on this. [Edit] I knew I had seen this somewhere, Setting Up Windows Server 2008 VMWare Virtual Machines For .Net - This is something that I have been meaning to try out, I just haven't got around to it yet. (Too much time spent reading CrackOverflow) A: For virtualization, I'd try Sun's Virtual Box. I use it in Windows XP and Windows Vista and it works great, I expect performance would be similar running on a Mac. As for which OS to run, I would stick with Windows XP Pro. You'll not need to dedicate as much RAM to the VM as you would if you ran Vista. A: I've been doing .NET development using Parallels for over a year now, using WinXP Pro and can't complain, it runs fast (just as it would on a regular machine) and I get the best of all worlds --> a tip, use spaces, so have Windows running in one desk and your Mac stuff on the other, and with just a keystroke you move from one side to the other, flawlessly! 
On the Boot Camp side, to be honest, I tried it for a while, but having to reboot to access my apps on the Mac became annoying after some time. Just a word of advice: if you go with this option, take a look at MacDrive; you can't go wrong with it, as you will maintain access to your Mac partitions. Been there, done that... and I kind of like it ;)... good luck with the transition!

A: Just to mention an alternative to VMware Fusion, I'm using Parallels as a VM. Performance has not been an issue so far when I've given the VM 1 GiB of main memory. Before deciding on one VM, I'd suggest testing them all extensively. I am quite happy with Parallels, but I'm not sure I wouldn't use VMware Fusion the next time. Contrary to what Mo said, I actually find the Mac keyboard layout much better than the Windows layout, using a German key binding.
{ "language": "en", "url": "https://stackoverflow.com/questions/28268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can I maintain state between calls to a SQL Server UDF? I have a SQL script that inserts data (via INSERT statements currently numbering in the thousands). One of the columns contains a unique identifier (though not an IDENTITY type, just a plain ol' int) that's actually unique across a few different tables. I'd like to add a scalar function to my script that gets the next available ID (i.e. last used ID + 1), but I'm not sure this is possible because there doesn't seem to be a way to use a global or static variable from within a UDF, I can't use a temp table, and I can't update a permanent table from within a function. Currently my script looks like this:

declare @v_baseID int
exec dbo.getNextID @v_baseID out --sproc to get the next available id

--Lots of these - where n is a hardcoded value
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', @v_baseID + n )

exec dbo.UpdateNextID @v_baseID + lastUsedn --sproc to update the last used id

But I would like it to look like this:

--Lots of these
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', getNextID() )

Hardcoding the offset is a pain in the arse, and is error prone. Packaging it up into a simple scalar function is very appealing, but I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing? We're using SQL Server 2005 at the moment.

Edits for clarification: Two users hitting it won't happen. This is an upgrade script that will be run only once, and never concurrently. The actual sproc isn't prefixed with sp_; I fixed the example code. In normal usage, we do use an ID table and a sproc to get IDs as needed; I was just looking for a cleaner way to do it in this script, which essentially just dumps a bunch of data into the db.

A: If you have 2 users hitting it at the same time they will get the same ID.
Why didn't you use an ID table with an identity instead, insert into that, and use that as the unique (which is guaranteed) ID? This will also perform much faster.

sp_getNextID

Never ever prefix procs with sp_. This has a performance implication, because the optimizer first checks the master DB to see if that proc exists there and then the local DB; also, if MS decides to create an sp_getNextID in a service pack, yours will never get executed.

A: "I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing."

You aren't missing anything; SQL Server does not support global variables, and it doesn't support data modification within UDFs. And even if you wanted to do something as kludgy as using CONTEXT_INFO (see http://weblogs.sqlteam.com/mladenp/archive/2007/04/23/60185.aspx), you can't set that from within a UDF anyway. Is there a way you can get around the "hardcoding" of the offset by making that a variable and looping over the iteration of it, doing the inserts within that loop?

A: It would probably be more work than it's worth, but you can use static C#/VB variables in a SQL CLR UDF, so I think you'd be able to do what you want to do by simply incrementing this variable every time the UDF is called. The static variable would be lost whenever the appdomain unloaded, of course. So if you need continuity of your ID from one day to the next, you'd need a way, on first access of NextId, to poll all of the tables that use this ID, to find the highest value.
{ "language": "en", "url": "https://stackoverflow.com/questions/28280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Generating an object model in Ruby from an XML DTD I have an XML document with a DTD, and would love to be able to access the XML model, something like this:

title = Thing.Items[0].Title

Is there a way, in Ruby, to generate this kind of object model based on a DTD? Or am I stuck using REXML? Thanks!

A: If you include the active_support gem (comes with Rails), it adds the method from_xml to the Hash object. You can then call Hash.from_xml(xml_content) and it'll return a hash that you can use to access the data. I don't know of an easy way to map an XML document to an object, but you could create a wrapper class that delegates the method calls to the underlying hash which holds the data.

A: I know this question was asked a while back, but if you want the true Thing.Items[0].Title type format, all you need to do is use OpenStruct:

require 'rubygems'
require 'activesupport' # For xml-simple
require 'ostruct'

h = Hash.from_xml File.read('some.xml')
o = OpenStruct.new h
o.thing.items[0].title

A: You can use the Ruby version of xml-simple. You shouldn't need to install the gem as I believe it's already installed with Rails. http://xml-simple.rubyforge.org/
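The wrapper-class idea from the first answer can be sketched with method_missing. HashWrapper and the sample hash below are hypothetical illustrations for this question, not part of any gem:

```ruby
# A minimal sketch: delegate unknown method calls to an underlying hash,
# wrapping nested hashes recursively so an XML-derived hash reads like
# an object model.
class HashWrapper
  def initialize(hash)
    @hash = hash
  end

  def method_missing(name, *args)
    value = @hash[name.to_s] || @hash[name]
    case value
    when Hash  then HashWrapper.new(value)
    when Array then value.map { |v| v.is_a?(Hash) ? HashWrapper.new(v) : v }
    else value
    end
  end

  def respond_to_missing?(name, include_private = false)
    @hash.key?(name.to_s) || @hash.key?(name) || super
  end
end

thing = HashWrapper.new("items" => [{ "title" => "Hello" }])
puts thing.items[0].title # => Hello
```

In practice you would feed it the hash returned by Hash.from_xml, as in the OpenStruct answer above.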
{ "language": "en", "url": "https://stackoverflow.com/questions/28293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Impose a total ordering on all instances of *any* class in Java I'm unsure whether the following code would ensure all conditions given in Comparator's Javadoc.

class TotalOrder<T> implements Comparator<T> {

    public int compare(T o1, T o2) {
        if (o1 == o2 || equal(o1, o2))
            return 0;
        int h1 = System.identityHashCode(o1);
        int h2 = System.identityHashCode(o2);
        if (h1 != h2) {
            return h1 < h2 ? -1 : 1;
        }
        // equals returned false but identity hash code was same, assume o1 == o2
        return 0;
    }

    boolean equal(Object o1, Object o2) {
        return o1 == null ? o2 == null : o1.equals(o2);
    }
}

Will the code above impose a total ordering on all instances of any class, even if that class does not implement Comparable?

A: Hey, look at what I found! http://gafter.blogspot.com/2007/03/compact-object-comparator.html This is exactly what I was looking for.

A: "Hey, look at what I found! http://gafter.blogspot.com/2007/03/compact-object-comparator.html"

Oh yes, I forgot about the IdentityHashMap (Java 6 and above only). Just have to pay attention to releasing your comparator.

A: You answered it in your comment: "equals returned false but identity hash code was same, assume o1 == o2". Unfortunately you cannot assume that. Most of the time that is going to work, but in some exceptional cases, it won't. And you cannot know when. When such a case appears, it would lead to losing instances in TreeSets, for example.

A: I don't think it does, since this clause is not met: "Finally, the implementer must ensure that x.compareTo(y)==0 implies that sgn(x.compareTo(z)) == sgn(y.compareTo(z)), for all z." Since equal(o1, o2) depends on o1's implementation of equals, two objects that are logically equal (as determined by equals) still have two different identityHashCodes. So when comparing them to a third object (z), they might end up yielding different values for compareTo. Make sense?

A: You should probably raise an exception if it gets to that last return 0 line - when a hash collision happens.
I do have a question though: you are doing a total ordering on the hashes, which I guess is fine, but shouldn't some function be passed to it to define a lexicographical order?

int h1 = System.identityHashCode(o1);
int h2 = System.identityHashCode(o2);
if (h1 != h2) {
    return h1 < h2 ? -1 : 1;
}

I can imagine that you have the objects as a tuple of two integers that form a real number. But you won't get the proper ordering since you're only taking a hash of the object. This is all up to you if hashing is what you meant, but to me, it doesn't make much sense.

A: I'm not really sure about the System.identityHashCode(Object). That's pretty much what the == is used for. You might rather want to use the Object.hashCode() - it's more in parallel with Object.equals(Object).

A: I agree this is not ideal, hence the comment. Any suggestions? I think there is no way you can solve that, because you cannot access the one and only thing that can distinguish two instances: their address in memory. So I have only one suggestion: reconsider your need for a general total ordering process in Java :-)
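The IdentityHashMap idea from the linked post can be sketched as follows. ArbitraryOrder is a hypothetical name, and unlike the questioner's TotalOrder it deliberately never consults equals(): it imposes an arbitrary but consistent total order on object identity, breaking identity-hash-code collisions with per-object sequence numbers:

```java
import java.util.Comparator;
import java.util.IdentityHashMap;

class ArbitraryOrder<T> implements Comparator<T> {
    // Tie-breakers handed out in first-seen order, for the rare case
    // where two distinct objects share an identity hash code.
    private final IdentityHashMap<T, Integer> tieBreakers = new IdentityHashMap<T, Integer>();
    private int nextId = 0;

    @Override
    public int compare(T o1, T o2) {
        if (o1 == o2) return 0;
        int h1 = System.identityHashCode(o1);
        int h2 = System.identityHashCode(o2);
        if (h1 != h2) return h1 < h2 ? -1 : 1;
        // Collision: fall back to insertion order, which is stable.
        return tieBreaker(o1) < tieBreaker(o2) ? -1 : 1;
    }

    private synchronized int tieBreaker(T o) {
        Integer id = tieBreakers.get(o);
        if (id == null) {
            id = nextId++;
            tieBreakers.put(o, id);
        }
        return id;
    }

    public static void main(String[] args) {
        ArbitraryOrder<Object> ord = new ArbitraryOrder<Object>();
        Object a = new Object(), b = new Object();
        System.out.println(ord.compare(a, a)); // 0
        System.out.println(ord.compare(a, b) == -ord.compare(b, a)); // true
    }
}
```

Note the map holds strong references to every collided object, which is the "pay attention to releasing your comparator" caveat in the answer above.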
{ "language": "en", "url": "https://stackoverflow.com/questions/28301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Free Network Monitor I am having trouble integrating two products, one of which is mine, and they appear not to be talking. So I want to make sure they are communicating correctly.

I had a look around for a network monitor and found TCP Spy. This works but only shows one side of the conversation at a time (it has to run locally). I would ideally like to see both sides at the same time - but you can't run two copies of TCP Spy. I've hit SourceForge, but nothing seems to jump out - I'm a Windows developer, I don't have Perl installed. I've found a couple of others which are cripple-ware and totally useless, so I was wondering what the SO guys use for watching the TCP conversation? BTW - the 'not-written-here' product is not a browser.

A: I'm not sure if it does everything you want, but have you seen Wireshark and the Microsoft Network Monitor?

A: Wireshark (previously Ethereal). Wireshark is an award-winning network protocol analyzer developed by an international team of networking experts.

A: I use Wireshark. Very good and free.

A: Wireshark, aka Ethereal, comes with a fair amount of TCP sniffing functionality. http://www.wireshark.org/

A: Wireshark is a really good and mature network sniffer. It's been around for years.

* Deep inspection of hundreds of protocols, with more being added all the time
* Live capture and offline analysis
* Decryption support for many protocols, including IPsec, ISAKMP, Kerberos, SNMPv3, SSL/TLS, WEP, and WPA/WPA2
* Coloring rules can be applied to the packet list for quick, intuitive analysis
* Output can be exported to XML, PostScript®, CSV, or plain text

A: With respect to using Windows and lacking Perl: why not try Strawberry Perl? It's a free Perl distribution that's run by the Perl community (specifically Adam Kennedy at the core), is easy to install, and wields the full power of CPAN out of the box.

A: Strange that I did not see Wireshark when I visited SourceForge. The top result of the 60 returned was a bizarre German thing.
A: Wireshark is great... but another option would be via PowerShell. I've used the Get-Packet script from Jeff Hicks at Sapien Technologies as a really lightweight packet sniffer. You get custom objects representing your packets and can do whatever filtering you need to via PowerShell. The other script in the pair is Analyze-Packet, which can summarize the results of a packet capture.

A: I tried Wireshark and Microsoft Network Monitor, but neither detected the transfer between my program and the one I am trying to communicate with. If I had a day to sit and configure them I probably could get it working, but I just wanted the bytes sent and, more specifically, the bytes received. In the end I found HHD Software's Accurate Network Monitor, which did what I wanted it to, even if it was slightly clunky.

A: Take a look at tcpdump. It is not a full-fledged GUI network analyzer (not at all), but it is usable in scripts. Since I am more a Linux person, I use it with Bash and Python, but you should be able to call it from PowerShell.
{ "language": "en", "url": "https://stackoverflow.com/questions/28302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Web 2.0 Color Combinations What are the most user-friendly color combinations for Web 2.0 websites, such as background, button colors, etc.?

A: I've been using this free color schemer to help me determine some nice layouts. You give it a base color and it will give you a lot of complements. EDIT: Gah! Curse you jko and your god-like typing abilities! At least we have the same reference though. 8^D

A: For color schemes, I like browsing Colour Lovers. There are thousands of user-submitted color schemes to pick through for ideas and you can easily create your own scheme if you'd like. A lot of times I use it just for the color palette to create just the right color (it outputs the color in hex, RGB, HSV and CMYK).

A: These aren't combinations per se, but are good colours if you're just looking to mock something up (or if you're like me and have the colour sense of a bat).

* Google Docs & Spreadsheets: Web 2.0 Colours - found this recently and it's been very helpful.
* Modern Life is Rubbish: Web 2.0 Colour Palette - a sort of parody but useful :-)

As for when I do use actual palettes, I use Colourlovers, kuler and Colourschemer with custom colours.

A: What about colllor? A user-friendly color palette generator.

A: ColorSchemer will suggest good schemes for you. If you want to try something out on your own, try Color Combinations.

A: My stick figures come out wrong, but the following links have kept me artistically aligned for many years:

* Color tools for the design impaired
* Color Scheme Generator

To web 2.0-ize it, just put a lens flare on your logo & mark it BETA - you'll be fine.

A: Search for color on useit; search for color on boxesandarrows. There has been loads of research on this sort of stuff and most of it is conflicting; a couple of good jump-off points are listed above. Generally lighter backgrounds and good contrast are favoured by all researchers, but the details get niggly.

A: I would say that using the right combination of colors is user-friendly.
Make sure your colors coordinate with each other and you should be fine. A tool I use a lot is kuler (http://kuler.adobe.com). It'll help you pick colors that work well with each other.

A: Miles Burke has compiled a list of the colors used by the majority of the big names in this Web 2.0 world. He also gives a PNG or a JPG as a color cheatsheet. I am sure it can give you an idea of what colors to choose for your application.

A: There is no single set of colours that everyone finds legible, but tools exist to formalise the variation and help you to make informed choices. Anyone doing UI should know about automated colour checking for accessibility. Any time you have text that people need to read, pop the foreground and background colours into this: Colour Contrast Check.

This is based on a lot of research into contrast and legibility. If your colour combination is >125 and >500, you are as safe as you can be. Between 100 and 125 for brightness difference and 400 and 500 for colour difference, it's fine but could be better. Below 100 and 400 respectively, increasing numbers of people will have trouble reading it for a variety of reasons.
{ "language": "en", "url": "https://stackoverflow.com/questions/28303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How can I get notification when a mirrored SQL Server database has failed over We have a couple of mirrored SQL Server databases.

My first problem - the key problem - is to get a notification when the db fails over. I don't need to know because, erm, it's mirrored and so it (almost) all carries on working automagically, but it would be useful to be advised, and I'm currently getting failovers when I don't think I should be, so I want to know when they occur (without too much digging) to see if I can determine why.

I have services running that I could fairly easily use to monitor this - so the alternative question would be "How do I programmatically determine which is the principal and which is the mirror" - preferably in a more intelligent fashion than just attempting to connect to each in turn (which would mostly work but...). Thanks, Murph

Addendum: One of the answers queries why I don't need to know when it fails over - the answer is that we're developing using ADO.NET and that has automatic failover support; all you have to do is add Failover Partner=MIRRORSERVER (where MIRRORSERVER is the name of your mirror server instance) to your connection string and your code will fail over transparently - you may get some errors depending on what connections are active, but in our case very few.

A: Right, the two answers and a little thought got me to something approaching an answer. First, a little more clarification: the app is written in C# (2.0+) and uses ADO.NET to talk to SQL Server 2005. The mirror setup is two W2k3 servers hosting the principal and the mirror, plus a third server hosting an Express instance as a monitor. The nice thing about this is that a failover is all but transparent to the app using the database; it will throw an error for some connections, but fundamentally everything will carry on nicely.
Yes, we're getting the odd false positive, but the whole point is to have the system carry on working with the least amount of fuss, and mirroring does deliver this very nicely.

Further, the issue is not with serious server failure - that's usually a bit more obvious - but with a failover for other reasons (c.f. the false positives above), as we do have a couple of things that can't, for various reasons, fail over, and in any case so we can see if we can identify the circumstances where we get false positives.

So, given the above, simply checking the status of the boxes is not quite enough, and chasing through the event log is probably overly complex - the answer is, as it turns out, fairly simple:

sp_helpserver

The first column returned by sp_helpserver is the server name. If you run the request at regular intervals, saving the previous server name and doing a comparison each time, you'll be able to identify when a change has taken place and then take the appropriate action.

The following is a console app that demonstrates the principle - although it needs some work (e.g. the connection ought to be non-pooled and new each time), it's enough for now (so I'd then accept this as "the" answer).
Parameters are Principal, Mirror, Database.

using System;
using System.Data.SqlClient;

namespace FailoverMonitorConcept
{
    class Program
    {
        static void Main(string[] args)
        {
            string server = args[0];
            string failover = args[1];
            string database = args[2];

            string connStr = string.Format("Integrated Security=SSPI;Persist Security Info=True;Data Source={0};Failover Partner={1};Packet Size=4096;Initial Catalog={2}", server, failover, database);
            string sql = "EXEC sp_helpserver";

            SqlConnection dc = new SqlConnection(connStr);
            SqlCommand cmd = new SqlCommand(sql, dc);
            Console.WriteLine("Connection string: " + connStr);
            Console.WriteLine("Press any key to test, press q to quit");

            string priorServerName = "";
            char key = ' ';

            while (key.ToString().ToLower() != "q")
            {
                dc.Open();
                try
                {
                    string serverName = cmd.ExecuteScalar() as string;
                    Console.WriteLine(DateTime.Now.ToLongTimeString() + " - Server name: " + serverName);
                    if (priorServerName == "")
                    {
                        priorServerName = serverName;
                    }
                    else if (priorServerName != serverName)
                    {
                        Console.WriteLine("***** SERVER CHANGED *****");
                        Console.WriteLine("New server: " + serverName);
                        priorServerName = serverName;
                    }
                }
                catch (System.Data.SqlClient.SqlException ex)
                {
                    Console.WriteLine("Error: " + ex.ToString());
                }
                finally
                {
                    dc.Close();
                }
                key = Console.ReadKey(true).KeyChar;
            }
            Console.WriteLine("Finis!");
        }
    }
}

I wouldn't have arrived here without a) asking the question and then b) getting the responses, which made me actually think.

Murph

A: If the failover logic is in your application, you could write a status screen that shows which box you're connected to by writing to a var when the first connection attempt fails. I think your best bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond.

A: Use something like Host Monitor http://www.ks-soft.net/hostmon.eng/ to monitor the Event Log for messages related to the failover event, which can send you an alert via email/SMS.
I'm curious, though, how you wouldn't need to know that the failover happened, because don't you have to then update the data sources in your applications to point to the new server that you failed over to?

Mirroring takes place on different hosts (the primary and the mirror), unlike clustering, which has multiple nodes that appear to be a single device from the outside.

Also, are you using a witness server in order to automatically fail over from the primary to the mirror? This is the only way I know of to make it happen automatically, and in my experience, you get a lot of false positives where network hiccups can fool the mirror and witness into thinking the primary is down when in fact it is not.
{ "language": "en", "url": "https://stackoverflow.com/questions/28353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Database compare tools My company has a number of relatively small Access databases (2-5MB) that control our user assisted design tools. Naturally these databases evolve over time as data bugs are found and fixed and as the schema changes to support new features in the tools. Can anyone recommend a database diff tool to compare both the data and schema from one version of the database to the next? Any suggestions will be appreciated: free, open source, or commercial.

A: I use Red Gate SQL Compare for comparing schemas. It also has an interesting feature that allows you to save a snapshot of the schema, which you can then use in later diffs - for example, compare the schema of today with the schema of a month ago.

A: I use ApexSQL Diff. It is an excellent tool for doing just what you're describing... compare schema, compare data, generate change scripts. It's not free, but it works well. NOTE: ApexSQL Diff only works with SQL Server.

A: We never actually purchased it as we ended up using SQL Server 2005, but DBDiff seemed to do the trick: http://www.dkgas.com/downdbdiff.cgi It works with any ODBC compatible DB.

A: I've used Total Access Detective in the past and it did the trick. It was a while ago, though, so you might want to investigate first...

A: If you're looking for a free alternative to Red Gate's most excellent SQL Compare, you might want to check SQLDBDigg made by SQLDBTools. It's what I used until I caved and bought SQL Compare.

A: It's not a perfect solution, but I often export both databases as txt/SQL files and then use a diff program, such as the one that comes with TortoiseSVN. You can then see all of the differences. It doesn't automatically create the SQL to sync the dbs, though.

A: http://www.diffkit.org

Features:
* High performance, for large datasets (+10MM rows).
* Very low memory overhead, even on very large datasets.
* High quality -- comprehensive embedded regression test suite for the application/framework.
* Java run everywhere (tm) -- Linux, Solaris, OS X, Windows, etc.
* Cross database -- Oracle, MySQL, DB2, and any JDBC datasource.
* Command-line driven; no GUI needed; can run in headless environments.
* XML configuration file driven.
* Free Open Source Software. Apache License, Version 2.0.
* Clean Object Oriented Design makes extension easy.
* Easily embeddable as a Java library (jar).
{ "language": "en", "url": "https://stackoverflow.com/questions/28363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is "safe_eval" really safe? I'm looking for a "safe" eval function, to implement spreadsheet-like calculations (using numpy/scipy). The functionality to do this (the rexec module) has been removed from Python since 2.3 due to apparently unfixable security problems. There are several third-party hacks out there that purport to do this - the most thought-out solution that I have found is this Python Cookbook recipe, "safe_eval". Am I reasonably safe if I use this (or something similar) to protect from malicious code, or am I stuck with writing my own parser? Does anyone know of any better alternatives?

EDIT: I just discovered RestrictedPython, which is part of Zope. Any opinions on this are welcome.

A: Depends on your definition of safe I suppose. A lot of the security depends on what you pass in and what you are allowed to pass in the context. For instance, if a file is passed in, I can open arbitrary files:

>>> names['f'] = open('foo', 'w+')
>>> safe_eval.safe_eval("baz = type(f)('baz', 'w+')", names)
>>> names['baz']
<open file 'baz', mode 'w+' at 0x413da0>

Furthermore, the environment is very restricted (you cannot pass in modules); thus, you can't simply pass in a module of utility functions like re or random. On the other hand, you don't need to write your own parser; you could just write your own evaluator for the Python AST:

>>> import compiler
>>> ast = compiler.parse("print 'Hello world!'")

That way, hopefully, you could implement safe imports. The other idea is to use Jython or IronPython and take advantage of Java/.NET sandboxing capabilities.

A: Writing your own parser could be fun! It might be a better option because people are expecting to use the familiar spreadsheet syntax (Excel, etc.) and not Python when they're entering formulas. I'm not familiar with safe_eval, but I would imagine that anything like this certainly has the potential for exploitation.
A: If you simply need to write down and read some data structure in Python, and don't need the actual capacity of executing custom code, this one is a better fit: http://code.activestate.com/recipes/364469-safe-eval/

It guarantees that no code is executed; only static data structures are evaluated: strings, lists, tuples, dictionaries.

A: Although that code looks quite secure, I've always held the opinion that any sufficiently motivated person could break it given adequate time. I do think it will take quite a bit of determination to get through that, but I'm relatively sure it could be done.

A: Daniel, Jinja implements a sandboxed environment that may or may not be useful to you. From what I remember, it doesn't yet "comprehend" list comprehensions. Sandbox info

A: The functionality you want is in the compiler language services, see http://docs.python.org/library/language.html If you define your app to accept only expressions, you can compile the input as an expression and get an exception if it is not, e.g. if there are semicolons or statement forms.
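For the static-data-only case described above, later Python versions ship the same idea in the standard library as ast.literal_eval (Python 2.6+); a minimal sketch, with the sample expressions being purely illustrative:

```python
import ast

# ast.literal_eval evaluates only literal syntax: strings, numbers,
# tuples, lists, dicts, sets, booleans and None. Names, calls and
# anything with side effects are rejected rather than executed.
data = ast.literal_eval("{'a': [1, 2, 3], 'b': (True, None)}")
print(data['a'])  # [1, 2, 3]

try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except (ValueError, SyntaxError):
    print('rejected')  # function calls are not literals, so this raises
```

Like the recipe, this covers data interchange only; it does not help with the spreadsheet-formula case, which needs actual expression evaluation.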
{ "language": "en", "url": "https://stackoverflow.com/questions/28369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Performance difference between IIf() and If In Visual Basic, is there a performance difference when using the IIf function instead of the If statement?

A: According to this guy, IIf can take up to 6x as long as If/Then. YMMV.

A: Better to use If instead of IIf to use the type inference mechanism correctly (Option Infer On). In this example, Keywords is recognized as a String when I use If:

Dim Keywords = If(String.IsNullOrEmpty(SelectedKeywords), "N/A", SelectedKeywords)

Otherwise, it is recognized as an Object:

Dim Keywords = IIf(String.IsNullOrEmpty(SelectedKeywords), "N/A", SelectedKeywords)

A: IIf() runs both the true and false code. For simple things like numeric assignment, this isn't a big deal. But for code that requires any sort of processing, you're wasting cycles running the condition that doesn't match, and possibly causing side effects. Code illustration:

Module Module1
    Sub Main()
        Dim test As Boolean = False
        Dim result As String = IIf(test, Foo(), Bar())
    End Sub

    Public Function Foo() As String
        Console.WriteLine("Foo!")
        Return "Foo"
    End Function

    Public Function Bar() As String
        Console.WriteLine("Bar!")
        Return "Bar"
    End Function
End Module

Outputs:

Foo!
Bar!

A: On top of that, readability should probably be more highly preferred than performance in this case. Even if IIf was more efficient, it's just plain less readable to the target audience (I assume if you're working in Visual Basic, you want other programmers to be able to read your code easily, which is VB's biggest boon... and which is lost with concepts like IIf, in my opinion).

Also, "IIf is a function, versus If being part of the language's syntax"... which implies to me that, indeed, If would be faster... if for nothing else than that the If statement can be boiled down directly to a small set of opcodes, rather than having to go to another space in memory to perform the logic found in said function. It's a trite difference, perhaps, but worth noting.
A: I believe that the main difference between If and IIf is:

* If(test [boolean], statement1, statement2) means that according to the test value either statement1 or statement2 will be executed (just one statement will execute)
* Dim obj = IIf(test [boolean], statement1, statement2) means that both statements will execute, but according to the test value one of them will return a value to (obj)

So if one of the statements throws an exception, it will throw it in (IIf) anyway, but in (If) it will only throw it when the condition selects that statement's value.

A: ...as to why it can take as long as 6x, quoth the wiki:

"Because IIf is a library function, it will always require the overhead of a function call, whereas a conditional operator will more likely produce inline code."

Essentially IIf is the equivalent of a ternary operator in C++/C#, so it gives you some nice one-line if/else type statements if you'd like it to. You can also give it a function to evaluate if you desire.

A: VB has the following If statement which the question refers to, I think:

' Usage 1
Dim result = If(a > 5, "World", "Hello")
' Usage 2
Dim foo = If(result, "Alternative")

The first is basically C#'s ternary conditional operator and the second is its coalesce operator (return result unless it's Nothing, in which case return "Alternative"). If has thus replaced IIf and the latter is obsolete.

Like in C#, VB's conditional If operator short-circuits, so you can now safely write the following, which is not possible using the IIf function:

Dim len = If(text Is Nothing, 0, text.Length)

A: Also, another big issue with IIf is that it will actually call any functions that are in the arguments [1], so if you have a situation like the following:

string results = IIf(Not oraData.IsDBNull(ndx), oraData.GetString(ndx), string.Empty)

It will actually throw an exception, which is not how most people think the function works the first time that they see it.
This can also lead to some very hard to fix bugs in an application as well.

[1] IIf Function - http://msdn.microsoft.com/en-us/library/27ydhh0d(VS.71).aspx

A: Those functions are different! Perhaps you only need to use the If statement. IIf will always be slower, because it calls both functions and then performs a standard If test on top of that. If you are wondering why there is an IIf function, maybe this will be an explanation:

Sub main()
    counter = 0
    bln = True
    s = iif(bln, f1, f2)
End Sub

Function f1 As String
    counter = counter + 1
    Return "YES"
End Function

Function f2 As String
    counter = counter + 1
    Return "NO"
End Function

So the counter will be 2 after this, but s will be "YES" only. I know this counter stuff is useless, but sometimes there are functions that you will need both to run, no matter whether the test is true or false, and you just assign the value from one of them to your variable.
{ "language": "en", "url": "https://stackoverflow.com/questions/28377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Proxy which requires authentication with Android Emulator Has anybody managed to get the Android Emulator working behind a proxy that requires authentication? I've tried setting the -http-proxy argument to http://DOMAIN/USERNAME:PASSWORD@IP:PORT but am having no success. I've tried following the docs to no avail. I've also tried the -verbose-proxy setting but this no longer seems to exist. Any pointers? A: Apparently this problem occurs only with Android 2.x and Windows. There is an open bug here: http://code.google.com/p/android/issues/detail?id=5508&q=emulator%20proxy&colspec=ID%20Type%20Status%20Owner%20Summary%20Stars A: * *Find the file androidtool.cfg at C:\Documents and Settings\YOUR USER NAME\.android\ *Add this line: http.proxyLogin=USER@PASSWORD *Save the file and try to open the Android SDK. A: To set a proxy server we need to configure the APN settings. To do this: * *Go to Settings *Go to Wireless and networks *Go to Mobile networks *Go to Access point names. Use the menu to add a new APN *Set Proxy = localhost *Set Port = the port that you are using for the proxy server, in my case it is 8989 For setting the Name and APN, here is the link: according to your SIM card you can see the table A: I managed to do it in the Android 2.2 emulator. Go to "Settings" -> "Wireless & Networks" -> "Mobile Networks" -> "Access Point Names" -> "Telkila" Over there set the proxy host name in the property "Proxy" and the proxy port in the property "Port" A: This worked for me: http://code.google.com/p/android/issues/detail?id=5508#c39 Apparently there's a bug in the emulator that forces you to use the IP address of the proxy instead of the name... A: Jay, though that would be the ideal place for this information, it has not been updated for 2.1. Below I will list the methods that currently do NOT work for the 2.1 emulator. The -http-proxy argument does not work for the 2.1 emulator. Setting a proxy in the APN list within the 2.1 emulator does not work. 
Inserting the proxy directly into the system table via sqlite does not work with 2.1. In fact, the ONLY way to get the browser to connect to the internet via the emulator that I've found in 2.1 is to NOT use a proxy at all. I really hope this gets fixed soon, for there are many people with this same problem. A: * *Start a command prompt. *Go to the folder where your emulator is located. In general, it will be in the tools folder of the Android SDK. *Then use the following command: emulator -avd <avd name> -http-proxy <server>:<port> By using this, we will be able to access the internet using the browser. A: Using the Android SDK 1.5 emulator with a proxy in Eclipse 3.45: Go to Package Explorer -> Right click your Android project ->Run As->Run Configurations. Under Android Application on the left column, select your project -> on the right column, where you see Android | Target | Common tabs -> Select Target -> on the bottom "Additional Emulator Command Line Options"-> -http-proxy http://www.gateProxy.com:1080 -debug-proxy http://www.gateProxy.com:1080 ->Run/Close. A: It seems like from SDK 1.5 onwards, the -http-proxy flag also doesn't work. What did work for me is to boot the Android image in the emulator and then, once Android is running, go to Home > Menu > Settings > Wireless Controls > Mobile Networks > Access Point Names and then set up the HTTP proxy settings for the default access point. With the APN proxy settings in place, I can get the emulator's browser to surf the web. However, other stuff like Maps still doesn't work. A: I've not used the Android Emulator but I have set the $http_proxy environment variable for Perl, wget and a few Cygwin tools on Windows. That might work for you for Android, but the slash in the domain name seems like a potential problem. 
I know I tried having my domain "GLOBAL" in there, but ended up taking it out and sticking with: http://$USER:[email protected]:80 One problem I run into a lot, though, is programs that cannot be told to use the proxy for DNS queries too. In cases where they don't, I always get a host-not-found error. I'd like to find a local DNS resolver that can use the proxy for all the programs that won't. A: I was able to view the traffic with an HTTP sniffer instead of a proxy. I used HTTPScoop, which is a nice little app. Also the nice thing about using HTTPScoop is that I can also see traffic on my actual device when I turn on internet sharing and have my phone use the Wi-Fi from my Mac. So this is a good deal for debugging what happens on the phone itself AND the emulator. This way it doesn't matter what emulator you use, because the sniffer sees the traffic independent of the emulator, device, compiler settings etc. A: I will explain all the steps: * *Go to settings in the Android emulator > Wireless & Network > Mobile network > Access point > Telkilla > and here do the necessary settings such as proxy, port, etc. I think now everything is clear about proxy settings... A: For Android 2.3.3: Settings->Wireless&Networks->MobileNetworks->AccessPointNames->Telkila-> set the Proxy and the Port here (xx.xx.xx.xx and port) A: I remember having the same problem - after searching on the web, I found this solution - from the command line: 1. > adb shell 2. # sqlite3 /data/data/com.android.providers.settings/databases/settings.db 3. sqlite> INSERT INTO system VALUES(99, 'http_proxy', 'proxy:port'); 4. sqlite> .exit EDIT: Edited answer to reflect the latest version of Android. A: I had the same problem when I used the following command: emulator-x86.exe -http-proxy domain\user:password@proxyIP:port -avd MyAVD I got the proxy authentication error. 
Finally, I had to bypass the proxy NTLM authentication by using Cntlm, available here: http://sourceforge.net/projects/cntlm/ Then, after simply configuring cntlm.ini, I used the following command instead: emulator-x86.exe -http-proxy 127.0.0.1:3128 -avd MyAVD and it works :) A: With the new versions of Android Studio and its emulator, it is an easy task. Press the emulator's "More" button, and choose the Settings -> Proxy tab. All the needed configurations are there.
{ "language": "en", "url": "https://stackoverflow.com/questions/28380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: SQL Server 2k5 memory consumption? I have a development VM which is running SQL Server as well as some other apps for my stack, and I found that the other apps are performing awfully. After doing some digging, SQL Server was hogging the memory. After a quick web search I discovered that by default, it will consume as much memory as it can in order to cache data and give it back to the system as other apps request it, but this process often doesn't happen fast enough; apparently my situation is a common problem. There is, however, a way to limit the memory SQL Server is allowed to have. My question is, how should I set this limit? Obviously I'm going to need to do some guess and check, but is there an absolute minimum threshold? Any recommendations are appreciated. Edit: I'll note that our developer machines have 2 gigs of memory, so I'd like to be able to run the VM on 768 MB or less if possible. This VM will be only used for local dev and testing, so the load will be very minimal. After code has been tested locally it goes to another environment where the SQL Server box is dedicated. What I'm really looking for here is recommendations on minimums A: Extracted from the SQL Server documentation: Maximum server memory (in MB) Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. 
If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server. The recommendation on minimum is: No such thing. The more memory the better. SQL Server needs as much memory as it can get or it will thrash your I/O. Stop the SQL Server. Run your other applications and take note of the amount of memory they need. Subtract that from your total available RAM, and use that number for the MAX memory setting in the SQL Server. A: so I'd like to be able to run the VM on 768 MB or less if possible. That will depend on your data and the size of your database. But I usually like to give SQL Server at least a GB A: It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with. For production machines, of course, it is best to give control of the machine to SQL Server (Processors -> Boost SQL Server Priority) and let it have all the RAM it wants. Since you are using VMs, maybe you could create a dedicated one just for SQL Server and run everything else on a different VM. A: Since this is a development environment, I agree with Greg, just use trial and error. It's not that crucial to get it perfectly right. But if you do a lot of work in the VM, why not give it at least half of the 2GB?
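For reference, the cap these answers describe is set with sp_configure. The sketch below is illustrative only; the 512 MB value is an assumption, to be replaced by whatever your trial and error arrives at:

```sql
-- Sketch: cap SQL Server's memory use (value in MB). 512 is an arbitrary
-- illustration, not a recommendation; pick a value based on your own testing.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 512;
RECONFIGURE;
```

The same setting is also reachable through Management Studio under Server Properties -> Memory.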
{ "language": "en", "url": "https://stackoverflow.com/questions/28387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Passing $_POST values with cURL How do you pass $_POST values to a page using cURL? A: Another simple PHP example of using cURL: <?php $ch = curl_init(); // Initiate cURL $url = "http://www.somesite.com/curl_example.php"; // Where you want to post data curl_setopt($ch, CURLOPT_URL,$url); curl_setopt($ch, CURLOPT_POST, true); // Tell cURL you want to post something curl_setopt($ch, CURLOPT_POSTFIELDS, "var1=value1&var2=value2&var_n=value_n"); // Define what you want to post curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // Return the output in string format $output = curl_exec ($ch); // Execute curl_close ($ch); // Close cURL handle var_dump($output); // Show output ?> Instead of using curl_setopt you can use curl_setopt_array. http://php.net/manual/en/function.curl-setopt-array.php A: Ross has the right idea for POSTing the usual parameter/value format to a url. I recently ran into a situation where I needed to POST some XML as Content-Type "text/xml" without any parameter pairs so here's how you do that: $xml = '<?xml version="1.0"?><stuff><child>foo</child><child>bar</child></stuff>'; $httpRequest = curl_init(); curl_setopt($httpRequest, CURLOPT_RETURNTRANSFER, 1); curl_setopt($httpRequest, CURLOPT_HTTPHEADER, array("Content-Type: text/xml")); curl_setopt($httpRequest, CURLOPT_POST, 1); curl_setopt($httpRequest, CURLOPT_HEADER, 1); curl_setopt($httpRequest, CURLOPT_URL, $url); curl_setopt($httpRequest, CURLOPT_POSTFIELDS, $xml); $returnHeader = curl_exec($httpRequest); curl_close($httpRequest); In my case, I needed to parse some values out of the HTTP response header so you may not necessarily need to set CURLOPT_RETURNTRANSFER or CURLOPT_HEADER. A: $query_string = ""; if ($_POST) { $kv = array(); foreach ($_POST as $key => $value) { $kv[] = stripslashes($key) . "=" . 
stripslashes($value); } $query_string = join("&", $kv); } if (!function_exists('curl_init')){ die('Sorry cURL is not installed!'); } $url = 'https://www.abcd.com/servlet/'; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, $query_string); curl_setopt($ch, CURLOPT_HEADER, FALSE); curl_setopt($ch, CURLOPT_RETURNTRANSFER, FALSE); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); $result = curl_exec($ch); curl_close($ch); A: $url='Your url'; // Specify your url $data= array('parameterkey1'=>value,'parameterkey2'=>value); // Add parameters as key-value pairs $ch = curl_init(); // Initialize cURL curl_setopt($ch, CURLOPT_URL,$url); curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data)); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_exec($ch); curl_close($ch); A: Should work fine. $data = array('name' => 'Ross', 'php_master' => true); // You can POST a file by prefixing with an @ (for <input type="file"> fields) $data['file'] = '@/home/user/world.jpg'; $handle = curl_init($url); curl_setopt($handle, CURLOPT_POST, true); curl_setopt($handle, CURLOPT_POSTFIELDS, $data); curl_exec($handle); curl_close($handle); We have two options here, CURLOPT_POST which turns HTTP POST on, and CURLOPT_POSTFIELDS which contains an array of our post data to submit. This can be used to submit data to POST <form>s. It is important to note that curl_setopt($handle, CURLOPT_POSTFIELDS, $data); takes the $data in two formats, and that this determines how the post data will be encoded. * *$data as an array(): The data will be sent as multipart/form-data, which is not always accepted by the server. $data = array('name' => 'Ross', 'php_master' => true); curl_setopt($handle, CURLOPT_POSTFIELDS, $data); *$data as a URL-encoded string: The data will be sent as application/x-www-form-urlencoded, which is the default encoding for submitted HTML form data. 
$data = array('name' => 'Ross', 'php_master' => true); curl_setopt($handle, CURLOPT_POSTFIELDS, http_build_query($data)); I hope this will help others save their time. See: * *curl_init *curl_setopt A: <?php function executeCurl($arrOptions) { $mixCH = curl_init(); foreach ($arrOptions as $strCurlOpt => $mixCurlOptValue) { curl_setopt($mixCH, $strCurlOpt, $mixCurlOptValue); } $mixResponse = curl_exec($mixCH); curl_close($mixCH); return $mixResponse; } // If any HTTP authentication is needed. $username = 'http-auth-username'; $password = 'http-auth-password'; $requestType = 'POST'; // This can be PUT or POST // This is a sample array. You can use $arrPostData = $_POST $arrPostData = array( 'key1' => 'value-1-for-k1y-1', 'key2' => 'value-2-for-key-2', 'key3' => array( 'key31' => 'value-for-key-3-1', 'key32' => array( 'key321' => 'value-for-key321' ) ), 'key4' => array( 'key' => 'value' ) ); // You can set your post data $postData = http_build_query($arrPostData); // Raw PHP array $postData = json_encode($arrPostData); // Only USE this when request JSON data. $mixResponse = executeCurl(array( CURLOPT_URL => 'http://whatever-your-request-url.com/xyz/yii', CURLOPT_RETURNTRANSFER => true, CURLOPT_HTTPGET => true, CURLOPT_VERBOSE => true, CURLOPT_AUTOREFERER => true, CURLOPT_CUSTOMREQUEST => $requestType, CURLOPT_POSTFIELDS => $postData, CURLOPT_HTTPHEADER => array( "X-HTTP-Method-Override: " . $requestType, 'Content-Type: application/json', // Only USE this when requesting JSON data ), // If HTTP authentication is required, use the below lines. CURLOPT_HTTPAUTH => CURLAUTH_BASIC, CURLOPT_USERPWD => $username. ':' . $password )); // $mixResponse contains your server response.
{ "language": "en", "url": "https://stackoverflow.com/questions/28395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "98" }
Q: How do I get the path where the user installed my Java application? I want to bring up a file dialog in Java that defaults to the application installation directory. What's the best way to get that information programmatically? A: System.getProperty("user.dir") gets the directory the Java VM was started from. A: System.getProperty("user.dir"); The above method gets the user's working directory when the application was launched. This is fine if the application is launched by a script or shortcut that ensures that this is the case. However, if the app is launched from somewhere else (entirely possible if the command line is used), then the return value will be wherever the user was when they launched the app. A more reliable method is to work out the application install directory using ClassLoaders.
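A hedged sketch of the ClassLoader-based approach the last answer alludes to; the class name and the fallback to user.dir are my assumptions, not part of any standard recipe:

```java
import java.io.File;
import java.net.URISyntaxException;
import java.security.CodeSource;

public class InstallDir {
    // Derive the directory containing this class's code source
    // (the JAR file, or the classes directory during development).
    public static String getInstallDir() {
        try {
            CodeSource src = InstallDir.class.getProtectionDomain().getCodeSource();
            if (src != null && src.getLocation() != null) {
                File location = new File(src.getLocation().toURI());
                // For a JAR the install dir is the JAR's parent directory;
                // for a classes directory it is the directory itself.
                return location.isFile() ? location.getParent() : location.getPath();
            }
        } catch (URISyntaxException e) {
            // Fall through to the less reliable default below.
        }
        return System.getProperty("user.dir");
    }

    public static void main(String[] args) {
        System.out.println(getInstallDir());
    }
}
```

Note that getCodeSource() can legitimately return null (e.g. for bootstrap classes), which is why the sketch keeps user.dir as a last resort.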
{ "language": "en", "url": "https://stackoverflow.com/questions/28428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Comparing two XML Schemas Are there any tools to effectively compare two XML schemas? I have seen some generic XML diff tools, but I was wondering if there is anything that knows more about schemas. A: I would look into DeltaXML. It seems to have the features you're looking for. They even have a guide on how to compare schemas.
{ "language": "en", "url": "https://stackoverflow.com/questions/28433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Optimizing for low bandwidth I am charged with designing a web application that displays very large geographical data. And one of the requirements is that it should be optimized so that PCs still on the dial-up connections common in the suburbs of my country could use it as well. Now I am permitted to use Flash and/or Silverlight if that will help with the limited development time and user experience. The heavy part of the geographical data is chunked into tiles and loaded like map tiles in Google Maps, but that means I need a lot of HTTP requests. Should I go with just javascript + HTML? Would I end up with a faster application with Flash/Silverlight? Since I can do some complex algorithms in those two technologies (like DeepZoom). Deploying a desktop app, though, is out of the question since we don't have that much maintenance funds. It just needs to be fast... really fast.. p.s. faster is in the sense of "download faster" A: Is something like Gears acceptable? This will let you store data locally to limit re-requests. I would also stay away from Flash and Silverlight and go straight to javascript/AJAX. jQuery is a ton-O-fun. A: I would suggest you look into Silverlight and DeepZoom A: I don't think you'll find Flash or Silverlight is going to help too much for this application. Either way you're going to be utilizing tiled images and the images are going to be the same size in both scenarios. Using Flash or Silverlight may allow you to add some neat animations to the application but anything you gain here will be additional overhead for your clients on dialup connections. I'd stick with plain Javascript/HTML. A: You may also want to look at asynchronously downloading your tiles via one of the Ajax libraries available. Let's say your user can view 9 tiles at a time and scroll/zoom. 
Download those 9 tiles they can see plus whatever is needed to handle the zoom for those tiles on the first load; then you'll need to play around with caching strategies for prefetching other information asynchronously. At one place I worked a rules engine was taking a bit too long to return a result so they opted to present the user with a "confirm this" screen. The few seconds it took the users to review and click next was more than enough time to return the results. It made the app look lightning fast to the user when in reality it took a bit longer. You have to remember, user perception of performance is just as important in some cases as the actual performance. A: I believe Microsoft's Seadragon is your answer. However, I am not sure if that is available to developers. It looks like some of it has found its way into Silverlight
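The asynchronous prefetching idea above can be sketched in plain JavaScript; the tile URL scheme and function names here are assumptions, not part of any particular mapping API:

```javascript
// Hypothetical sketch: compute the neighbouring tile URLs around the visible
// centre tile so they can be prefetched while the user reads the map.
function tileUrl(x, y, zoom) {
  // Assumed URL scheme - adapt to your tile server.
  return "/tiles/" + zoom + "/" + x + "/" + y + ".png";
}

function neighbourTiles(cx, cy, zoom, radius) {
  var urls = [];
  for (var x = cx - radius; x <= cx + radius; x++) {
    for (var y = cy - radius; y <= cy + radius; y++) {
      // Skip the centre tile - it is already on screen.
      if (x !== cx || y !== cy) {
        urls.push(tileUrl(x, y, zoom));
      }
    }
  }
  return urls;
}
```

In the browser, each returned URL could then be warmed into the cache with something like new Image().src = url once the nine visible tiles have finished loading.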
{ "language": "en", "url": "https://stackoverflow.com/questions/28441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: When do you use dependency injection? I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them. Where do you draw the line on what to interface out vs just adding a property to the class? A: Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer. A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class and break that static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately. A: Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code. DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architecture). The amount of time it would take to test your logic in code would almost be prohibitive at a lot of companies if you are testing components that DEPEND on a database. In most applications the database isn't going to change dynamically (although it could) but generally speaking it's almost always good practice to NOT bind your application to a particular external resource. 
The amount involve in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in it's methods). A: The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't. What you're really doing is moving that coupling from the compile time to the runtime, but still if class A needs some interface B to work, an instance of a class which implements interface B needs still to be provided. Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code. Uses that I've seen useful for an Inversion of Control pattern: * *A plugin architecture. So by making the right entry points you can define the contract for the service that must be provided. *Workflow-like architecture. Where you can connect several components dynamically connecting the output of a component to the input of another one. *Per-client application. Let's say you have various clients which pays for a set of "features" of your project. By using dependency injection you can easily provide just the core components and some "added" components which provide just the features the client have paid. *Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed. A: What do you mean by "just adding a property to a class?" My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation. EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run. A: I do it only when it helps with separation of concerns. 
Like maybe cross-project I would provide an interface for implementers in one of my library project and the implementing project would inject whatever specific implementation they want in. But that's about it... all the other cases it'd just make the system unnecessarily complex A: Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that I think it's more of a experience / flight time call. Basically if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's deps. A: The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces. There are some applications that are just throwaway but if there's a doubt I would go ahead and create the interfaces. After some practice it's not much of a burden. A: Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer? A: I use Castle Windsor/Microkernel, I have no experience with anything else but I like it a lot. As for how do you decide what to inject? 
So far the following rule of thumb has served me well: If the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in class, otherwise you probably want to have a dependency through the constructor. As for whether you should create an interface vs just making your methods and properties virtual I think you should go the interface route either if you either a) can see the class have some level of reusability in a different application (i.e. a logger) or b) if either because of the amount of constructor parameters or because there is a significant amount of logic in the constructor, the class is otherwise difficult to mock.
{ "language": "en", "url": "https://stackoverflow.com/questions/28464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: If, IIf() and If() I recently asked a question about IIf vs. If and found out that there is another function in VB called If which basically does the same thing as IIf but short-circuits. Does this If function perform better than the IIf function? Does the If statement trump the If and IIf functions? A: One very important distinction between IIf() and If() is that with Option Infer On the latter will implicitly cast the results to the same data type in certain cases, whereas IIf will return Object. Example: Dim val As Integer = -1 Dim iifVal As Object, ifVal As Object iifVal = IIf(val >= 0, val, Nothing) ifVal = If(val >= 0, val, Nothing) Output: iifVal has a value of Nothing and type of Object ifVal has a value of 0 and type of Integer, because it is implicitly converting Nothing to an Integer. 
{ "language": "en", "url": "https://stackoverflow.com/questions/28478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the purpose of the designer files in Visual Studio 2008 Web application projects? There is a conversion process that is needed when migrating a Visual Studio 2005 web site to a Visual Studio 2008 web application project. It looks like VS2008 is creating a .designer file for every .aspx when you right click on a file or the project itself in Solution Explorer and select 'Convert to Web Application.' What is the purpose of these designer files? And these won't exist on a release build of the web application, they are just intermediate files used during development, hopefully? A: They hold all the form designer stuff that used to go in the #Region " Web Form Designer Generated Code " section of the code. Instead of putting it in the .aspx.vb file where people might edit it (mistakenly or not), it's been moved to a separate file, so that you don't ever have to look at it. 
{ "language": "en", "url": "https://stackoverflow.com/questions/28481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Compatability between Windows Vista and Visual Studio 2008 I am wondering if anyone else is experiencing these same issues: My main dev machine is a Dell, running Vista Home Premium and Visual Studio 2008 - both fully patched / up-to-date. There are some quirks, such as the play/pause media controls on my keyboard not working while running Visual Studio 2008. These kinds of quirks are annoying, but not really problematic. A bigger issue is this one: In one of my solutions, I make use of a class called Utility. If I edit the class to add another field, no matter how many times I recompile/clean/manually delete the old .DLL files, the compiler tells me that there is no such field. If, however, I check the solution into SVN and then check it out on my laptop, which runs Windows XP SP3 with a fully patched Visual Studio 2008 - everything works fine. No idea why. Has anyone else experienced this, or other problems with this kind of configuration? And if so, do you have any suggestions for how to overcome them? A: VS2008 runs fine on my Vista. All service packs (both VS & Vista) are installed. I'm also using a MS keyboard: Laser Desktop 4000. A: have you tried doing things in elevated privileges mode? including reinstalling and all... A: I think its VS2008 I am having all sorts of wierd things happen with it but im running xp. Not to the degree that you say where it is messing with volume controls. My wierd things are in VS2008. For example I place a gridview on page 1 go to page 2 then back to page 1 and my gridview will have moved.. Its just buggy.. really buggy A: Have you done any performance profiling, vs is v naughty and puts instrumented dlls etc, in your VS IDE folder. You will then get references only to the instrumented version, not your newly updated DLLs (This keeps happening to me and I forget everytime to clear out the C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE folder)
{ "language": "en", "url": "https://stackoverflow.com/questions/28526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How would you handle errors when using jQuery.ajax()? When using jQuery's ajax method to submit form data, what is the best way to handle errors? This is an example of what a call might look like: $.ajax({ url: "userCreation.ashx", data: { u:userName, p:password, e:email }, type: "POST", beforeSend: function(){disableSubmitButton();}, complete: function(){enableSubmitButton();}, error: function(xhr, statusText, errorThrown){ // Work out what the error was and display the appropriate message }, success: function(data){ displayUserCreatedMessage(); refreshUserList(); } }); The request might fail for a number of reasons, such as duplicate user name, duplicate email address etc, and the ashx is written to throw an exception when this happens. My problem seems to be that by throwing an exception the ashx causes the statusText and errorThrown to be undefined. I can get to the XMLHttpRequest.responseText which contains the HTML that makes up the standard .net error page. I am finding the page title in the responseText and using the title to work out which error was thrown. Although I have a suspicion that this will fall apart when I enable custom error handling pages. Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx, then using this to decide what action to take? How do you handle these situations? A: Now I have a problem as to which answer to accept. Further thought on the problem brings me to the conclusion that I was incorrectly throwing exceptions. Duplicate user names, email addresses etc are expected issues during a sign up process and are therefore not exceptions, but simply errors. In which case I probably shouldn't be throwing exceptions, but returning error codes. Which leads me to think that irobinson's approach should be the one to take in this case, especially since the form is only a small part of the UI being displayed. 
I have now implemented this solution and I am returning xml containing a status and an optional message that is to be displayed. I can then use jQuery to parse it and take the appropriate action:

success: function(data){
    var created = $("result", data).attr("success");
    if (created == "OK"){
        resetNewUserForm();
        listUsers('');
    } else {
        var errorMessage = $("result", data).attr("message");
        $("#newUserErrorMessage").text(errorMessage).show();
    }
    enableNewUserForm();
}

However travis' answer is very detailed and would be perfect during debugging or if I wanted to display an exception message to the user. I am definitely not receiving JSON back, so it is probably down to one of those attributes that travis has listed, as I don't have them in my code. (I am going to accept irobinson's answer, but upvote travis' answer. It just feels strange to be accepting an answer that doesn't have the most votes.) A: For debugging, I usually just create an element (in the case below: <div id="error"></div>) on the page and write the XmlHttpRequest to it:

error: function (XMLHttpRequest, textStatus, errorThrown) {
    $("#error").html(XMLHttpRequest.status + "\n<hr />" + XMLHttpRequest.responseText);
}

Then you can see the types of errors that are occurring and capture them correctly:

if (XMLHttpRequest.status === 404) // display some page not found error
if (XMLHttpRequest.status === 500) // display some server error

In your ashx, can you throw a new exception (e.g "Invalid User" etc.) and then just parse that out of the XMLHttpRequest.responseText? For me when I get an error the XMLHttpRequest.responseText isn't the standard Asp.Net error page, it's a JSON object containing the error like this:

{
    "Message":"Index was out of range. Must be non-negative and less than the size of the collection.\r\n Parameter name: index",
    "StackTrace":" at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)\r\n at etc...",
    "ExceptionType":"System.ArgumentOutOfRangeException"
}

Edit: This could be because the function I'm calling is marked with these attributes:

<WebMethod()> _
<ScriptMethod()> _

A: Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx, then using this to decide what action to take? How do you handle these situations? Personally, if possible, I would prefer to handle this on the server side and work up a message to the user there. This works very well in a scenario where you only want to display a message to the user telling them what happened (validation message, essentially). However, if you want to perform an action based on what happened on the server, you may want to use a status code and write some javascript to perform various actions based on that status code.
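The accepted approach above returns an XML status payload instead of throwing. Purely as an illustration (the real handler is an ASP.NET .ashx, not Python, and the "FAIL" value here is a hypothetical counterpart to the "OK" the jQuery code checks for), here is a minimal sketch of building such a payload server-side:

```python
import xml.etree.ElementTree as ET

def build_result(success, message=""):
    """Build a <result success="..." message="..."/> payload like the one
    the accepted answer parses with $("result", data).attr("success").
    The "FAIL" token is an assumption -- the answer only shows "OK"."""
    elem = ET.Element("result")
    elem.set("success", "OK" if success else "FAIL")
    elem.set("message", message)
    return ET.tostring(elem, encoding="unicode")
```

The client-side code then only needs to branch on the success attribute, never on scraped error-page HTML.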
{ "language": "en", "url": "https://stackoverflow.com/questions/28529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Corporate-Friendly Open Source Licenses What open source licenses are more corporate-friendly, i.e., they can be used in commercial products without the need to open source the commercial product? A: The two most commonly used licenses that allow what you want are the BSD License and MIT License. (see also the full list of licenses considered Open Source by the OSI). A: I recommend the Apache License (specifically, version 2). It is not a “copy left” license and it addresses several matters that are important to established companies and their lawyers. “Copy left” is the philosophy of the free software foundation requiring anything incorporating the licensed opens source code to also be licensed as open source. That philosophy is regarded as poison by established companies that want to keep their products proprietary. Aside from not having “copy left” provisions, the Apache license specifically addresses the grant of rights from project contributors and it expressly addresses the fact that modern companies are typically made up for more than one legal entity (for example, a parent company and its subsidiaries). Most open source licenses don’t address these points. Whatever license you choose, if you want your code to be “corporate friendly,” in the sense that you want it to be incorporated into commercial, non-open source products, it is essential that you avoid GPL and other “copy left” type licenses. While it would be best to consult with your own lawyer before investing time or money in a project for which this is an important factor, a quick shorthand for licenses that are and are not “copy left” can be found on the Free Software Foundation’s website. They identify which licenses they don’t find meet their standards as “copy left.” The ones FSF rejects are most likely the ones that will be corporate friendly in this sense. 
(Although the question didn’t ask this, it is worth mentioning that, with very few exceptions, even GPL and other “copy left” type licenses are perfectly corporate friendly if they are only used internally by the commercial entities and not incorporated into their products.) A: As a possible alternative to the BSD license you can also use the Ms-PL license (Microsoft Public License). Pretty much the same but (arguably) better worded. Additionally, it's got “Microsoft” in its name, which screams “corporate-friendly” like nothing else does. ;-) A: I believe that 6 of the 9 licenses on the OSI's list of "Licenses that are popular and widely used or with strong communities" meet your criterion: Apache, BSD, MIT, Mozilla, CPL, and Eclipse. The Mozilla license and CPL (the Common Public License) have language concerning patents that might make them more attractive to corporations. See here for more information. A: Basically, only the GPL requires that the whole product is GPL, and LGPL implies that the parts specific to that library be open sourced. But, for both, the problem arises only when you distribute the application. For all the other open source licenses, the only common requirement is the publicity (i.e. show at some point to the user what open source component / library is used). After that you have the "no competing commercial product" licenses... All in all, the most acknowledged business-friendly licenses are IMHO the Apache License, the Artistic License and the Mozilla Public License. Furthermore, even if Creative Commons is not widely used for software development, some options are business friendly. Edit: forgot BSD (which is more a license-template than a license) and MIT mentioned by Daniel. It seems to me that their usage is fading away, but there is some license tropism to take into account according to the development language / open source sub-community.
A: The GNU Lesser General Public Licence is also corporate-friendly and quite often used in libraries. It allows for usage of a certain library but modifications to it should be made public. A: By "corporate" I tend to think of internal development, programs distributed only to people that are employed by the same company. In that sense, pretty much all free software licences are "corporate-friendly." However, in terms of distributing closed-source software that contains free software, the only big one (off the top of my head) that is excluded is the GPL. You could embed LGPL, BSD, MIT, Artistic-licensed code. The "price" might be having to give credit, but that would be way cheaper than actually writing and debugging the software. Things can get hazy when you consider licences that try to protect trademarks (Mozilla) or the compatibility of a broader range of software (Sun). Your constraints are not always only related to the distribution of the code. In summary, if you're unsure you should consult a lawyer. A: Ideally I looked for components licensed under the Apache Software License. After that LGPL, BSD and Artistic License are my next preferences. A: MIT, Apache and BSD tend to be the most corporate friendly. The least corporate friendly that I have run across are usually Q Public, GPL and Mozilla... A: Wikipedia also has a very useful list that compares all the free software licenses. If you have a green box on the right ("Release changes under a different license"), I think that's all you need.
{ "language": "en", "url": "https://stackoverflow.com/questions/28530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Java import/export dependencies I'm trying to find a way to list the (static) dependency requirements of a jar file, in terms of which symbols are required at run time. I can see that the methods exported by classes can be listed using "javap", but there doesn't seem to be an opposite facility to list the 'imports'. Is it possible to do this? This would be similar to the dumpbin utility in Windows development which can be used to list the exports and imports of a DLL. EDIT : Thanks for the responses; I checked out all of the suggestions; accepted DependencyFinder as it most closely meets what I was looking for. A: You could use the Outbound dependencies feature of DependencyFinder. You can do that entirely in the GUI, or in command line exporting XML. A: I think you can get that information using JDepend A: There's a tool called JarAnalyzer that will give you the dependencies between the jars in a directory. It'll also give you a list of dependencies that don't exist in the directory. A: If it's a public jar (as in, not yours) then it might be in the Maven Repository.
{ "language": "en", "url": "https://stackoverflow.com/questions/28538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is GCC broken when taking the address of an argument on ARM7TDMI? My C code snippet takes the address of an argument and stores it in a volatile memory location (preprocessed code):

void foo(unsigned int x)
{
    *(volatile unsigned int*)(0x4000000 + 0xd4) = (unsigned int)(&x);
}

int main()
{
    foo(1);
    while(1);
}

I used an SVN version of GCC for compiling this code. At the end of function foo I would expect to have the value 1 stored in the stack and, at 0x40000d4, an address pointing to that value. When I compile without optimizations using the flag -O0, I get the expected ARM7TDMI assembly output (commented for your convenience):

        .align  2
        .global foo
        .type   foo, %function
foo:
        @ Function supports interworking.
        @ args = 0, pretend = 0, frame = 8
        @ frame_needed = 0, uses_anonymous_args = 0
        @ link register save eliminated.
        sub     sp, sp, #8
        str     r0, [sp, #4]    @ 3. Store the argument on the stack
        mov     r3, #67108864
        add     r3, r3, #212
        add     r2, sp, #4      @ 4. Address of the stack variable
        str     r2, [r3, #0]    @ 5. Store the address at 0x40000d4
        add     sp, sp, #8
        bx      lr
        .size   foo, .-foo
        .align  2
        .global main
        .type   main, %function
main:
        @ Function supports interworking.
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, uses_anonymous_args = 0
        stmfd   sp!, {r4, lr}
        mov     r0, #1          @ 1. Pass the argument in register 0
        bl      foo             @ 2. Call function foo
.L4:
        b       .L4
        .size   main, .-main
        .ident  "GCC: (GNU) 4.4.0 20080820 (experimental)"

It clearly stores the argument first on the stack and from there stores it at 0x40000d4. When I compile with optimizations using -O1, I get something unexpected:

        .align  2
        .global foo
        .type   foo, %function
foo:
        @ Function supports interworking.
        @ args = 0, pretend = 0, frame = 8
        @ frame_needed = 0, uses_anonymous_args = 0
        @ link register save eliminated.
        sub     sp, sp, #8
        mov     r2, #67108864
        add     r3, sp, #4      @ 3. Address of *something* on the stack
        str     r3, [r2, #212]  @ 4. Store the address at 0x40000d4
        add     sp, sp, #8
        bx      lr
        .size   foo, .-foo
        .align  2
        .global main
        .type   main, %function
main:
        @ Function supports interworking.
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, uses_anonymous_args = 0
        stmfd   sp!, {r4, lr}
        mov     r0, #1          @ 1. Pass the argument in register 0
        bl      foo             @ 2. Call function foo
.L4:
        b       .L4
        .size   main, .-main
        .ident  "GCC: (GNU) 4.4.0 20080820 (experimental)"

This time the argument is never stored on the stack even though something from the stack is still stored at 0x40000d4. Is this just expected/undefined behaviour? Have I done something wrong or have I in fact found a Compiler Bug™? A: Once you return from foo(), x is gone, and any pointers to it are invalid. Subsequently using such a pointer results in what the C standard likes to call "undefined behavior," which means the compiler is absolutely allowed to assume you won't dereference it, or (if you insist on doing it anyway) need not produce code that does anything remotely like what you might expect. If you want the pointer to x to remain valid after foo() returns, you must not allocate x on foo's stack, period -- even if you know that in principle, nothing has any reason to clobber it -- because that just isn't allowed in C, no matter how often it happens to do what you expect. The simplest solution might be to make x a local variable in main() (or in whatever other function has a sufficiently long-lived scope) and to pass the address in to foo. You could also make x a global variable, or allocate it on the heap using malloc(), or set aside memory for it in some more exotic way. You can even try to figure out where the top of the stack is in some (hopefully) more portable way and explicitly store your data in some part of the stack, if you're sure you won't be needing it for anything else and you're convinced that's what you really need to do. But the method you've been using to do that isn't sufficiently reliable, as you've discovered.
A: So you're putting the address of a local stack variable into the DMA controller to be used, and then you're returning from the function where the stack variable is available? While this might work with your main() example (since you aren't writing on the stack again) it won't work in a 'real' program later - that value will be overwritten before or while DMA is accessing it when another function is called and the stack is used again. You need to have a structure, or a global variable you can use to store this value while the DMA accesses it - otherwise it's just going to get clobbered! -Adam A: I actually don't think the compiler is wrong, although this is an odd case. From a code analysis point-of-view, it sees you storing the address of a variable, but that address is never dereferenced and you don't jump outside of the function to external code that could use that address you stored. When you exit the function, the address of the stack is now considered bogus, since it's the address of a variable that no longer exists. The "volatile" keyword really doesn't do much in C, especially with regards to multiple threads or hardware. It just tells the compiler that it has to do the access. However, since there are no users of the value of x according to the data flow, there's no reason to store the "1" on the stack. It probably would work if you wrote

void foo(unsigned int x)
{
    volatile int y = x;
    *(volatile unsigned int*)(0x4000000 + 0xd4) = (unsigned int)(&y);
}

although it still may be illegal code, since the address of y is considered invalid as soon as foo returns, but the nature of the DMA system would be to reference that location independently of the program flow. A: One thing to note is that according to the standard, casts are r-values. GCC used to allow it, but in recent versions has become a bit of a standards stickler.
I don't know if it will make a difference, but you should try this:

void foo(unsigned int x)
{
    volatile unsigned int* ptr = (unsigned int*)(0x4000000 + 0xd4);
    *ptr = (unsigned int)(&x);
}

int main()
{
    foo(1);
    while(1);
}

Also, I doubt you intended it, but you are storing the address of the function local x (which is a copy of the int you passed). You likely want to make foo take an "unsigned int *" and pass the address of what you really want to store. So I feel a more proper solution would be this:

void foo(unsigned int *x)
{
    volatile unsigned int* ptr = (unsigned int*)(0x4000000 + 0xd4);
    *ptr = (unsigned int)(x);
}

int main()
{
    int x = 1;
    foo(&x);
    while(1);
}

EDIT: finally, if your code breaks with optimizations it is usually a sign that your code is doing something wrong. A: I'm darned if I can find a reference at the moment, but I'm 99% sure that you are always supposed to be able to take the address of an argument, and it's up to the compiler to finesse the details of calling conventions, register usage, etc. Indeed, I would have thought it to be such a common requirement that it's hard to see there can be a general problem in this - I wonder if it's something about the volatile pointers which have upset the optimisation. Personally, I might try this to see if it compiled better:

void foo(unsigned int x)
{
    volatile unsigned int* pArg = &x;
    *(volatile unsigned int*)(0x4000000 + 0xd4) = (unsigned int)pArg;
}

A: Tomi Kyöstilä wrote development for the Game Boy Advance. I was reading about its DMA system and I experimented with it by creating single-color tile bitmaps. The idea was to have the indexed color be passed as an argument to a function which would use DMA to fill a tile with that color. The source address for the DMA transfer is stored at 0x40000d4. That's a perfectly valid thing for you to do, and I can see how the (unexpected) code you got with the -O1 optimization wouldn't work.
I see the (expected) code you got with the -O0 optimization does what you expect -- it puts the value of the color you want on the stack, and a pointer to that color in the DMA transfer register. However, even the (expected) code you got with the -O0 optimization wouldn't work, either. By the time the DMA hardware gets around to taking that pointer and using it to read the desired color, that value on the stack has (probably) long been overwritten by other subroutines or interrupt handlers or both. And so both the expected and the unexpected code result in the same thing -- the DMA is (probably) going to fetch the wrong color. I think you really intended to store the color value in some location where it stays safe until the DMA is finished reading it. So a global variable, or a function-local static variable such as

// Warning: Three Star Programmer at work
// Warning: untested code.
void foo(unsigned int x)
{
    static volatile unsigned int color;  // "static" so it's not on the stack
    color = x;  // assigned, not initialized -- a static cannot be initialized from an argument
    volatile unsigned int** dma_register = (volatile unsigned int**)(0x4000000 + 0xd4);
    *dma_register = &color;
}

int main()
{
    foo(1);
    while(1);
}

Does that work for you? You see I use "volatile" twice, because I want to force two values to be written in that particular order. A: sparkes wrote If you think you have found a bug in GCC the mailing lists will be glad you dropped by but generally they find some hole in your knowledge is to blame and mock mercilessly :( I figured I'd try my luck here first before going to the GCC mailing list to show my incompetence :) Adam Davis wrote Out of curiosity, what are you trying to accomplish? I was trying out development for the Game Boy Advance. I was reading about its DMA system and I experimented with it by creating single-color tile bitmaps. The idea was to have the indexed color be passed as an argument to a function which would use DMA to fill a tile with that color. The source address for the DMA transfer is stored at 0x40000d4.
Will Dean wrote Personally, I might try this to see if it compiled better:

void foo(unsigned int x)
{
    volatile unsigned int* pArg = &x;
    *(volatile unsigned int*)(0x4000000 + 0xd4) = (unsigned int)pArg;
}

With -O0 that works as well and with -O1 that is optimized to the exact same -O1 assembly I've posted in my question. A: Not an answer, but just some more info for you. We are running 3.4.5 20051201 (Red Hat 3.4.5-2) at my day job. We have also noticed some of our code (which I can't post here) stops working when we add the -O1 flag. Our solution was to remove the flag for now :( A: In general I would say that it is a valid optimization. If you want to look deeper into it, you could compile with -da. This generates a .c.Number.Passname file, where you can have a look at the rtl (intermediate representation within the gcc). There you can see which pass makes which optimization (and maybe disable just the one you don't want to have). A: I think Even T. has the answer. You passed in a variable; you cannot take the address of that variable inside the function, you can take the address of a copy of that variable though, btw that variable is typically a register so it doesn't have an address. Once you leave that function it's all gone, the calling function loses it. If you need the address in the function you have to pass by reference not pass by value, send the address. It looks to me that the bug is in your code, not gcc. BTW, using *(volatile blah *)0xabcd or any other method to try to program registers is going to bite you eventually. gcc and most other compilers have this uncanny way of knowing exactly the worst time to strike.
Say the day you change from this

*(volatile unsigned int *)0x12345 = someuintvariable;

to

*(volatile unsigned int *)0x12345 = 0x12;

A good compiler will realize that you are only storing 8 bits and there is no reason to waste a 32 bit store for that, depending on the architecture you specified, or the default architecture for that compiler that day, so it is within its rights to optimize that to an strb instead of an str. After having been burned by gcc and others with this dozens of times I have resorted to forcing the issue:

.globl PUT32
PUT32:
    str r1,[r0]
    bx lr

PUT32(0x12345,0x12);

Costs a few extra clock cycles but my code continues to work yesterday, today, and will work tomorrow with any optimization flag. Not having to re-visit old code and sleeping peacefully through the night is worth a few extra clock cycles here and there. Also if your code breaks when you compile for release instead of compile for debug, that also means it is most likely a bug in your code. A: Is this just expected/undefined behaviour? Have I done something wrong or have I in fact found a Compiler Bug™? No bug, just the defined behaviour that optimisation options can produce odd code which might not work :) EDIT: If you think you have found a bug in GCC the mailing lists will be glad you dropped by but generally they find some hole in your knowledge is to blame and mock mercilessly :( In this case I think it's probably the -O options attempting shortcuts that break your code that need working around.
{ "language": "en", "url": "https://stackoverflow.com/questions/28542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Most Pythonic way equivalent for: while ((x = next()) != END) What's the best Python idiom for this C construct?

while ((x = next()) != END) {
    ....
}

I don't have the ability to recode next(). update: and the answer from seems to be:

for x in iter(next, END):
    ....

A: It depends a bit what you want to do. To match your example as far as possible, I would make next a generator and iterate over it:

def next():
    for num in range(10):
        yield num

for x in next():
    print x

A: Short answer: there's no way to do inline variable assignment in a while loop in Python. Meaning that I cannot say:

while x=next():
    // do something here!

Since that's not possible, there are a number of "idiomatically correct" ways of doing this:

while 1:
    x = next()
    if x != END:
        // Blah
    else:
        break

Obviously, this is kind of ugly. You can also use one of the "iterator" approaches listed above, but, again, that may not be ideal. Finally, you can use the "pita pocket" approach that I actually just found while googling:

class Pita( object ):
    __slots__ = ('pocket',)
    marker = object()
    def __init__(self, v=marker):
        if v is not self.marker:
            self.pocket = v
    def __call__(self, v=marker):
        if v is not self.marker:
            self.pocket = v
        return self.pocket

Now you can do:

p = Pita()
while p( next() ) != END:
    // do stuff with p.pocket!

Thanks for this question; learning about the __call__ idiom was really cool! :) EDIT: I'd like to give credit where credit is due. The 'pita pocket' idiom was found here A: Maybe it's not terribly idiomatic, but I'd be inclined to go with

x = next()
while x != END:
    do_something_with_x
    x = next()

... but that's because I find that sort of thing easy to read A: @Mark Harrison's answer:

for x in iter(next_, END):
    ....

Here's an excerpt from Python's documentation: iter(o[, sentinel]) Return an iterator object. ...(snip)... If the second argument, sentinel, is given, then o must be a callable object.
The iterator created in this case will call o with no arguments for each call to its next() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned. A: What are you trying to do here? If you're iterating over a list, you can use for e in L where e is the element and L is the list. If you're filtering a list, you can use list comprehensions (i.e. [ e for e in L if e % 2 == 0 ] to get all the even numbers in a list). A: If you need to do this more than once, the pythonic way would use an iterator

for x in iternext():
    do_something_with_x

where iternext would be defined using something like (explicit is better than implicit!):

def iternext():
    x = next()
    while x != END:
        yield x
        x = next()

A: Can you provide more information about what you're trying to accomplish? It's not clear to me why you can't just say

for x in everything():
    ...

and have the everything function return everything, instead of writing a next function to just return one thing at a time. Generators can even do this quite efficiently.
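The iter(callable, sentinel) idiom from the accepted answer can be exercised end to end with a stand-in next (hypothetical here -- the asker's real next() is fixed and cannot be changed):

```python
# Hypothetical stand-in: fake a C-style next() from a list so the idiom can run.
END = object()  # sentinel; the asker's real END may be any comparable value

def make_next(values, sentinel):
    """Return a no-argument callable that yields successive values,
    then the sentinel forever -- mimicking the C-style next()."""
    it = iter(values)
    return lambda: next(it, sentinel)

next_ = make_next([1, 2, 3], END)

# The two-argument iter() calls next_() repeatedly and stops the moment
# the returned value equals END -- no assignment-in-condition needed.
collected = [x for x in iter(next_, END)]  # -> [1, 2, 3]
```

Note that iter() consumes the END value itself; after the loop, further calls to next_() keep returning the sentinel.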
{ "language": "en", "url": "https://stackoverflow.com/questions/28559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is it possible to use nHibernate with Paradox database? Is it possible to configure nHibernate to connect to Paradox database (*.db files)? A: Yes, sort of. There is no support included in the trunk, you need to write your own dialect. Or you can port the Paradox dialect created for Hibernate.
{ "language": "en", "url": "https://stackoverflow.com/questions/28560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Globalization architecture I need to store products for an e-commerce solution in a database. Each product should have descriptive information, such as name, description etc. I need any product to be localized to x number of languages. What I've done so far, is to make any column that should be localized an nvarchar(MAX) and then I store an XML string like this:

<cultures>
    <culture code="en-us">Super fast laptop</culture>
    <culture code="da-dk">Super hurtig bærbar</culture>
</cultures>

And when I load it from the database, into my business logic objects, I parse the XML string to a Dictionary<string, string> where the key is the culture/language code. So when I want to display the name of a product, I do this:

lblName.Text = product.Name["en-us"];

Does anyone have a better solution? A: You should store the current language somewhere (in a singleton, for instance) and in the product.Name property use the language setting to get the correct string. This way you only have to write the language specific code once for each field rather than thinking about languages everywhere the field is used. For example, assuming your singleton is defined in the Localizer class that stores an enum corresponding to the current language:

public class Product
{
    private idType id;
    public string Name
    {
        get { return Localizer.Instance.GetLocalString(id, "Name"); }
    }
}

Where GetLocalString looks something like:

public string GetLocalString(idType objectId, string fieldName)
{
    switch (_currentLanguage)
    {
        case Language.English:
            // db access code to retrieve your string, may need to include the table
            // the object is in (e.g. "Products" "Orders" etc.)
            db.GetValue(objectId, fieldName, "en-us");
            break;
    }
}

A: Rob Conery's MVC Storefront webcast series has a video on this issue (he gets to the database around 5:30). He stores a list of cultures, and then has a Product table for non-localized data and a ProductCultureDetail table for localized text.
A: resource files A: This is basically the approach we took with Microsoft Commerce Server 2002. Yeah indexed views will help your performance.
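The asker's parse step -- turning the cultures XML into a dictionary keyed by culture code -- is done in C# in the question; purely as an illustration, the same mapping can be sketched in Python:

```python
import xml.etree.ElementTree as ET

def parse_cultures(xml_text):
    """Parse the <cultures> blob into {culture code: localized text},
    mirroring the Dictionary<string, string> from the question."""
    root = ET.fromstring(xml_text)
    return {c.get("code"): (c.text or "") for c in root.findall("culture")}

sample = """<cultures>
    <culture code="en-us">Super fast laptop</culture>
    <culture code="da-dk">Super hurtig bærbar</culture>
</cultures>"""

names = parse_cultures(sample)
# names["en-us"] -> "Super fast laptop"
```

Whatever the storage scheme (XML column or a separate culture-detail table), the lookup at display time reduces to indexing this map by the current culture code.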
{ "language": "en", "url": "https://stackoverflow.com/questions/28577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I merge my files when the folder structure has changed using Borland StarTeam? I'm in the process of refactoring some code which includes moving folders around, and I would like to regularly merge to keep things current. What is the best way to merge after I've moved folders around in my working copy? A: You can move the files around in StarTeam also. Then merge after that. Whatever you do, make sure you don't delete the files and re-add in StarTeam. You'll lose the file history if you do that. A: Moving the files in StarTeam and then updating your project/solution is the cleaner way to go. I would also suggest creating a view label prior to doing anything so you have a definite "roll back" point if things go wrong :) A: Folders in StarTeam can be renamed to match the filesystem moves by right clicking the folder and going to Properties. If you created new nesting levels, you will have to create those folders normally. If you moved files between existing folders, you can move those in StarTeam by dragging them from the file window on the right to the new folder on the left. Files can be renamed to match a new name in StarTeam the same way folders are, right click the file and select Properties. As a fellow StarTeam user, my condolences go out to you. A: In an ideal world, you could branch the view and merge back when you are happy with your revisions to avoid breaking the build. However, as you are using StarTeam, I would suggest making small incremental changes to the folder structure and accept that you will probably have a few breakages along the way. It will likely be less time consuming and more intuitive than trying to use the view-merge interface. A: The problem is I'm worried about breaking the build in the meantime while I'm moving folders in StarTeam. I suppose the only way to avoid that is to be ready to upload updated project files as soon as I move things around in StarTeam and do it as quickly as possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/28578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you set up an OpenID provider (server) in Ubuntu? I want to log onto Stack Overflow using OpenID, but I thought I'd set up my own OpenID provider, just because it's harder :) How do you do this in Ubuntu? Edit: Replacing 'server' with the correct term OpenID provider (Identity provider would also be correct according to wikipedia). A: I've actually done this (set up my own server using phpMyID). It's very easy and works quite well. One thing that annoys me to no end is the use of HTML redirects instead of HTTP. I changed that manually, based on some information gotten in the phpMyID forum. However, I have switched to myOpenId in the meantime. Rolling your own provider is fun and games but it just isn't secure! There are two issues:

* More generally, you have to act on faith. phpMyID is great but it's developed in someone's spare time. There could be many undetected security holes in it – and there have been some, in the past. While this of course applies to all security-related software, I believe the problem is potentially more severe with software developed in spare time, especially since the code is far from perfect in my humble opinion.
* Secondly, OpenID is highly susceptible to screen scraping and mock interfaces. It's just too easy for an attacker to emulate the phpMyID interface to obtain your credentials for another site.

myOpenId offers two very important solutions to the problem.

* The first is its use of a cookie-stored picture that is embedded in the login page. If anyone screen-scrapes the myOpenId login page, this picture will be missing and the fake can easily be identified.
* Secondly, myOpenId supports sign-in using strongly signed certificates that can be installed in the web browser.

I still have phpMyID set up as an alternative provider using Yadis but I wouldn't use it as a login on sites that I don't trust. In any case, read Sam Ruby's tutorial! A: I personally used phpMyID just for StackOverflow.
It's a simple two-file PHP script to put somewhere on a subdomain. Of course, it's not as easy as installing a .deb, but since OpenID relies completely on HTTP, I'm not sure it's advisable to install a self-contained server... A: Take a look over at the Run your own identity server page. Community-ID looks to be the most promising so far. A: You might also look into setting up your own site as a delegate for another OpenID provider. That way, you can use your own custom URL, but not worry about security and maintenance as mentioned already. However, it's not very difficult, so it may not meet your criteria :) As an example, you would add this snippet of HTML to the page at your desired OpenID URL if you are using ClaimID as the OpenID provider: <link rel="openid.server" href="http://openid.claimid.com/server" /> <link rel="openid.delegate" href="http://openid.claimid.com/USERNAME" /> So when OpenID clients access your URL, they "redirect" themselves to the actual provider. A: I totally understand where you're coming from with this question. I already had an OpenID at www.myopenid.com but it feels a bit weird relying on a 3rd party for such an important login (a.k.a. my permanent "home" on the internet). Luckily, it is easy to move to using your own server as an OpenID server - in fact, it can be done with just two files with phpMyID. * *Download "phpMyID-0.9.zip" from http://siege.org/projects/phpMyID/ *Move it to your server and unzip it to view the README file, which explains everything. *The zip has two files: MyID.config.php, MyID.php. I created a directory called <mydocumentroot>/OpenID and renamed MyID.config.php to index.php. This means my OpenID URL will be very cool: http://<mywebsite>/OpenID *Decide on a username and password and then create a hash of them using: echo -n '<myUserName>:phpMyID:<myPassword>' | openssl md5 *Open index.php in a text editor and add the username and password hash in the placeholder. Save it.
*Test by browsing to http://<mywebsite>/OpenID/ *Test that the ID is working using: http://www.openidenabled.com/resources/openid-test/checkup/ Reference info: http://www.wynia.org/wordpress/2007/01/15/setting-up-an-openid-with-php/ , http://siege.org/projects/phpMyID/ , https://blog.stackoverflow.com/2009/01/using-your-own-url-as-your-openid/ A: The above answers all seem to contain dead links. This seems to be a possible solution which is still working: https://simpleid.org/
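For anyone scripting the setup steps above, the hash in the openssl step is an MD5 of username:realm:password with the realm fixed to the string phpMyID (the same construction used by HTTP Digest authentication, which is what phpMyID appears to rely on). Here is a small sketch for producing it reliably; the credentials are made-up placeholders:

```shell
# Hypothetical credentials; replace with your own before use.
user='alice'
pass='s3cret'

# printf '%s' avoids the trailing newline that a bare echo can sneak
# into the hash; awk keeps only the hex digest from openssl's output.
hash=$(printf '%s' "$user:phpMyID:$pass" | openssl md5 | awk '{print $NF}')

echo "$hash"
```

Paste the resulting 32-character hex string into the placeholder in index.php.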
{ "language": "en", "url": "https://stackoverflow.com/questions/28588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Why is it bad practice to make multiple database connections in one request? A discussion about Singletons in PHP has me thinking about this issue more and more. Most people instruct that you shouldn't make a bunch of DB connections in one request, and I'm just curious as to what your reasoning is. My first thought is the expense to your script of making that many requests to the DB, but then I counter myself with the question: wouldn't multiple connections make concurrent querying more efficient? How about some answers (with evidence, folks) from some people in the know? A: It is the cost of setting up the connection, transferring the data and then tearing it down. It will eat up your performance. Evidence is harder to come by but consider the following... Let's say it takes x microseconds to make a connection. Now you want to make several requests and get data back and forth. Let's say that the difference in transport time is negligible between one connection and many (just for the sake of argument). Now let's say it takes y microseconds to close the connection. Opening one connection will take x+y microseconds of overhead. Opening many will take n * (x+y). That will delay your execution. A: Setting up a DB connection is usually quite heavy. A lot of things are going on backstage (DNS resolution/TCP connection/Handshake/Authentication/Actual Query). I once had an issue with some weird DNS configuration that made every TCP connection take a few seconds before going up. My login procedure (because of a complex architecture) took 3 different DB connections to complete. With that issue, it was taking forever to log in. We then refactored the code to make it go through one connection only. A: Database connections are a limited resource. Some DBs have a very low connection limit, and wasting connections is a major problem. By consuming many connections, you may be blocking others from using the database.
Additionally, throwing a ton of extra connections at the DB doesn't help anything unless there are resources on the DB server sitting idle. If you've got 8 cores and only one is being used to satisfy a query, then sure, making another connection might help. More likely, though, you are already using all the available cores. You're also likely hitting the same hard drive for every DB request, and adding additional lock contention. If your DB has anything resembling high utilization, adding extra connections won't help. That'd be like spawning extra threads in an application with the blind hope that the extra concurrency will make processing faster. It might in some certain circumstances, but in other cases it'll just slow you down as you thrash the hard drive, waste time task-switching, and introduce synchronization overhead. A: We access Informix from .NET and use multiple connections. Unless we're starting a transaction on each connection, it often is handled in the connection pool. I know that's very brand-specific, but most(?) database systems' client access will pool connections to the best of their ability. As an aside, we did have a problem with connection count because of cross-database connections. Informix supports synonyms, so we synonymed the common offenders and the multiple connections were handled server-side, saving a lot in transfer time, connection creation overhead, and (the real crux of our situation) license fees. A: I would assume that it is because your requests are not being sent asynchronously; since your requests are done iteratively on the server, blocking each time, you have to pay the overhead of creating a connection each time, when you only have to do it once... In Flex, all web service calls are automatically called asynchronously, so it is common to see multiple connections, or queued up requests on the same connection.
Asynchronous requests mitigate the connection cost through faster request/response time. Because you cannot easily achieve this in PHP without some threading, the performance hit is greater than simply reusing the same connection. That's my 2 cents...
{ "language": "en", "url": "https://stackoverflow.com/questions/28590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I support SSL Client Certificate authentication? I want to do what myopenid does -- once you've logged in, you can click a button that generates you an SSL certificate; the browser then downloads this certificate and stores it. When you later go back to yourid.myopenid.com, your browser can use its stored certificate for authentication so you don't ever need a password. So my question is what is required to get this working? How do I generate certificates? How do I validate them once they're presented back to me? My stack is Rails on Apache using Passenger, but I'm not too particular. A: These are usually referred to as client side certificates. I've not actually used it, but a modified version of restful-authentication can be found here that looks like what you're after. I found this via Dr. Nic's post A: Depends on the server, but the simplest solution I know of, using Apache: FakeBasicAuth "When this option is enabled, the Subject Distinguished Name (DN) of the Client X509 Certificate is translated into a HTTP Basic Authorization username. This means that the standard Apache authentication methods can be used for access control. The user name is just the Subject of the Client's X509 Certificate (can be determined by running OpenSSL's openssl x509 command: openssl x509 -noout -subject -in certificate.crt). Note that no password is obtained from the user..." Not sure about Rails, but the usual REMOTE_USER environment variable should be accessible in some way. A: If you want to generate certificates, you need to cause the client to generate a key pair, and send you at least the public key. You can do this in Firefox via a Javascript call, it's crypto.generateCRMFRequest. I'm guessing there are browser-specific methods available in other browsers too. But first, you need to figure out how to issue a certificate once you get a public key.
You could script something on the server with OpenSSL, but it has built-in support for CSRs, not the CRMF format Firefox will send you. So you'd need to write some code to convert the CRMF to a CSR, which will require some sort of DER processing capability. I'm just scratching the surface here; operating a CA, even for a toy application, is not trivial. SSO solutions like OpenId and PKI solutions do overlap, and there is an elegance in PKI. But the devil is in the details, and there are good reasons why this approach has been around a long time but has only taken off in government and military applications. If you are interested in pursuing this, follow up with some questions specific to the platform you would want to develop your CA service on. A: You can generate a certificate in the client's browser using browser-specific code. See this question. You could also generate SSL client certs server-side using OpenSSL in Ruby (see this q). (This will work in any browser without browser-specific code, but your server will have generated the client's private key, which is not ideal for crypto purists.) Whichever method you use to generate them, you will then need to configure your webserver to require the client certificates. See the Apache docs for an example. A: I've been working on a solution to this problem. I wanted to do the same thing, and I know lots of other website owners want this feature, with or without a third party provider. I created the necessary server setup and a Firefox plugin to handle the certificate-based authentication. Go to mypassfree.com to grab the free Firefox plugin. Email me (link on that page) for the server setup, as I haven't packaged it yet with a nice installer. Server setup is Apache2 + OpenSSL + Perl (but you could rewrite the Perl scripts in any language) Jonathan
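To make the server-side OpenSSL route a bit more concrete, here is a hedged sketch of a toy CA issuing a browser-importable client certificate. Every filename, subject name and password below is a made-up placeholder, and a real deployment would need proper key protection, revocation and sensible validity periods:

```shell
# 1. Create a toy CA: a private key and a self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=Toy Demo CA" -keyout ca.key -out ca.crt

# 2. Create the client's key pair and a certificate signing request (CSR).
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=alice" -keyout client.key -out client.csr

# 3. Sign the client's CSR with the CA key to issue the client certificate.
openssl x509 -req -days 365 -in client.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt

# 4. Bundle key + certificate into a PKCS#12 file the browser can import.
openssl pkcs12 -export -in client.crt -inkey client.key \
    -passout pass:changeit -out client.p12
```

On the Apache side, the matching configuration is roughly SSLVerifyClient require together with SSLCACertificateFile pointing at ca.crt; see the Apache docs mentioned above for the details.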
{ "language": "en", "url": "https://stackoverflow.com/questions/28599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: C on Visual Studio I'm trying to learn C. As a C# developer, my IDE is Visual Studio. I've heard this is a good environment for C/C++ development. However, it seems no matter what little thing I try to do, intuition fails me. Can someone give good resources for how to either: * *learn the ins and outs of C in Visual Studio *recommend a better C IDE + compiler Edit: See also: https://stackoverflow.com/questions/951516/a-good-c-ide A: Simple and sweet: Console applications (basic C programs using printf and such) are easily and cheaply done with the Tiny C Compiler - a no frills, no GUI, complete C compiler. http://bellard.org/tcc/ However, C development is relatively simple on Visual Studio as well. The following instructions will set Visual C++ up as a good C compiler, and it will produce console applications at first, and you can move up into more complex Windows apps as you go. * *Get the Visual Studio C++ edition (express is fine) *Start a new project - disable pre-compiled headers (maybe the wizard will let you do this, maybe you'll have to change the compiler settings once inside the project) *Delete everything inside the project. *Create a new "example.c" file with the hello world example *Compile and away you go. Alternately, get a Linux virtual machine, or Cygwin. But as you already have Visual Studio, you might as well stick with what you know. As an aside, this isn't Atwood learning C finally, is it? No ALTs! ;-D -Adam A: Well, you can use Visual Studio just fine; take a look here: http://www.daniweb.com/forums/thread16256.html Go to the View menu and select Solution Explorer or CTRL+ALT+L. Then select the project that you are developing and right click on it. Then select Properties from the submenu. Then select Configuration Properties from the tree structure. Under that, select C/C++, then select Advanced.
Now in the right-side pane, change the property Compile As from Compile as C++ Code (/TP) to Compile as C Code (/TC). Finally, change your file extensions to .c. Now you have configured Visual Studio to compile C programs. And you can use NetBeans too; it could even be more user-friendly than Visual Studio. Download it, you won't regret it, I promise. A: Bloodshed Dev-C++ is the best Windows C/C++ IDE IMO: http://www.bloodshed.net/ It uses the GNU compiler set and is free as in beer. EDIT: the download page for the IDE is here: http://www.bloodshed.net/dev/devcpp.html A: As already said, you should check out the VS.net C++ edition, but if you'd like to try something else, Eclipse has a C++ edition. You can get more info from http://eclipse.org or check out the distro at http://www.easyeclipse.org/site/distributions/cplusplus.html A: The problem with learning C within Visual Studio is that you are compiling C using the Visual Studio C++ compiler. You might want to try learning C using the GNU GCC compiler from within the Cygwin environment in Windows. This is a legitimate response, I posted an IDE that uses the GNU compilers, so why has he been down modded? This is the type of thing that will make me not use SO, why down mod someone just because they are recommending a different compiler, and IMHO, a better one than Microsoft's? Get real, people, and @Antonio Haley I gave you +1 A: The problem with learning C within Visual Studio is that you are compiling C using the Visual Studio C++ compiler. You might want to try learning C using the GNU GCC compiler from within the Cygwin environment in Windows. A: Answering the purely subjective question "recommend me a better C IDE and compiler": I find MinGW32 and Code::Blocks (now with a combined installer) very useful on Windows, but YMMV as you are obviously used to the MS IDE and are just struggling with C.
May I suggest you concentrate on console applications to get a feel for the language first, before you attempt to tie it together with a Windows UI, which in my experience is the hardest bit of Windows development. A: Some people say that a smaller IDE is better for learning. Take a look at Code::Blocks. It's generally true that beginning C in an IDE is hard because not many books explain enough to control the IDE. Perhaps starting in a console and a basic text editor with syntax highlighting would be better – at least under Linux. Since Windows' console is far from great, I'd not recommend using it. /EDIT: Dev-C++ used to be the best freely available IDE for Windows. However, its development was discontinued years ago and the most recent version unfortunately is full of bugs. A: http://xoax.net/comp/cpp/console/Lesson0.php Any use? A: There's a very good reason to learn C and C++. The reason is that there's a lot of C and C++ code out there that is performing very real and important tasks. Someone who considers themselves a programmer and a learner (doubtful that you can separate the two) can learn a lot from these lines of code. You can learn a lot from each language by studying the other, but if you really want to grok C it's a lot easier to separate yourself from anything C++ for a while. Visual C++ is great, but GCC is a great way to thrust yourself into vanilla ANSI C without having to mentally sidestep any C++. @mmattax thanks! A: C in Visual Studio is fine, just use the command line compiler that is included in the Pro edition. Yes, it's the C++ compiler, but it treats all files ending in .c as C. You can even force it to treat ALL files as C with a switch. The VS documentation has entries on it, just search the index for Visual C. A: Visual Studio is one of the best IDEs for C/C++. I don't think it is complicated and hard to use - if you have questions about it, ask them.
Some other compilers/IDEs are fine too, but if you already have Visual Studio and have used it - why not stick with it? A: For plain C, I suggest Pelles C. Generates optimized code and supports C99 constructs. Features: * *Support for 32-bit Windows (X86), *64-bit Windows (X64), and Windows Mobile (ARM). Support for the C99 standard. *Integrated source code editor with call tips and symbol browsing. Integrated source-level debugger. Project management. *Inline assembler for X86 and ARM. *Integrated resource editor. Integrated bitmap, icon and cursor editor. Integrated animated cursor and video editor. *Integrated hex-dump editor. *Support for custom controls in the dialog editor. Support for custom project wizards. http://www.smorgasbordet.com/pellesc/ A: When I used Visual Studio 5.0, it could compile C code as long as the header files and libraries were there for the compiler to find. In fact, most C++ compilers, like G++, will compile C code just fine, though I'm not sure how well. If you are targeting a platform, then you can change the header files and libraries within your IDE and compiler. Visual Studio has a great debugger that no other compiler I have seen can compete with. I have been using gcc darwin10 4.2.1 and find the debugger is basically just the one you can get free with any Linux flavor. I recommend you learn both on a plain vanilla gcc compiler and also try Visual Studio, which costs money. The express edition does not allow the use of threading and several other things that I forgot about. Visual Studio 5.0 should be OK to use, and the debugger is much more human-friendly than the command-line version called GDB. Try DDD on Linux, which is similar to Xcode's debugger. Although C++ and C are different, you can compile both together. But you should understand each one's flaws and good points. C code is faster, but C++ makes it much easier to write and manage larger codebases. C++ is object oriented, but C is procedural, while they are both imperative languages.
I would suggest learning Objective-C since you can use both C++ and C libraries, using the features you like from all three languages!!! A: Visual Studio or Express do consider .c files to be C code, but the compiler will keep giving warnings and irritating suggestions you do not require in the debugger. This gives an indication that Visual C++, as the name suggests, is optimized for C++ development for the Windows operating system, which was originally written in plain, pure C.
{ "language": "en", "url": "https://stackoverflow.com/questions/28605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Why do my exception stack traces always point to the last method line? I have a problem with my Visual Studio installation. When I get an exception, I always have incorrect line numbers in its stack trace. They always point to the last line of each method in my codebase. At the same time it's OK when I'm tracing programs with the debugger. What's happened with the PDBs? No, I'm not re-throwing the exception at each method. In each line of the stack trace I have the last row of the corresponding method, while the exception was thrown by a statement in the middle. A: Sounds like you're running your app in Release mode. Release mode has difficulties with line numbers for exceptions and whatnot. Compile your app in Debug mode (no need to attach the debugger) and see if it sorts itself out.
{ "language": "en", "url": "https://stackoverflow.com/questions/28607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is DateTime.Now the best way to measure a function's performance? I need to find a bottleneck and need to measure time as accurately as possible. Is the following code snippet the best way to measure the performance? DateTime startTime = DateTime.Now; // Some execution process DateTime endTime = DateTime.Now; TimeSpan totalTimeTaken = endTime.Subtract(startTime); A: If you want something quick and dirty I would suggest using Stopwatch instead for a greater degree of precision. Stopwatch sw = new Stopwatch(); sw.Start(); // Do Work sw.Stop(); Console.WriteLine("Elapsed time: {0}", sw.Elapsed.TotalMilliseconds); Alternatively, if you need something a little more sophisticated you should probably consider using a 3rd party profiler such as ANTS. A: These are all great ways to measure time, but that is only a very indirect way to find bottleneck(s). The most direct way to find a bottleneck in a thread is to get it running, and while it is doing whatever makes you wait, halt it with a pause or break key. Do this several times. If your bottleneck takes X% of time, X% is the probability that you will catch it in the act on each snapshot. Here's a more complete explanation of how and why it works A: @Sean Chambers FYI, the .NET Timer class is not for diagnostics, it generates events at a preset interval, like this (from MSDN): System.Timers.Timer aTimer; public static void Main() { // Create a timer with a ten second interval. aTimer = new System.Timers.Timer(10000); // Hook up the Elapsed event for the timer. aTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent); // Set the Interval to 2 seconds (2000 milliseconds). aTimer.Interval = 2000; aTimer.Enabled = true; Console.WriteLine("Press the Enter key to exit the program."); Console.ReadLine(); } // Specify what you want to happen when the Elapsed event is // raised.
private static void OnTimedEvent(object source, ElapsedEventArgs e) { Console.WriteLine("The Elapsed event was raised at {0}", e.SignalTime); } So this really doesn't help you know how long something took, just that a certain amount of time has passed. The timer is also exposed as a control in System.Windows.Forms... you can find it in your designer tool box in VS05/VS08 A: This is the correct way: using System; using System.Diagnostics; class Program { public static void Main() { Stopwatch stopWatch = Stopwatch.StartNew(); // some other code stopWatch.Stop(); // this is not correct for full timer resolution Console.WriteLine("{0} ms", stopWatch.ElapsedMilliseconds); // Correct way to get accurate high precision timing Console.WriteLine("{0} ms", stopWatch.Elapsed.TotalMilliseconds); } } For more information go through Use Stopwatch instead of DateTime for getting an accurate performance counter. A: No, it's not. Use the Stopwatch (in System.Diagnostics) Stopwatch sw = Stopwatch.StartNew(); PerformWork(); sw.Stop(); Console.WriteLine("Time taken: {0}ms", sw.Elapsed.TotalMilliseconds); Stopwatch automatically checks for the existence of high-precision timers. It is worth mentioning that DateTime.Now often is quite a bit slower than DateTime.UtcNow due to the work that has to be done with timezones, DST and such. DateTime.UtcNow typically has a resolution of 15 ms. See John Chapman's blog post about DateTime.Now precision for a great summary. Interesting trivia: The stopwatch falls back on DateTime.UtcNow if your hardware doesn't support a high frequency counter. You can check to see if Stopwatch uses hardware to achieve high precision by looking at the static field Stopwatch.IsHighResolution. A: This article says that first of all you need to compare three alternatives: Stopwatch, DateTime.Now AND DateTime.UtcNow. It also shows that in some cases (when the performance counter doesn't exist) Stopwatch is using DateTime.UtcNow + some extra processing.
Because of that, it's obvious that in that case DateTime.UtcNow is the best option (because the others use it plus some extra processing). However, as it turns out, the counter almost always exists - see Explanation about high-resolution performance counter and its existence related to .NET Stopwatch?. Here is a performance graph. Notice how low the performance cost of UtcNow is compared to the alternatives: The X axis is sample data size, and the Y axis is the relative time of the example. One thing Stopwatch is better at is that it provides higher resolution time measurements. Another is its more OO nature. However, creating an OO wrapper around UtcNow can't be hard. A: Visual Studio Team System has some features that may help with this problem. Essentially you can write unit tests and mix them in different scenarios to run against your software as part of a stress or load test. This may help to identify areas of code that impact your application's performance the most. Microsoft's Patterns and Practices group has some guidance in Visual Studio Team System Performance Testing Guidance. A: I just found a post in Vance Morrison's blog about a CodeTimer class he wrote that makes using StopWatch easier and does some neat stuff on the side. A: The way I measure time within my programs is with the Stopwatch class, as shown here. Stopwatch sw = new Stopwatch(); sw.Start(); // Critical lines of code long elapsedMs = sw.ElapsedMilliseconds; A: I've done very little of this sort of performance checking (I tend to just think "this is slow, make it faster") so I have pretty much always gone with this. A Google search does reveal a lot of resources/articles for performance checking. Many mention using pinvoke to get performance information. A lot of the materials I study only really mention using perfmon. Edit: Seen the talks of Stopwatch.. Nice!
I have learned something :) This looks like a good article A: This is not professional enough: Stopwatch sw = Stopwatch.StartNew(); PerformWork(); sw.Stop(); Console.WriteLine("Time taken: {0}ms", sw.Elapsed.TotalMilliseconds); A more reliable version is: PerformWork(); int repeat = 1000; Stopwatch sw = Stopwatch.StartNew(); for (int i = 0; i < repeat; i++) { PerformWork(); } sw.Stop(); Console.WriteLine("Time taken: {0}ms", sw.Elapsed.TotalMilliseconds / repeat); In my real code, I will add a GC.Collect call to change the managed heap to a known state, and add a Sleep call so that different intervals of code can be easily separated in an ETW profile. A: It's useful to push your benchmarking code into a utility class/method. The Stopwatch class does not need to be Disposed or Stopped on error. So, the simplest code to time some action is public partial class With { public static long Benchmark(Action action) { var stopwatch = Stopwatch.StartNew(); action(); stopwatch.Stop(); return stopwatch.ElapsedMilliseconds; } } Sample calling code public void Execute(Action action) { var time = With.Benchmark(action); log.DebugFormat("Did action in {0} ms.", time); } Here is the extension method version public static class Extensions { public static long Benchmark(this Action action) { return With.Benchmark(action); } } And sample calling code public void Execute(Action action) { var time = action.Benchmark(); log.DebugFormat("Did action in {0} ms.", time); } A: The stopwatch functionality would be better (higher precision). I'd also recommend just downloading one of the popular profilers, though (DotTrace and ANTS are the ones I've used the most... the free trial for DotTrace is fully functional and doesn't nag like some of the others). A: Use the System.Diagnostics.Stopwatch class. Stopwatch sw = new Stopwatch(); sw.Start(); // Do some code. sw.Stop(); // sw.ElapsedMilliseconds = the time your "do some code" took. A: Ditto Stopwatch, it is way better.
Regarding performance measuring, you should also check whether your "// Some Execution Process" is a very short process. Also bear in mind that the first run of your "// Some Execution Process" might be way slower than subsequent runs. I typically test a method by running it 1000 times or 1000000 times in a loop and I get much more accurate data than running it once. A: Since I do not care too much about precision, I ended up comparing them. I am capturing lots of packets on the network and I want to record the time when I receive each packet. Here is the code that tests 5 million iterations int iterations = 5000000; // Test using datetime.now { var date = DateTime.UtcNow.AddHours(DateTime.UtcNow.Second); var now = DateTime.UtcNow; for (int i = 0; i < iterations; i++) { if (date == DateTime.Now) Console.WriteLine("it is!"); } Console.WriteLine($"Done executing {iterations} iterations using datetime.now. It took {(DateTime.UtcNow - now).TotalSeconds} seconds"); } // Test using datetime.utcnow { var date = DateTime.UtcNow.AddHours(DateTime.UtcNow.Second); var now = DateTime.UtcNow; for (int i = 0; i < iterations; i++) { if (date == DateTime.UtcNow) Console.WriteLine("it is!"); } Console.WriteLine($"Done executing {iterations} iterations using datetime.utcnow. It took {(DateTime.UtcNow - now).TotalSeconds} seconds"); } // Test using stopwatch { Stopwatch sw = new Stopwatch(); sw.Start(); var now = DateTime.UtcNow; for (int i = 0; i < iterations; i++) { if (sw.ElapsedTicks == DateTime.Now.Ticks) Console.WriteLine("it is!"); } Console.WriteLine($"Done executing {iterations} iterations using stopwatch. It took {(DateTime.UtcNow - now).TotalSeconds} seconds"); } The output is: Done executing 5000000 iterations using datetime.now. It took 0.8685502 seconds Done executing 5000000 iterations using datetime.utcnow. It took 0.1074324 seconds Done executing 5000000 iterations using stopwatch.
It took 0.9625021 seconds So in conclusion, DateTime.UtcNow is the fastest if you do not care too much about precision. This also supports the answer https://stackoverflow.com/a/6986472/637142 from this question.
{ "language": "en", "url": "https://stackoverflow.com/questions/28637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "493" }
Q: Finding controls that use a certain interface in ASP.NET Having a heckuva time with this one, though I feel I'm missing something obvious. I have a control that inherits from System.Web.UI.WebControls.Button, and then implements an interface that I have set up. So think... public class Button : System.Web.UI.WebControls.Button, IMyButtonInterface { ... } In the codebehind of a page, I'd like to find all instances of this button from the ASPX. Because I don't really know what the type is going to be, just the interface it implements, that's all I have to go on when looping through the control tree. Thing is, I've never had to determine if an object uses an interface versus just testing its type. How can I loop through the control tree and yank anything that implements IMyButtonInterface in a clean way (LINQ would be fine)? Again, I know it's something obvious, but I've just now started using interfaces heavily and I can't seem to focus my Google results enough to figure it out :) Edit: GetType() returns the actual class, but doesn't return the interface, so I can't test on that (e.g., it'd return "MyNamespace.Button" instead of "IMyButtonInterface"). In trying to use "as" or "is" in a recursive function, the type parameter doesn't even get recognized within the function! It's rather bizarre. So if(ctrl.GetType() == typeToFind) //ok if(ctrl is typeToFind) //typeToFind isn't recognized! eh? Definitely scratching my head over this one.
The reason this doesn't work if(ctrl is typeToFind) Is because the type of the variable typeToFind is actually System.RuntimeType, not the type you've set its value to. Example, if you set a string's value to "foo", its type is still string not "foo". I hope that makes sense. It's very easy to get confused when working with types. I'm chronically confused when working with them. The most import thing to note about longhorn213's answer is that you have to use recursion or you may miss some of the controls on the page. Although we have a working solution here, I too would love to see if there is a more succinct way to do this with LINQ. A: You can just search on the Interface. This also uses recursion if the control has child controls, i.e. the button is in a panel. private List<Control> FindControlsByType(ControlCollection controls, Type typeToFind) { List<Control> foundList = new List<Control>(); foreach (Control ctrl in this.Page.Controls) { if (ctrl.GetType() == typeToFind) { // Do whatever with interface foundList.Add(ctrl); } // Check if the Control has Child Controls and use Recursion // to keep checking them if (ctrl.HasControls()) { // Call Function to List<Control> childList = FindControlsByType(ctrl.Controls, typeToFind); foundList.AddRange(childList); } } return foundList; } // Pass it this way FindControlsByType(Page.Controls, typeof(IYourInterface)); A: I'd make the following changes to Longhorn213's example to clean this up a bit: private List<T> FindControlsByType<T>(ControlCollection controls ) { List<T> foundList = new List<T>(); foreach (Control ctrl in this.Page.Controls) { if (ctrl as T != null ) { // Do whatever with interface foundList.Add(ctrl as T); } // Check if the Control has Child Controls and use Recursion // to keep checking them if (ctrl.HasControls()) { // Call Function to List<T> childList = FindControlsByType<T>( ctrl.Controls ); foundList.AddRange( childList ); } } return foundList; } // Pass it this way 
FindControlsByType<IYourInterface>( Page.Controls ); This way you get back a list of objects of the desired type that don't require another cast to use. I also made the required change to the "as" operator that the others pointed out. A: Interfaces are close enough to types that it should feel about the same. I'd use the as operator. foreach (Control c in this.Page.Controls) { IMyButtonInterface myButton = c as IMyButtonInterface; if (myButton != null) { // do something } } You can also test using the is operator, depending on your need. if (c is IMyButtonInterface) { ... } A: Would the "is" operator work? if (myControl is ISomeInterface) { // do something } A: If you're going to do some work on it if it is of that type, then TryCast is what I'd use. Dim c As IInterface = TryCast(obj, IInterface) If c IsNot Nothing Then 'do work End If A: you can always just use the as cast: IMyButtonInterface c = ctrl as IMyButtonInterface; if (c != null) { // c is an IMyButtonInterface }
{ "language": "en", "url": "https://stackoverflow.com/questions/28642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Elastic tabstop editors and plugins What Windows code editors allow the use of elastic tabstops, either natively or through a plugin? I know about a gedit plugin, but it only works on Linux. A: Code Browser may be the first for Windows. I would love to see this feature as a plugin for other editors as well. A: I did quite a bit of googling trying to find this answer. There are plenty of people asking for it: * *http://developers.slashdot.org/comments.pl?sid=414610&cid=21996944 *http://www.arguingwithmyself.com/archives/75-the-biggest-feature-your-editor-is-missing *http://intype.info/blog/screencast-parser-in-editor/#comment-221 *http://codewords.wordpress.com/2006/10/16/eclipses-achilles-heel/ just to name a few... so I don't think one exists yet, sorry :( A: Code Browser supports elastic tabstops, but it appears to be the only thing for Windows that currently supports it. Unfortunately, it has an unusual UI which may render it unsuitable for multi-person projects, and may even make it difficult for you to use even if no other editors are involved. According to the elastic tabstops website, he's working on plugins for Eclipse and Visual Studio 2010 (though the Eclipse plugin is stalled pending a bugfix), and jedit should support elastic tabstops in an upcoming version. Finally, though this probably isn't an option, you could try running an X server (such as Cygwin/X or Xming) on your Windows computer and ssh into a Linux client (either a virtual machine or another computer) to run Gedit. This approach has many problems though: you need to keep your files on a separate computer (perhaps using Dropbox to keep them in sync), X over SSH is notoriously slow, and you need either another computer or a virtual machine. A: XMLQuire is an XML editor developed for Windows to showcase virtual formatting. 
This concept goes a step further than elastic tabstops: indentation is simply a function of the position of the preceding line-feed character and the nesting level and context assessed by the parser. It's the XML parser that determines the nesting level and therefore the required indentation; there's no reformat key or tab key to press, the XML formatting just reflows as you edit, drag and drop, etc. This means that XML is always properly indented, but without leading tabs or spaces. The concept should also work for more conventional code (except for languages like F# that exploit whitespace), but this has not yet been tried out. Note that, unlike elastic tabstops, virtual formatting only works from the left margin and only uses the parser context. The parser context is more than just about nesting level though: factors such as mixed content, node type, and the length of the parent element name and attribute name all come into the equation. This allows alignment of attributes and attribute values that occur on new lines also (as shown). Word-wrapped text naturally just fits to the indentation scheme. If further text formatting is required then space characters are added by the user in the conventional way. As with elastic tabstops there's a potential issue when virtually formatted text is opened in a more conventional editor. However, because no characters have been added for XML formatting (it was all virtual), conventional editors can simply apply conventional formatting according to the settings for that editor, using tabs or spaces. A: Here's an elastic tabstop plugin for Visual Studio 2010 by ferveo (Ramunas Geciauskas): http://visualstudiogallery.msdn.microsoft.com/ccff2b55-201f-4263-aea5-3e66024d6c0e A: Another option is jedit, which has already added support for elastic tabstops. It is available on Windows, Linux, OS X, and Unix. A: The problem is that only a few toolkits/platforms have text widgets that offer the ability to set non-uniform tabstops on different lines. 
To my knowledge, those toolkits/platforms are Java Swing (used by the demo on the elastic tabstops page), GTK (used by Gedit and the Gedit plugin), and apparently the new version of Visual Studio (VS 2010). Expect to (eventually) see more developments on all of those platforms. A: Textadept has an elastic tabstop plugin. Atom also has a plugin.
{ "language": "en", "url": "https://stackoverflow.com/questions/28652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Debugging Web Service with SOAP Packet I have a web service that I created in C# and a test harness that was provided by my client. Unfortunately my web service doesn't seem to be parsing the objects created by the test harness. I believe the problem lies with serializing the soap packet. Using TCPTrace I was able to get the soap packet passed to the web service but only on a remote machine so I can't debug it there. Is there a way of calling my local webservice with the soap packet generated rather than my current test harness where I manually create objects and call the web service through a web reference? [edit] The machine that I got the soap packet from was on a VM, so I can't link it to my machine. I suppose I'm looking for a tool that I can paste the soap packet into and have it in turn call my web service A: A somewhat manual process would be to use the Poster add-in for Firefox. There is also a Java utility called SoapUI that has some discovery-based automated templates that you can then modify and run against your service. A: By default, .Net will not allow you to connect a packet analyzer like TCPTrace or Fiddler (which I prefer) to localhost or 127.0.0.1 connections (for reasons that I forget now...). The best way would be to reference your web services via a full IP address or FQDN where possible. That will allow you to trace the calls in the tool of your choice. A: Same as palehorse: use soapUI, or go directly to the specific component for that feature, TCPMon. A: Just did this the other day with TCPTrace on the local machine. I mapped the remote host in the hosts file to 127.0.0.1. Ran the local web server on 8080, TcpTrace on 80 pointing to 127.0.0.1:8080. Probably your issue is trying to run both at port 80, which won't work.
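If no ready-made tool is at hand, replaying the captured packet is also scriptable. The sketch below uses only Python's standard library; the endpoint URL, SOAPAction value, and envelope body are placeholders standing in for whatever TCPTrace captured, not real values from the question.

```python
import urllib.request

def build_soap_request(url, envelope, soap_action):
    """Wrap a captured SOAP envelope in an HTTP POST request.

    SOAP 1.1 expects a text/xml body plus a SOAPAction header;
    take both values verbatim from the captured packet.
    """
    return urllib.request.Request(
        url,
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": soap_action,
        },
        method="POST",
    )

# Paste the packet captured with TCPTrace here, verbatim.
captured_envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><!-- captured body goes here --></soap:Body>
</soap:Envelope>"""

request = build_soap_request(
    "http://localhost/MyService/Service.asmx",  # hypothetical local endpoint
    captured_envelope,
    '"http://tempuri.org/Insert"',              # hypothetical action
)
# To actually send it (attach the debugger to the service first):
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode("utf-8"))
```

This lets you hit the local service with byte-for-byte the same payload the remote harness produced, while the debugger is attached.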
{ "language": "en", "url": "https://stackoverflow.com/questions/28654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best/a very good meta-data reader library? Right now, I'm particularly interested in reading the data from MP3 files (ID3 tags?), but the more it can do (eg EXIF from images?) the better without compromising the ID3 tag reading abilities. I'm interested in making a script that goes through my media (right now, my music files) and makes sure the file name and directory path correspond to the file's metadata and then create a log of mismatched files so I can check to see which is accurate and make the proper changes. I'm thinking Ruby or Python (see a related question specifically for Python) would be best for this, but I'm open to using any language really (and would actually probably prefer an application language like C, C++, Java, C# in case this project goes off). A: There is a great post on using PowerShell and TagLibSharp on Joel "Jaykul" Bennet's site. You could use TagLibSharp to read the metadata with any .NET-based language, but PowerShell is quite appropriate for what you are trying to do. A: Use exiftool (it supports ID3 too). It is written in Perl, but can also be used from the command line. It has compiled Windows and Mac versions. It is light-years ahead of any other metadata tool, supporting almost all known audio, video and image files, supports writing (not just reading), and knows about all the custom/extended tags used by software (such as Photoshop) and hardware (many camera manufacturers). A: @Thomas Owens PowerShell is now part of the Common Engineering Criteria (as of Microsoft's 2009 Product Line) and starting with Server 2008 is included as a feature. It stands as much of a chance of being installed as Python or Ruby. You also mentioned that you were willing to go to C#, which could use TagLibSharp. Or you could use IronPython... A: @Thomas Owens TagLibSharp is a nice library to use. I always lean to PowerShell first, one to promote the language, and two because it is spreading fast in the Microsoft domain. 
I have nothing against using other languages, I just lean towards what I know and like. :) Good luck with your project. A: Further to Anon's answer - exiftool is very powerful and supports a huge range of file types, not just images, but video, audio and numerous document formats. A Ruby interface for exiftool is available in the form of the mini_exiftool gem see http://miniexiftool.rubyforge.org/
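Whichever tag library ends up doing the reading, the mismatch-logging part of the question is the same comparison loop. A minimal sketch, with two stated assumptions: the media tree follows an Artist/Album/Title.mp3 layout, and `read_tags` is a hypothetical stand-in for a real backend (mutagen, TagLibSharp via IronPython, or a call out to exiftool).

```python
import os

def expected_path(tags):
    """Build the path a file *should* have from its tags,
    assuming an Artist/Album/Title.mp3 layout."""
    return os.path.join(tags["artist"], tags["album"], tags["title"] + ".mp3")

def find_mismatches(files, read_tags):
    """Return (actual, expected) pairs for files whose location disagrees
    with their metadata. `read_tags` is injected so any tag-reading
    backend (mutagen, exiftool, ...) can be plugged in later."""
    mismatches = []
    for path in files:
        expected = expected_path(read_tags(path))
        if os.path.normpath(path) != os.path.normpath(expected):
            mismatches.append((path, expected))
    return mismatches

# Stubbed demo in place of a real tag reader:
fake_tags = {
    "Miles Davis/Kind of Blue/So What.mp3":
        {"artist": "Miles Davis", "album": "Kind of Blue", "title": "So What"},
    "Unsorted/track01.mp3":
        {"artist": "Miles Davis", "album": "Kind of Blue", "title": "Blue in Green"},
}
for path, should_be in find_mismatches(fake_tags, fake_tags.__getitem__):
    print(f"MISMATCH: {path} -> {should_be}")
```

Only the logging of candidates is automated here; deciding whether the tags or the path are the accurate side stays a manual step, as the question intends.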
{ "language": "en", "url": "https://stackoverflow.com/questions/28664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best way to extract data from a FileMaker Pro database in a script? My job would be easier, or at least less tedious if I could come up with an automated way (preferably in a Python script) to extract useful information from a FileMaker Pro database. I am working on Linux machine and the FileMaker database is on the same LAN running on an OS X machine. I can log into the webby interface from my machine. I'm quite handy with SQL, and if somebody could point me to some FileMaker plug-in that could give me SQL access to the data within FileMaker, I would be pleased as punch. Everything I've found only goes the other way: Having FileMaker get data from SQL sources. Not useful. It's not my first choice, but I'd use Perl instead of Python if there was a Perl-y solution at hand. Note: XML/XSLT services (as suggested by some folks) are only available on FM Server, not FM Pro. Otherwise, that would probably be the best solution. ODBC is turning out to be extremely difficult to even get working. There is absolutely zero feedback from FM when you set it up so you have to dig through /var/log/system.log and parse obscure error messages. Conclusion: I got it working by running a python script locally on the machine that queries the FM database through the ODBC connections. The script is actually a TCPServer that accepts socket connections from other systems on the LAN, runs the queries, and returns the data through the socket connection. I had to do this to bypass the fact that FM Pro only accepts ODBC connections locally (FM server is required for external connections). A: It has been a really long time since I did anything with FileMaker Pro, but I know that it does have capabilities for an ODBC (and JDBC) connection to be made to it (however, I don't know how, or if, that translates to the linux/perl/python world though). 
This article shows how to share/expose your FileMaker data via ODBC & JDBC: Sharing FileMaker Pro data via ODBC or JDBC From there, if you're able to create an ODBC/JDBC connection you could query out data as needed. A: You'll need the FileMaker Pro installation CD to get the drivers. This document details the process for FMP 9 - it is similar for versions 7.x and 8.x as well. Versions 6.x and earlier are completely different and I wouldn't bother trying (xDBC support in those previous versions is "minimal" at best). FMP 9 supports SQL-92 standard syntax (mostly). Note that rather than querying tables directly you query using the "table occurrence" name which serves as a table alias of sorts. If the data tables are stored in multiple files it is possible to create a single FMP file with table occurrences/aliases pointing to those data tables. There's an "undocumented feature" where such a file must have a table defined in it as well and that table "related" to any other table on the relationships graph (doesn't matter which one) for ODBC access to work. Otherwise your queries will always return no results. The PDF document details all of the limitations of using the xDBC interface FMP provides. Performance of simple queries is reasonably fast, ymmv. I have found the performance of queries specifying the "LIKE" operator to be less than stellar. FMP also has an XML/XSLT interface that you can use to query FMP data over an HTTP connection. It also provides a PHP class for accessing and using FMP data in web applications. A: If your leaning is to Python, you may be interested in checking out the Python Wrapper for Filemaker. It provides two way access to the Filemaker data via Filemaker's built-in XML services. You can find some quite thorough information on this at: http://code.google.com/p/pyfilemaker/
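The relay described in the conclusion can be sketched with the standard library's socketserver. This is only an outline of that approach: the query function is stubbed out here, the DSN name and port are made up, and a real version on the FileMaker machine would replace `run_query` with a pyodbc call against the local FileMaker ODBC source.

```python
import socketserver

def run_query(sql):
    """Placeholder for the ODBC call. On the FileMaker machine this would
    be something along the lines of (DSN name is hypothetical):
        import pyodbc
        conn = pyodbc.connect("DSN=filemaker")
        rows = conn.cursor().execute(sql).fetchall()
    Here we just echo, so the relay itself can be exercised anywhere.
    """
    return "RESULT:" + sql

class QueryHandler(socketserver.StreamRequestHandler):
    """One line in = one SQL statement; one line out = the result.
    A newline-delimited protocol keeps the LAN-side clients trivial."""
    def handle(self):
        sql = self.rfile.readline().decode("utf-8").strip()
        self.wfile.write((run_query(sql) + "\n").encode("utf-8"))

def serve(host="0.0.0.0", port=9100):
    # Port 9100 is an arbitrary choice; pick any free port on the FM host.
    with socketserver.TCPServer((host, port), QueryHandler) as server:
        server.serve_forever()
```

Because FM Pro only accepts ODBC connections locally, the script runs on the FileMaker machine and other hosts on the LAN talk to it over the socket, which is exactly the workaround the conclusion describes.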
{ "language": "en", "url": "https://stackoverflow.com/questions/28668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to avoid OutOfMemoryError when using Bytebuffers and NIO? I'm using ByteBuffers and FileChannels to write binary data to a file. When doing that for big files or successively for multiple files, I get an OutOfMemoryError exception. I've read elsewhere that using Bytebuffers with NIO is broken and should be avoided. Does any of you already faced this kind of problem and found a solution to efficiently save large amounts of binary data in a file in java? Is the jvm option -XX:MaxDirectMemorySize the way to go? A: I would say don't create a huge ByteBuffer that contains ALL of the data at once. Create a much smaller ByteBuffer, fill it with data, then write this data to the FileChannel. Then reset the ByteBuffer and continue until all the data is written. A: Check out Java's Mapped Byte Buffers, also known as 'direct buffers'. Basically, this mechanism uses the OS's virtual memory paging system to 'map' your buffer directly to disk. The OS will manage moving the bytes to/from disk and memory auto-magically, very quickly, and you won't have to worry about changing your virtual machine options. This will also allow you to take advantage of NIO's improved performance over traditional java stream-based i/o, without any weird hacks. The only two catches that I can think of are: * *On 32-bit system, you are limited to just under 4GB total for all mapped byte buffers. (That is actually a limit for my application, and I now run on 64-bit architectures.) *Implementation is JVM specific and not a requirement. I use Sun's JVM and there are no problems, but YMMV. Kirk Pepperdine (a somewhat famous Java performance guru) is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips A: If you access files in a random fashion (read here, skip, write there, move back) then you have a problem ;-) But if you only write big files, you should seriously consider using streams. 
java.io.FileOutputStream can be used directly to write file byte after byte or wrapped in any other stream (i.e. DataOutputStream, ObjectOutputStream) for convenience of writing floats, ints, Strings or even serializeable objects. Similar classes exist for reading files. Streams offer you convenience of manipulating arbitrarily large files in (almost) arbitrarily small memory. They are preferred way of accessing file system in vast majority of cases. A: The previous two responses seem pretty reasonable. As for whether the command line switch will work, it depends how quickly your memory usage hits the limit. If you don't have enough ram and virtual memory available to at least triple the memory available, then you will need to use one of the alternate suggestions given. A: Using the transferFrom method should help with this, assuming you write to the channel incrementally and not all at once as previous answers also point out. A: This can depend on the particular JDK vendor and version. There is a bug in GC in some Sun JVMs. Shortages of direct memory will not trigger a GC in the main heap, but the direct memory is pinned down by garbage direct ByteBuffers in the main heap. If the main heap is mostly empty they many not be collected for a long time. This can burn you even if you aren't using direct buffers on your own, because the JVM may be creating direct buffers on your behalf. For instance, writing a non-direct ByteBuffer to a SocketChannel creates a direct buffer under the covers to use for the actual I/O operation. The workaround is to use a small number of direct buffers yourself, and keep them around for reuse.
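The small-buffer advice in the first answer is just a streaming loop: fill a modest buffer, write it out, reset, repeat until done. The same pattern is sketched below in Python for brevity; a Java version would do the equivalent with one reused ByteBuffer, calling channel.write() and buffer.clear() on each pass.

```python
import io

CHUNK_SIZE = 64 * 1024  # one modest buffer, reused for every pass

def write_in_chunks(source, dest, chunk_size=CHUNK_SIZE):
    """Stream `source` into `dest` without ever holding the whole payload
    in memory: the analogue of reusing one small ByteBuffer instead of
    allocating a single giant one (which is what exhausts the heap)."""
    total = 0
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            return total
        dest.write(chunk)
        total += len(chunk)

# Demo against in-memory streams; real use would pass open binary files.
payload = b"x" * (256 * 1024 + 123)
src, dst = io.BytesIO(payload), io.BytesIO()
written = write_in_chunks(src, dst)
```

Peak memory stays at one chunk regardless of file size, which is the whole point of the fix.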
{ "language": "en", "url": "https://stackoverflow.com/questions/28675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Process.StartTime Access Denied My code needs to determine how long a particular process has been running. But it continues to fail with an access denied error message on the Process.StartTime request. This is a process running with a User's credentials (ie, not a high-privilege process). There's clearly a security setting or a policy setting, or something that I need to twiddle with to fix this, as I can't believe the StartTime property is in the Framework just so that it can fail 100% of the time. A Google search indicated that I could resolve this by adding the user whose credentials the querying code is running under to the "Performance Log Users" group. However, no such user group exists on this machine. A: I've read something similar to what you said in the past, Lars. Unfortunately, I'm somewhat restricted with what I can do with the machine in question (in other words, I can't go creating user groups willy-nilly: it's a server, not just some random PC). Thanks for the answers, Will and Lars. Unfortunately, they didn't solve my problem. Ultimate solution to this is to use WMI: using System.Management; String queryString = "select CreationDate from Win32_Process where ProcessId='" + ProcessId + "'"; SelectQuery query = new SelectQuery(queryString); ManagementScope scope = new System.Management.ManagementScope(@"\\.\root\CIMV2"); ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query); ManagementObjectCollection processes = searcher.Get(); //... snip ... 
logic to figure out which of the processes in the collection is the right one goes here DateTime startTime = ManagementDateTimeConverter.ToDateTime(processes[0]["CreationDate"].ToString()); TimeSpan uptime = DateTime.Now.Subtract(startTime); Parts of this were scraped from Code Project: http://www.codeproject.com/KB/system/win32processusingwmi.aspx And "Hey, Scripting Guy!": http://www.microsoft.com/technet/scriptcenter/resources/qanda/jul05/hey0720.mspx A: The Process class in .NET 1.1 uses the Performance Counters to get the information. Either they are disabled or the user does not have administrative rights. Making sure the Performance Counters are enabled and the user is an administrator should make your code work. Actually the "Performance Counter Users Group" should be enough. The group doesn't exist by default, so you should create it yourself. The Process class in .NET 2.0 does not depend on the Performance Counters. See http://weblogs.asp.net/nunitaddin/archive/2004/11/21/267559.aspx A: The underlying code needs to be able to call OpenProcess, for which you may require SeDebugPrivilege. Is the process you're doing the StartTime request on running as a different user to your own process? A: OK, sorry that didn't work... I am no expert on ASP.NET impersonation; I tend to use app pools, which I don't think you can do on W2K. Have you tried writing a tiny little test app which does the same query, and then running that as various users? I am reluctant to post a chunk of MS framework code here, but you could use either Reflector or this: http://www.codeplex.com/NetMassDownloader to get the source code for the relevant bits of the framework so that you could try implementing various bits to see where it fails. Can you get any other info about the process without getting Access Denied? A: I can enumerate the process (ie, the GetProcessById function works), and we have other code that gets the EXE name and other bits of information. I will give the test app a try. 
I'm also going to attempt to use WMI to get this information if I can't get the C# implementation working properly in short order (this is not critical functionality, so I can't spend days on it).
{ "language": "en", "url": "https://stackoverflow.com/questions/28708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Eclipse 3.2.2 content assist not finding classes in the project In Eclipse 3.2.2 on Linux content assist is not finding classes within the same project. Upgrading above 3.2 is not an option as SWT is not available above 3.2 for Solaris. I have seen suggestions to clean the workspace, reopen the workspace, run eclipse with the -clean command, none of which has worked. A: Go to Java/Editor/Content Assist/Advanced in Preferences, and make sure that the correct proposal kinds are selected. Same kind of thing happened to me when I first moved to 3.4. A: Are you sure that "build automatically" in the Project menu is checked? :-) Another thing: is the Problems view, unfiltered, completely clear of compilation errors and of classpath errors? A: Thanks for your last comment, it worked partially. If there is any kind of error, content assist won't work. Once fixed, it partially works. I say partially because there appears to be a bug: when I do Perl EPIC inheritance, e.g. package FG::CatalogueFichier; use FG::Catalogue; our @ISA = qw(FG::Catalogue); use strict;, the inherited subroutines are not displayed in the content assist. A: I sometimes find I "lose" content assist because the "content assist computers" get disabled. This is in: [Workspace]\.metadata\.plugins\org.eclipse.core.runtime\.settings org.eclipse.jdt.ui.prefs and I just have to remove this property: content_assist_disabled_computers=
{ "language": "en", "url": "https://stackoverflow.com/questions/28709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a simple way to make html textarea and input type text equally wide? Is there a simple way of getting an HTML textarea and an input type="text" to render with (approximately) equal width (in pixels), that works in different browsers? A CSS/HTML solution would be brilliant. I would prefer not to have to use Javascript. Thanks /Erik A: To answer the first question (although it's been answered to death): A CSS width is what you need. But I wanted to answer Gaius's question in the answers. Gaius, your problem is that you are setting the widths in ems. This is a good thing to do, but you need to remember that ems are based on font size. By default an input area and a textarea have different font faces & sizes. So when you set the width to 35em, the input area is using the width of its font and the textarea is using the width of its font. The textarea's default font is smaller, so the textarea ends up narrower. Either set the width in pixels or points, or ensure that input boxes and textareas have the same font face & size: .mywidth { width: 35em; font-family: Verdana; font-size: 1em; } <input type="text" class="mywidth"/><br/> <textarea class="mywidth"></textarea> Hope that helps. A: Someone else mentioned this, then deleted it. 
If you want to style all textareas and text inputs the same way without classes, use the following CSS (does not work in IE6): input[type=text], textarea { width: 80%; } A: This is a CSS question: the width includes the border and padding widths, which have different defaults for INPUT and TEXTAREA in different browsers, so make those the same as well: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>width</title> <style type="text/css"> textarea, input { padding:2px; border:2px inset #ccc; width:20em; } </style> </head> <body> <p><input/><br/><textarea></textarea></p> </body> </html> This is described in the Box dimensions section of the CSS specification, which says: The box width is given by the sum of the left and right margins, border, and padding, and the content width. A: Use CSS3 to make textbox and input work the same. See jsFiddle. .textarea, .textbox { width: 200px; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } A: You should be able to use .mywidth { width: 100px; } <input class="mywidth"> <br> <textarea class="mywidth"></textarea> A: Yes, there is. Try doing something like this: <textarea style="width:80%"> </textarea> <input type="text" style="width:80%" /> Both should equate to the same size. You can do it with absolute sizes (px), relative sizes (em) or percentage sizes. A: Just a note - the width is always influenced by something happening on the client side - PHP can't effect those sorts of things. Setting a common class on the elements and setting a width in CSS is your cleanest bet. A: Use a CSS class for width, then place your elements into HTML DIVs, DIVs having the mentioned class. This way, the layout control overall should be better. A: As mentioned in Unequal Html textbox and dropdown width with XHTML 1.0 strict it also depends on your doctype. I have noticed that when using XHTML 1.0 strict, the difference in width does indeed appear. 
A: input[type="text"] { width: 60%; } input[type="email"] { width: 60%; } textarea { width: 60%; } textarea { height: 40%; } A: These days, if you use bootstrap, simply give your input fields the form-control class. Something like: <label for="display-name">Display name</label> <input type="text" class="form-control" name="displayname" placeholder=""> <label for="about-me">About me</label> <textarea class="form-control" rows="5" id="about-me" name="aboutMe"></textarea> A: you can also use the following CSS: .mywidth{ width:100px; } textarea{ width:100px; } HTML: <input class="mywidth" > <textarea></textarea>
{ "language": "en", "url": "https://stackoverflow.com/questions/28713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Which PHP opcode cacher should I use to improve performance? I'm trying to improve performance under high load and would like to implement opcode caching. Which of the following should I use? * *APC - Installation Guide *eAccelerator - Installation Guide *XCache - Installation Guide I'm also open to any other alternatives that have slipped under my radar. Currently running on a stock Debian Etch with Apache 2 and PHP 5.2 [Update 1] HowtoForge installation links added [Update 2] Based on the answers and feedback given, I have tested all 3 implementations using the following Apache JMeter test plan on my application: * *Login *Access Home Page With 50 concurrent connections, the results are as follows: No Opcode Caching APC eAccelerator XCache Performance Graph (smaller is better) From the above results, eAccelerator has a slight edge in performance compared to APC and XCache. However, what matters most from the above data is that any sort of opcode caching gives a tremendous boost in performance. I have decided to use APC due to the following 2 reasons: * *Package is available in official Debian repository *More functional control panel To summarize my experience: Ease of Installation: APC > eAccelerator > XCache Performance: eAccelerator > APC, XCache Control Panel: APC > XCache > eAccelerator A: I have run several benchmarks with eAcclerator, APC, XCache, and Zend Optimizer (even though Zend is an optimizer, not a cache). Benchmark Results http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png Result: eAccelerator is fastest (in all tests), followed by XCache and APC. (The one in the diagram is the number of seconds to call a WordPress home page 10,000 times). Zend Optimizer made everything slower (!). A: I use APC because it was easy to install in windows and I'm developing on WAMP. 
Integrating APC into PHP6 was discussed here: http://www.php.net/~derick/meeting-notes.html#add-an-opcode-cache-to-the-distribution-apc And there are directions on installing APC on Debian Etch here: http://www.howtoforge.com/apc-php5-apache2-debian-etch A: I can't tell you for sure, but the place where I am working now is looking at APC and eAccelerator. However, this might influence you - APC will be integrated into a future release of PHP (thanks to Ed Haber for the link). A: I've had good success with eAccelerator (the speed improvement with no load is noticeable) but XCache also seems pretty promising. You may want to run some trials with each, though; your application might scale differently on each. A: I think the answer might depend on the type of web applications you are running. I had to make this decision myself two years ago and couldn't decide between Zend Optimizer and eAccelerator. In order to make my decision, I used ab (apache bench) to test the server, and tested the three combinations (zend, eaccelerator, both running) and found that eAccelerator on its own gave the greatest performance. If you have the luxury of time, I would recommend doing similar tests yourself, and making the decision based on your results. A: I've been using XCache for more than a year now with no problems at all. I tried to switch to eAccelerator, but ended up with a bunch of segmentation faults (it's less forgiving of errors). The major benefit to eAccelerator is that it's not just an opcode cache, it's also an optimizer. You should fully test out your application with each one of them to make sure there aren't any problems, and then I'd use apachebench to test it under load. A: These add-ons have historically introduced lots of weird bugs to track down. These bugs can cause inconsistent behaviour which can't be diagnosed easily because it depends on the state of the cache. So I'd say: * *Don't use any of the above. Buy more tin instead, it's a more reliable (i.e. 
error-free) way of increasing performance. OR *Go with whichever of the above is the most robust, having tested the pants off your application. But I'd say: * *Make sure it's REALLY PHP code parsing that is causing your performance problems by profiling your application. I think it's extremely likely that it isn't - in which case you'd be wasting your time (actually, using your time negatively productively) by installing any of them.
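For a quick before/after number without setting up ab or JMeter, a concurrent request loop is easy to roll by hand. This is a rough sketch only: the request function below is a stub, and in real use you would point it at your own URL with urllib (the 200-request/50-worker figures mirror the test above but are otherwise arbitrary).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def benchmark(request_fn, total_requests=200, concurrency=50):
    """Fire `total_requests` calls across `concurrency` workers and report
    wall-clock time: a crude stand-in for ab/JMeter when you just need a
    before/after comparison for an opcode-cache change."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: request_fn(), range(total_requests)))
    elapsed = time.perf_counter() - start
    return elapsed, results

# Stub standing in for e.g. urllib.request.urlopen("http://host/").read()
def fake_request():
    time.sleep(0.001)
    return 200

elapsed, results = benchmark(fake_request)
print(f"{len(results)} requests in {elapsed:.3f}s")
```

Run it once against the uncached server and once per cache, keeping the request mix identical, and compare the elapsed times; it will not replace a proper JMeter plan, but it is enough to see the large gap that any opcode cache opens up.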
{ "language": "en", "url": "https://stackoverflow.com/questions/28716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: Best way to unit test ASP.NET MVC action methods that use BindingHelperExtensions.UpdateFrom? In handling a form post I have something like public ActionResult Insert() { Order order = new Order(); BindingHelperExtensions.UpdateFrom(order, this.Request.Form); this.orderService.Save(order); return this.RedirectToAction("Details", new { id = order.ID }); } I am not using explicit parameters in the method as I anticipate having to adapt to variable number of fields etc. and a method with 20+ parameters is not appealing. I suppose my only option here is mock up the whole HttpRequest, equivalent to what Rob Conery has done. Is this a best practice? Hard to tell with a framework which is so new. I've also seen solutions involving using an ActionFilter so that you can transform the above method signature to something like [SomeFilter] public Insert(Contact contact) A: I'm now using ModelBinder so that my action method can look (basically) like: public ActionResult Insert(Contact contact) { if (this.ViewData.ModelState.IsValid) { this.contactService.SaveContact(contact); return this.RedirectToAction("Details", new { id = contact.ID }); } else { return this.RedirectToAction("Create"); } } A: Wrap it in an interface and mock it. A: Use NameValueDeserializer from http://www.codeplex.com/MVCContrib instead of UpdateFrom.
{ "language": "en", "url": "https://stackoverflow.com/questions/28723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Get `df` to show updated information on FreeBSD I recently ran out of disk space on a drive on a FreeBSD server. I truncated the file that was causing problems but I'm not seeing the change reflected when running df. When I run du -d0 on the partition it shows the correct value. Is there any way to force this information to be updated? What is causing the output here to be different? A: In BSD a directory entry is simply one of many references to the underlying file data (called an inode). When a file is deleted with the rm(1) command only the reference count is decreased. If the reference count is still positive (e.g. the file has other directory entries due to symlinks), then the underlying file data is not removed. New BSD users often don't realize that a program that has a file open is also holding a reference. This prevents the underlying file data from going away while the process is using it. When the process closes the file, if the reference count falls to zero, the file space is marked as available. This scheme is used to avoid the Microsoft Windows type issues where it won't let you delete a file because some unspecified program still has it open. An easy way to observe this is to do the following:
cp /bin/cat /tmp/cat-test
/tmp/cat-test &
rm /tmp/cat-test
Until the background process is terminated the file space used by /tmp/cat-test will remain allocated and unavailable as reported by df(1), but the du(1) command will not be able to account for it as it no longer has a filename. Note that if the system should crash without the process closing the file then the file data will still be present but unreferenced; an fsck(8) run will be needed to recover the filesystem space. Processes holding files open is one reason why the newsyslog(8) command sends signals to syslogd or other logging programs to inform them they should close and re-open their log files after it has rotated them.
Softupdates can also affect filesystem freespace, as the actual inode space recovery can be deferred; the sync(8) command can be used to encourage this to happen sooner. A: This probably centres on how you truncated the file. du and df report different things, as this post on unix.com explains. Just because space is not used does not necessarily mean that it's free... A: Does df --sync work?
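The reference-counting behaviour described above can be sketched in Python on any POSIX system (including FreeBSD): unlinking a file that a process still holds open removes only the directory entry, so the data — and the disk space — survives until the last descriptor is closed. The file names here are made up for illustration:

```python
import os
import tempfile

# Create a file, then open it so the process holds a reference to the inode.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "cat-test")
with open(path, "w") as w:
    w.write("still here")

f = open(path)       # an open descriptor is one more reference to the inode
os.unlink(path)      # removes the directory entry only; refcount stays > 0

gone_from_namespace = not os.path.exists(path)  # no filename points at it...
data = f.read()      # ...but the data is still readable via the descriptor

f.close()            # last reference dropped: now the blocks can be freed
os.rmdir(tmpdir)
```

This is exactly why du (which walks filenames) and df (which reads the filesystem's block counts) can disagree until the process holding the file closes it or exits.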
{ "language": "en", "url": "https://stackoverflow.com/questions/28739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: The best way to get a count of IEnumerable What's the best/easiest way to obtain a count of items within an IEnumerable collection without enumerating over all of the items in the collection? Possible with LINQ or Lambda? A: Use this. IEnumerable list = ...; list.OfType<T>().Count() will return the count. A: In any case, you have to loop through it. LINQ offers the Count method: var result = myenum.Count(); A: The solution depends on why you don't want to enumerate through the collection. If it's because enumerating the collection might be slow, then there is no solution that will be faster. You might want to consider using an ICollection instead if possible. Unless the enumeration is remarkably slow (e.g. it reads items from disk) speed shouldn't be a problem though. If it's because enumerating the collection will require more code then it's already been written for you in the form of the .Count() extension method. Just use MyEnumerable.Count(). If it's because you want to be able to enumerate the collection after you've counted then the .Count() extension method allows for this. You can even call .Count() on a collection you're in the middle of enumerating and it will carry on from where it was before the count. For example: foreach (int item in Series.Generate(5)) { Console.WriteLine(item + "(" + myEnumerable.Count() + ")"); } will give the results 0 (5) 1 (5) 2 (5) 3 (5) 4 (5) If it's because the enumeration has side effects (e.g. writes to disk/console) or is dependent on variables that may change between counting and enumerating (e.g. reads from disk) [N.B. If possible, I would suggest rethinking the architecture as this can cause a lot of problems] then one possibility to consider is reading the enumeration into an intermediate storage. For example: List<int> seriesAsList = Series.Generate(5).ToList(); All of the above assume you can't change the type (i.e. it is returned from a library that you do not own).
If possible you might want to consider changing to use an ICollection or IList (ICollection being more widely scoped than IList) which has a Count property on it. A: You will have to enumerate to get a count. Other constructs like the List keep a running count. A: There's also IList or ICollection, if you want to use a construct that is still somewhat flexible, but also has the feature you require. They both imply IEnumerable. A: It also depends on what you want to achieve by counting. If you are interested in finding out whether the enumerable collection has any elements, you could use myEnumerable.Any() over myEnumerable.Count(), where the former will yield the first element and the latter will yield all the elements. A: An IEnumerable will have to iterate through every item to get the full count. If you just need to check if there are one or more items in an IEnumerable, a more efficient method is to check if there are any. Any() only checks to see that there is a value and does not loop through everything. IEnumerable<string> myStrings = new List<string>() {"one", "two", "three"}; bool hasValues = myStrings.Any(); A: Not possible with LINQ, as calling .Count(...) does enumerate the collection. If you're running into the problem where you can't iterate through a collection twice, try this: List<MyTableItem> myList = dataContext.MyTable.ToList(); int myTableCount = myList.Count; foreach (MyTableItem item in myList) { ... } A: If you need to count and then loop you may be better off with a list. If you're using count to check for members you can use Any() to avoid enumerating the entire collection. A: The best solution, I think, is to do the following: * *using System.Linq.Dynamic; *myEnumerable.AsQueryable().Count() A: When I want to use the Count property, I use IList, which implements the IEnumerable and ICollection interfaces. The IList data structure here is an Array. I stepped through using the VS Debugger and found that the .Count property below returns the Array.Length property.
IList<string> FileServerVideos = Directory.GetFiles(VIDEOSERVERPATH, "*.mp4"); if (FileServerVideos.Count == 0) return;
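The trade-off running through these answers — Count() has to walk the whole sequence, while a materialized list carries its count — can be sketched with Python generators, which are lazily evaluated like an IEnumerable (though unlike most C# enumerables they are strictly single-pass, which makes the cost of counting very visible; the series function below stands in for the hypothetical Series.Generate above):

```python
def series(n):
    """Lazy sequence, like an IEnumerable that yields items on demand."""
    for i in range(n):
        yield i

gen = series(5)
count = sum(1 for _ in gen)   # counting forces a full enumeration...
leftover = list(gen)          # ...and the generator is now exhausted

items = list(series(5))       # materialize once into a list instead
count_again = len(items)      # now the count is O(1)...
total = sum(items)            # ...and the items can still be re-read
```

This mirrors the advice above to switch to ICollection/IList (or call .ToList()) whenever you need both the count and the elements.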
{ "language": "en", "url": "https://stackoverflow.com/questions/28756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Any good Subversion virtual appliance recommendations? I'm looking for a quick-and-dirty solution to this, I have to set up a Subversion server really quickly, like by the end of the day tomorrow. My initial thought was to just download a virtual appliance that we could load onto our host machine. The problem I am having, however, is that all the appliances I have found so far are stuck in svn version 1.4 or lower. Does anybody know of an appliance that has svn 1.5 running? I don't need any of the other bits like issue tracking, WebSVN or any of that stuff. Thanks, Wally EDIT: To answer some of the questions, I would prefer for the host OS to be some flavour of Linux so that I can avoid having to purchase an additional Windows license. A: I would simply go with installing SVN, and using the SVN Daemon, and completely ignoring Apache. There should be no appliance needed. Very simple to install, very easy to configure. Just take a vanilla Windows/Linux box and install the Subversion server. It'll probably take all of half an hour to set up. A: No need to use an appliance, use the free BitNami one click Subversion installer (supports Linux, Windows, Mac...) A: You should consider this, it is really zero friction and it integrates well in various scenarios. Not to mention it is free of charge. http://www.visualsvn.com/server/ Cheers, Dragos A: I second the Visual SVN Server suggestion. It packages SVN with an Apache server (with SSL support!) and a really nice, easy to use control panel. You can be up and running in less than 10 minutes. It even integrates with Active Directory or your local Windows accounts very nicely. http://www.visualsvn.com/server/ A: I'm not quite sure this is what you want, but Tigris.org, which hosts the Svn project, has an MSI installer for Subversion 1.5. It includes bindings for Apache. Perhaps you could also clarify what OS your host machine is running, etc.? A: Just download and install VisualSVN Server.
The installation procedure is very simple, see the Getting Started guide. VisualSVN Server is based on the latest Subversion versions, has great performance and provides unique features such as distributed Subversion repositories and a modern web user interface. Downloading and installing VisualSVN Server does not require registration. You can easily deploy VisualSVN Server in the cloud, e.g. in MS Azure or AWS. A: I would agree with Kibbee. I wanted to jump in with SVN so I installed the daemon and had everything up and running in no time. It took me longer to get all the commands down for adding and committing files than the installation. A: With an appliance, we normally think about something like this: http://www.garghouti.co.uk/vmTrac/ There is not much info about versions and such though. But as others have pointed out, it is dead easy to install svn and the svn daemon on a server that already exists. Svn takes very little resources, and can easily be put on a fileserver. Apache is not needed at all. A: WANdisco have just launched a Subversion Appliance: http://www.wandisco.com/subversion/appliance/ They have a free webinar at: http://www.wandisco.com/webinar/subversion Some of the marketing blurb: Download, Turn on, Use: WANdisco's Subversion MultiSite Software Appliance provides immediate, real and tangible benefits to organizations by minimizing ongoing maintenance and administration costs and dramatically reducing deployment time. A globally distributed IT organization can literally be up and running in minutes. Just download it, turn it on, and it works. Features: Includes Subversion, Apache, a fully supported version of Linux and WANdisco's unique multi-site replication technology that allows distributed developers to collaborate at LAN-speed over a WAN, instead of working in silos.
Combines the zero-latency deployment offered by hosted Software as a Service (SaaS) solutions for Subversion, with all of the flexibility, control and security of traditional behind the firewall implementations. Eliminates environmental dependencies, making installation, deployment and ongoing maintenance a snap. Fully supported Just Enough Operating System (JeOS) based on Linux provides everything needed for deployment under a VM, or on industry standard hardware. Available for any target virtualization environment including VMware ESX, Citrix XenServer, and Windows with Hyper-V. Ongoing maintenance and support are also dramatically simplified with automatic updates for all appliance components, including Subversion, Apache, Linux and WANdisco. There are no more one-off manual upgrades and patches. An entire multi-site implementation can be monitored and administered from a single location. Sites can be brought online, or taken offline for maintenance without disrupting user access. Built-in continuous hot backup and automated recovery features virtually eliminate downtime, making third party disk mirroring solutions unnecessary. Transparent implementation approach doesn't change Subversion's functionality, so no user retraining is required. Supported: VMware (R) Virtual Appliance VMware (R) ESX Server Virtual Appliance Microsoft (R) VHD Virtual Appliance Citrix XenServer (TM) Appliance Virtual Iron Virtual Appliance Installable ISO/CD/DVD (directly on the server) A: Jumpbox.com has completely set up virtual appliances for 'virtually' all kinds of applications including Subversion. A: The Turnkey community appliance looks very good (though I have not tried it yet) http://www.turnkeylinux.org/revision-control
{ "language": "en", "url": "https://stackoverflow.com/questions/28757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Using Visual Studio 2008 Web Deployment projects - getting an error finding aspnet_merge.exe I recently upgraded a VS2005 web deployment project to VS2008 - and now I get the following error when building: The specified task executable location "bin\aspnet_merge.exe" is invalid. Here is the source of the error (from the web deployment targets file): <Target Name="AspNetMerge" Condition="'$(UseMerge)' == 'true'" DependsOnTargets="$(MergeDependsOn)"> <AspNetMerge ExePath="$(FrameworkSDKDir)bin" ApplicationPath="$(TempBuildDir)" KeyFile="$(_FullKeyFile)" DelaySign="$(DelaySign)" Prefix="$(AssemblyPrefixName)" SingleAssemblyName="$(SingleAssemblyName)" Debug="$(DebugSymbols)" Nologo="$(NoLogo)" ContentAssemblyName="$(ContentAssemblyName)" ErrorStack="$(ErrorStack)" RemoveCompiledFiles="$(DeleteAppCodeCompiledFiles)" CopyAttributes="$(CopyAssemblyAttributes)" AssemblyInfo="$(AssemblyInfoDll)" MergeXmlDocs="$(MergeXmlDocs)" ErrorLogFile="$(MergeErrorLogFile)" /> What is the solution to this problem? Note - I also created a web deployment project from scratch in VS2008 and got the same error. A: Apparently aspnet_merge.exe (and all the other SDK tools) are NOT packaged in Visual Studio 2008. Visual Studio 2005 packaged these tools as part of its installation. The place to get this is an installation of the Windows 2008 SDK (latest download). Windows 7/Windows 2008 R2 SDK: here The solution is to install the Windows SDK and make sure you set FrameworkSDKDir as an environment variable before starting the IDE. Batch command to set this variable: SET FrameworkSDKDir="C:\Program Files\Microsoft SDKs\Windows\v6.1" NOTE: You will need to modify to point to where you installed the SDK if not in the default location. Now VS2008 will know where to find aspnet_merge.exe. A: I just ran into this same problem trying to use MSBuild to build my web application on a server. 
I downloaded the "web" version of the SDK because the setup is only 500KB and it prompts you for which components to install and only downloads and installs the ones you choose. I unchecked everything except for ".NET Development Tools". It then downloaded and installed about 250MB worth of stuff, including aspnet_merge.exe and sgen.exe You can download the winsdk_web.exe setup for Win 7 and .NET 3.5 SP1 here.
{ "language": "en", "url": "https://stackoverflow.com/questions/28765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Simple Object to Database Product I've been taking a look at some different products for .NET which propose to speed up development time by providing a way for business objects to map seamlessly to an automatically generated database. I've never had a problem writing a data access layer, but I'm wondering if this type of product will really save the time it claims. I also worry that I will be giving up too much control over the database and make it harder to track down any data level problems. Do these type of products make it better or worse in the already tough case that the database and business object structure must change? For example: Object Relation Mapping from Dev Express In essence, is it worth it? Will I save "THAT" much time, effort, and future bugs? A: I have used SubSonic and EntitySpaces. Once you get the hang of them, I beleive they can save you time, but as complexity of your app and volume of data grow, you may outgrow these tools. You start to lose time trying to figure out if something like a performance issue is related to the ORM or to your code. So, to answer your question, I think it depends. I tend to agree with Eric on this, high volume enterprise apps are not a good place for general purpose ORMs, but in standard fare smaller CRUD type apps, you might see some saved time. A: I've found iBatis from the Apache group to be an excellent solution to this problem. My team is currently using iBatis to map all of our calls from Java to our MySQL backend. It's been a huge benefit as it's easy to manage all of our SQL queries and procedures because they're all located in XML files, not in our code. Separating SQL from your code, no matter what the language, is a great help. Additionally, iBatis allows you to write your own data mappers to map data to and from your objects to the DB. We wanted this flexibility, as opposed to a Hibernate type solution that does everything for you, but also (IMO) limits your ability to perform complex queries. 
There is a .NET version of iBatis as well. A: I've recently set up ActiveRecord from the Castle Project for an app. It was pretty easy to get going. After creating a new app with it, I even used MyGeneration to script out class files for a legacy app that ActiveRecord could use in a pretty short time. It uses NHibernate to interact with the database, but takes away all the XML mapping that comes with NHibernate. The nice thing, though, is that since you already have NHibernate in your project anyway, you can use its full power if you have some special cases. I'd suggest taking a look at it. A: There are lots of choices of ORMs. Linq to Sql, nHibernate. For pure object databases there is db4o. It depends on the application, but for a high volume enterprise application, I would not go this route. You need more control of your data. A: I was discussing this with a friend over the weekend and it seems like the gains you make on ease of storage are lost if you need to be able to query the database outside of the application. My understanding is that these databases work by storing your object data in a de-normalized fashion. This makes it fast to retrieve entire sets of objects, but if you need to select data from a perspective that doesn't match your object model, the odbms might have a hard time getting at the particular data you want.
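To make the iBatis approach above concrete — the SQL living in an XML map file that the application refers to by statement name — a minimal SQL map looks roughly like this. This is the classic iBatis 2.x format sketched from memory, with made-up class and statement names, so treat it as an illustration rather than a verbatim reference:

```xml
<sqlMap namespace="User">
  <!-- The application calls this statement by name ("User.getUser");
       the SQL itself never appears in the Java or C# source. -->
  <select id="getUser" parameterClass="int" resultClass="example.User">
    SELECT id, name, email
    FROM users
    WHERE id = #value#
  </select>
</sqlMap>
```

Editing a query then means touching only this file, which is the separation of SQL from code that the answer praises.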
{ "language": "en", "url": "https://stackoverflow.com/questions/28768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: vim commands in Eclipse I have been doing some java development lately and have started using Eclipse. For the most part, I think it is great, but being a C/C++ guy used to doing all of his editing in vim, I find myself needlessly hitting the Esc key over and over. It would be really nice if I got all the nice features of Eclipse, but still could do basic editing the same way I can in vim. Anyone know of any Eclipse plugins that would help with this? A: Vrapper: an Eclipse plugin which acts as a wrapper for Eclipse text editors to provide a Vim-like input scheme for moving around and editing text. Unlike other plugins which embed Vim in Eclipse, Vrapper imitates the behaviour of Vim while still using whatever editor you have opened in the workbench. The goal is to have the comfort and ease which comes with the different modes, complex commands and count/operator/motion combinations which are the key features behind editing with Vim, while preserving the powerful features of the different Eclipse text editors, like code generation and refactoring... A: Viable has pretty much what you are looking for along with some extra features which none of the other plugins for Eclipse seem to have, like some support for visual block mode, command line history, window splitting, and piping external commands. It is paid ($15.00 CAD) but free to try with all the features. I personally like it better than the other solutions. A: There is this plugin that costs $20+: http://satokar.com/viplugin/ I use it and it works great; you've got basic vi movement commands and a set of others. Here is an open source, free plugin, but I've never been able to get it working (I'm on a Mac): http://sourceforge.net/projects/vimplugin/ You can also go the other way and get Eclipse code completion inside vim: http://eclim.sourceforge.net/ You basically run an instance of Eclipse and you will be working inside vim. They just released a version compatible with Eclipse 3.4.
New plugin I've started using https://marketplace.eclipse.org/content/viable-vim-eclipse
{ "language": "en", "url": "https://stackoverflow.com/questions/28793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: What refactoring tools do you use for Python? I have a bunch of classes I want to rename. Some of them have names that are small and that name is reused in other class names, where I don't want that name changed. Most of this lives in Python code, but we also have some XML code that references class names. Simple search and replace only gets me so far. In my case, I want to rename AdminAction to AdminActionPlug and AdminActionLogger to AdminActionLoggerPlug, so the first one's search-and-replace would also hit the second, wrongly. Does anyone have experience with Python refactoring tools? Bonus points if they can fix class names in the XML documents too. A: In the meantime, I've tried two tools that have some sort of integration with vim. The first is Rope, a Python refactoring library that comes with a Vim (and emacs) plug-in. I tried it for a few renames, and that definitely worked as expected. It allowed me to preview the refactoring as a diff, which is nice. It is a bit text-driven, but that's alright for me, it just takes longer to learn. The second is Bicycle Repair Man, which I guess wins points on name. Also plugs into vim and emacs. Haven't played much with it yet, but I remember trying it a long time ago. Haven't played with both enough yet, or tried more types of refactoring, but I will do some more hacking with them. A: WingIDE 4.0 (WingIDE is my Python IDE of choice) will support a few refactorings, but I just tried out the latest beta, beta6, and... there's still work to be done. Extract Method works nicely, but Rename Symbol does not. Update: The 4.0 release has fixed all of the refactoring tools. They work great now. A: I would take a look at Bowler (https://pybowler.io). It's better suited for use directly from the command-line than rope and encourages scripting (one-off scripts). A: Your IDE can support refactorings! Check it out: Eric, Eclipse, and WingIDE have built-in tools for refactoring (including Rename).
And these are very safe refactorings - if something could go wrong, the IDE won't do the refactoring. Also consider adding a few unit tests to ensure your code did not suffer during the refactorings. A: PyCharm has some refactoring features. PYTHON REFACTORING Rename refactoring allows you to perform global code changes safely and instantly. Local changes within a file are performed in-place. Refactorings work in plain Python and Django projects. Use Introduce Variable/Field/Constant and Inline Local for improving the code structure within a method, Extract Method to break up longer methods, Extract Superclass, Push Up, Pull Down and Move to move the methods and classes. A: I would strongly recommend PyCharm - not just for refactorings. Since the first PyCharm answer was posted here a few years ago the refactoring support in PyCharm has improved significantly. Python Refactorings available in PyCharm (last checked 2016/07/27 in PyCharm 2016.2) * *Change Signature *Convert to Python Package/Module *Copy *Extract Refactorings *Inline *Invert Boolean *Make Top-Level Function *Move Refactorings *Push Members down *Pull Members up *Rename Refactorings *Safe Delete XML refactorings (I checked in the context menu in an XML file): * *Rename *Move *Copy *Extract Subquery as CTE *Inline Javascript refactorings: * *Extract Parameter in JavaScript *Change Signature in JavaScript *Extract Variable in JavaScript A: You can use sed to perform this. The trick is to recall that regular expressions can recognize word boundaries. This works on all platforms provided you get the tools, which on Windows means Cygwin; Mac OS may require installing the dev tools, I'm not sure, and Linux has this out of the box. So grep, xargs, and sed should do the trick, after 12 hours of reading man pages and trial and error ;)
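Applied to the question's own AdminAction example, the word-boundary trick looks roughly like this with GNU grep/sed (BSD sed lacks \b and spells -i differently, so on a Mac the GNU tools would be needed; the throwaway file below stands in for a real source tree):

```shell
# A disposable tree standing in for the real project (hypothetical paths).
mkdir -p /tmp/rename-demo
printf 'class AdminAction: pass\nclass AdminActionLogger(AdminAction): pass\n' \
    > /tmp/rename-demo/code.py

# \b anchors each match at a word boundary, so the bare name AdminAction is
# renamed wherever it stands alone, while the longer name AdminActionLogger
# is untouched by that expression and handled by its own substitution.
grep -rl 'AdminAction' /tmp/rename-demo | xargs sed -i \
    -e 's/\bAdminActionLogger\b/AdminActionLoggerPlug/g' \
    -e 's/\bAdminAction\b/AdminActionPlug/g'

cat /tmp/rename-demo/code.py
```

Because sed does not care about file type, the same pipeline also picks up the XML files that reference the class names.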
{ "language": "en", "url": "https://stackoverflow.com/questions/28796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74" }
Q: PAD (Portable Application Description) files for shareware / freeware I've been told that I should include PAD files with the freeware applications I distribute so hosting sites can list the information correctly and check for updates, etc. Can you give me some info on using PAD files? Here are general questions which come to mind: * *Is it worth the effort? *Do you use PADGen or an online tool like www.padbuilder.com? *Do you digitally sign yours? A: I do use PADGen; it does not take too long to make a PAD file, but what takes time is submitting it... just copy+paste stuff from your marketing material into it. Keep storing all your PAD files on your webserver and new version updates are listed in 1000+ small shareware/software sites automatically. However, download amounts from these sites are usually < 1000/mo. I have not signed mine.
{ "language": "en", "url": "https://stackoverflow.com/questions/28808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to find out which CVS tags cover which files and paths? There is a legacy CVS repository, which contains a large number of directories, sub-directories, and paths. There is also a large number of branches and tags that do not necessarily cover all paths & files - usually a subset. How can I find out which branch / tag covers which files and paths? CVS log already provides the list of tags per file. The task requires me to transpose this into files per tag. I could not find such functionality in the current WinCVS (CVSNT) implementation. Given ample spare cycles I can write a Perl script that would do that, the algorithm is not complex, but it needs to be done. I would imagine there are some people who needed such information and solved this problem. Thus, I think there should be a readily available (open source / free) tool for this. A: To determine what tags apply to a particular file use: cvs log <filename> This will output all the versions of the file and what tags have been applied to the version. To determine what files are included in a single tag, the only thing I can think of is to check out using the tag and see what files come back. The command for that is any of: cvs update -r <tagname> cvs co <modulename> -r <tagname> cvs export <modulename> -r <tagname> A: The method quoted above didn't work for me: cvs -q rdiff -s -D 2000-01-01 -r yourTagName However, after a lot of messing around I've realised that cvs -q rdiff -s -D 2000-01-01 -r yourTagName ModuleName works A: You don't have to do an actual checkout. You can use the -n option to only simulate this: cvs -n co -rTagName ModuleName This will give you the names of all the files tagged TagName in module ModuleName. A: To list tags on a file one can also do: cvs status -v <file> A: The following command gives a list of files that are in that tag "yourTagName". The files are all marked as new, with the revision info from "yourTagName".
This command does a diff between 2000-01-01 and your tag; feel free to use another date that is earlier. A: I don't know of any tool that can help you, but if you are writing your own, I can save you from one headache: Directories in CVS cannot be tagged. Only the files within them have tags (and that is what determines what is checked out when you check out a directory on a specific tag).
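As a starting point for the script the question contemplates, here is a rough Python sketch that transposes cvs log output (tags per file) into files per tag. It assumes the stock rlog format — an "RCS file:" header per file followed by a "symbolic names:" section of indented "TAG: revision" pairs — and the sample text below is fabricated for illustration:

```python
import re
from collections import defaultdict

def files_per_tag(log_text):
    """Map each tag to the sorted list of files it covers."""
    result = defaultdict(set)
    current_file = None
    in_tags = False
    for line in log_text.splitlines():
        m = re.match(r"RCS file: (.*?)(?:,v)?\s*$", line)
        if m:
            current_file = m.group(1)
            in_tags = False
        elif line.startswith("symbolic names:"):
            in_tags = True
        elif in_tags:
            m = re.match(r"\s+([^\s:]+): ([\d.]+)", line)
            if m and current_file:
                result[m.group(1)].add(current_file)
            else:
                in_tags = False  # first non-indented line ends the section
    return {tag: sorted(files) for tag, files in result.items()}

sample = """\
RCS file: /repo/app/main.c,v
symbolic names:
\tRELEASE_1_0: 1.4
\tBRANCH_A: 1.2.0.2
keyword substitution: kv
RCS file: /repo/app/util.c,v
symbolic names:
\tRELEASE_1_0: 1.7
keyword substitution: kv
"""

tags = files_per_tag(sample)
```

Feeding it the real output of cvs log run at the repository root would produce the files-per-tag transposition in one pass.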
{ "language": "en", "url": "https://stackoverflow.com/questions/28817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Windows Mobile - What scripting platforms are available? We have a number of users with Windows Mobile 6 and need to apply minor changes. eg. update a registry setting. One option is push and execute an executable file using our device management software. I'd like this to be a little more friendly for the admins who are familiar with scripting in VBScript/JScript etc. What are the options for scripting on Windows Mobile devices? A: I work on windows mobile full time and have never really come across a good Windows Mobile scripting implementation unfortunately. For some reason MS has never seen the need for it. For example, even though you can actually get a command console on WM, it does not support running batch files, even though all the commands are still there and it would be relatively easy. There is definitely not a VBScript engine I've ever heard of nor JScript. There is PythonCE but the WM specific support is minimal and you don't get access to a lot of WM only things. Also, I've done a lot of work with a company called SOTI which has a product called MobiControl that does incorporate a basic scripting engine. Though most of the commands are specific to their system and actually have to be run from a desktop-side management console. Given all of the times I have tried to find a good scripting engine for WM myself you would think I would've just written one ;) So, sorry, but the basic answer is no, there is not a scripting engine available for VB in the context that you specified. A: The closest thing to a scripting environment on Windows Mobile is the Configuration Service Provider interface. While it is not a scripting language per se, it does allow one to do a lot of the same type of things such as modify registry settings, copy and delete files and directories, install and uninstall applications and much more. Mike Calligaro has a great article on how to write scripts and how to get them from your desktop onto the device in various ways. 
One of them is certain to work for you. A: One option that the devs over at xda-developers seem to enjoy is MortScript. I have never bothered to use it, but I have used many cab installers that distribute MortScript so that they can do various tasks. A: There is also a Visual Basic Runtime to run VBScript. A: Try Lua: Official Lua page: http://www.lua.org/ Lua WinCE / Mobile port: http://www.magma.com.ni/sw/wince/lua/
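The Configuration Service Provider interface mentioned above is driven by wap-provisioning XML documents. A rough sketch of one that sets a registry value — the key path and value name here are made up for illustration — looks like this:

```xml
<wap-provisioningdoc>
  <!-- The Registry CSP: each nested characteristic is a key path,
       and each parm is a value to set under that key. -->
  <characteristic type="Registry">
    <characteristic type="HKLM\Software\ExampleApp">
      <parm name="EnableLogging" value="1" datatype="integer" />
    </characteristic>
  </characteristic>
</wap-provisioningdoc>
```

Such a document is typically processed on-device via the DMProcessConfigXML API or delivered packaged as a .cpf file, along the lines the Calligaro article linked above describes.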
{ "language": "en", "url": "https://stackoverflow.com/questions/28820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: XML => HTML with Hpricot and Rails I have never worked with web services and rails, and obviously this is something I need to learn. I have chosen to use hpricot because it looks great. Anyway, _why's been nice enough to provide the following example on the hpricot website: #!ruby require 'hpricot' require 'open-uri' # load the RedHanded home page doc = Hpricot(open("http://redhanded.hobix.com/index.html")) # change the CSS class on links (doc/"span.entryPermalink").set("class", "newLinks") # remove the sidebar (doc/"#sidebar").remove # print the altered HTML puts doc Which looks simple, elegant, and easy peasey. Works great in Ruby, but my question is: How do I break this up in rails? I experimented with adding this all to a single controller, but couldn't think of the best way to call it in a view. So if you were parsing an XML file from a web API and printing it in nice clean HTML with Hpricot, how would you break up the activity over the models, views, and controllers, and what would you put where? A: Model, model, model, model, model. Skinny controllers, simple views. The RedHandedHomePage model does the parsing on initialization, then call 'def render' in the controller, set output to an instance variable, and print that in a view. A: I'd probably go for a REST approach and have resources that represent the different entities within the XML file being consumed. Do you have a specific example of the XML that you can give?
{ "language": "en", "url": "https://stackoverflow.com/questions/28823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What exactly is Microsoft Expression Studio and how does it integrate with Visual Studio? My university is part of MSDNAA, so I downloaded it a while back, but I just got around to installing it. I guess part of it replaces FrontPage for web editing, and there appears to be a video editor and a vector graphics editor, but I don't think I've even scratched the surface of what it is and what it can do. Could someone enlighten me, especially since I haven't found an "Expression Studio for Dummies" type website. A: The idea is that designers will work in Expression Design (to design vector artwork) and Expression Blend (to build and style XAML interactions, as well as to define timeline based animations and interactions). Developers will work on the application in Visual Studio. Visual Studio includes very basic XAML editing capabilities, so developers would only be making minor edits and would mostly be focusing on the code-behind. That's the theory / product strategy side of it. In reality, if you're performing both roles, you'll end up having your project open in both Expression Blend and Visual Studio, switching back and forth between them depending on whether you're doing "designer tasks" or "developer tasks". Fortunately, Expression Blend and Visual Studio use the same project files. A: Expression Studio is basically a design studio. It consists of a bunch of design software that Microsoft has bought for the most part. The audience is designers, not developers. The gist of the software is that Expression Blend enables designers and programmers to work seamlessly together in letting the designer create the graphical user interface. In a normal workflow, the designer would deliver a mockup which the developer would have to implement. Using Expression Blend in combination with WPF, this is no longer necessary. The graphical UI made by the designer is functional. All the developer has to do is write the code for the function behind the design. 
This in itself is great because developers invariably fail to implement the design as thought out by the designer. Technical limitations, lack of communication … whatever the reason. UIs never look like the mockups done up front. Expression Design is basically a vector drawing program that can be used to design smaller components that are then used within Expression Blend as parts of the UI. For example, graphical buttons could be designed that way. It can also be used as a vanilla drawing program. I did the graphics in my thesis using Expression Design. A: From Wikipedia: Microsoft Expression Studio is a suite of design and media applications from Microsoft aimed at developers and designers. It consists of: * *Microsoft Expression Web (code-named Quartz) - WYSIWYG website designer and HTML editor. *Microsoft Expression Blend (code-named Sparkle) - Visual user interface builder for Windows Presentation Foundation and Silverlight applications. *Microsoft Expression Design (code-named Acrylic) - Raster and vector graphics editor. *Microsoft Expression Media - Digital asset and media manager. *Microsoft Expression Encoder - VC-1 content professional encoder. For web development Expression Web is useful. For XAML development, Blend and Design are useful. A: EDIT: Okay, I type too slow so most of what I had to say was already mentioned, so I'll strip it out except for... The BIG thing to take note of is that the WYSIWYG designer they used in Expression Web made its way into Visual Studio 2008, which is a VERY GOOD thing. There is now EXCELLENT support for CSS, a better editing interface, and you can even go into a split edit mode to see the code and the content while editing. For the longest time I was using Expression Web to do all my initial layout and then loading that into Visual Studio 2005. With Visual Studio 2008, there is no need to do it. A: The Expression site is the first place to start. 
These are tools that bridge the developer/designer gap for building rich internet applications with Silverlight and WPF. They compete with Adobe Studio products. Whilst Visual Studio is good for working with code, it has some weaknesses when it comes to dealing with XAML. In many cases a designer will build something visually different from a Windows application and Expression Blend allows them this freedom. It ties in with Visual Studio for the C#/VB coding and debugging part of development. A: Expression Studio is targeted more at designers. It integrates with Visual Studio in that Expression Studio uses solution and project files, just like Visual Studio. Which makes collaborating with designers easier. The developer and the designer open up the same project. The developer sets up the initial page with all the binding and the designer takes that page and makes it look pretty. A: Please check out XAML .NET development; most of the tutorials make use of many Expression tools.
{ "language": "en", "url": "https://stackoverflow.com/questions/28826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Java and manually executing finalize If I call finalize() on an object from my program code, will the JVM still run the method again when the garbage collector processes this object? This would be an approximate example:
MyObject m = new MyObject();
m.finalize();
m = null;
System.gc();
Would the explicit call to finalize() make the JVM's garbage collector not run the finalize() method on object m? A: One must understand the Garbage Collector (GC) workflow to understand the function of finalize. Calling .finalize() will not invoke the garbage collector, nor will calling System.gc(). Actually, what finalize helps the coder do is declare the reference to the object as "unreferenced". GC forces a suspension of the JVM's running operation, which creates a dent in performance. During operation, GC will traverse all referenced objects, starting from the root object (your main method call). This suspension time can be decreased by declaring objects as unreferenced manually, because it cuts down the cost of the automated run declaring the object reference obsolete. By declaring finalize(), the coder sets the reference to the object obsolete; thus on the next GC run, GC will sweep the objects without using operation time. Quote: "After the finalize method has been invoked for an object, no further action is taken until the Java virtual machine has again determined that there is no longer any means by which this object can be accessed by any thread that has not yet died, including possible actions by other objects or classes which are ready to be finalized, at which point the object may be discarded. 
" from Java API Doc on java.Object.finalize(); For detailed explanation, you can also check: javabook.computerware A: According to this simple test program, the JVM will still make its call to finalize() even if you explicitly called it: private static class Blah { public void finalize() { System.out.println("finalizing!"); } } private static void f() throws Throwable { Blah blah = new Blah(); blah.finalize(); } public static void main(String[] args) throws Throwable { System.out.println("start"); f(); System.gc(); System.out.println("done"); } The output is: start finalizing! finalizing! done Every resource out there says to never call finalize() explicitly, and pretty much never even implement the method because there are no guarantees as to if and when it will be called. You're better off just closing all of your resources manually. A: The finalize method is never invoked more than once by a JVM for any given object. You shouldn't be relying on finalize anyway because there's no guarantee that it will be invoked. If you're calling finalize because you need to execute clean up code then better to put it into a separate method and make it explicit, e.g: public void cleanUp() { . . . } myInstance.cleanUp();
{ "language": "en", "url": "https://stackoverflow.com/questions/28832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: What causes Visual Studio to fail to load an assembly incorrectly? I had been happily coding along on a decent sized solution (just over 13k LOC, 5 projects) which utilizes Linq to Sql for it's data access. All of sudden I performed a normal build and I received a sweet, sweet ambiguous message: Error 1 Build failed due to validation errors in C:\xxx\xxx.dbml. Open the file and resolve the issues in the Error List, then try rebuilding the project. C:\xxx\xxx.dbml I had not touched my data access layer for weeks and no adjustments had been made to the DBML file. I tried plenty of foolhardy tricks like re-creating the layout file, making copies and re-adding the existing files back to the project after restarting Visual Studio (in case of some file-level corruption); all to no avail. I forgot to wear my Visual Studio Skills +5 talismans, so I began searching around and the only answer that I found which made sense was to reset my packages because Visual Studio was not loading an assembly correctly. After running "devenv.exe /resetskippkgs" I was, in fact, able to add the dbml file back to the DAL project and rebuild the solution. I’m glad it’s fixed, but I would rather also gain a deeper understand from this experience. Does anyone know how or why this happens in Visual Studio 2008? New Edit: 10/30/2008 THIS WAS NOT SOMETHING THAT JUST HAPPENED TO ME. Rick Strahl recently wrote on his "web log" about the same experience. He links to another blog with the same issue and used the same action. I have encountered this issue a few times since this original post as well, making me think that this is not some random issue. If anyone finds the definitive answer please post. A: After installing Phalanger for Visual Studio 2008, I attempted to create a new PHP WinForms Application. The project creation failed with a similar error message, stating that a required DLL could not be loaded (Failed to load file or assembly...). 
Running the devenv.exe /resetskippkgs command in the Visual Studio 2008 Command Prompt resolved the issue immediately. Thanks for the info. A: I also get this error when trying to compile the data access layer in the second solution (installer). What I do is that I run Custom Tool on the dbml-file, and this does it. There are really no errors in the dbml file that need to be corrected. My theory in this is that Visual Studio caches the compiled version of the dbml file, and that the cache is invalid for other solutions. I guess running /resetskippkgs does the same thing as recompiling the dbml file. Anyway, is there no fix for this yet? A: TBH, I have had a couple of instances like this where files "seemed to go crazy". However, upon investigation it has appeared that the files have changed in some way, shape or form (e.g. sometimes changes can be made to the file by inadvertently changing a property somewhere that seems unrelated). I think there are too many possible issues that could really cause this, and based on the fact that the problem has been resolved, it seems like an answer will not be found. A: I had the same issue in VS 2010 (build failed due to validation errors in dbml file). I resolved this by viewing the designer view of the dbml file and dragging a table slightly to a different location so that it refreshed the dbml layout etc files. This seemed to do the trick, but was a bit of a weird issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/28839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Scrum - How to get better input from the functional/commercial team We are a small team of 3 developers (2 experienced but new to this particular business sector) developing a functionally complex product. We're using Scrum and have a demo at the end of each sprint. It's clear that the functional team have plenty of ideas but these are not well communicated to the development team and the demo poses more questions than answers. Have you any recommendations for improving the quality of input from the functional people? Further info: I think part of the problem is that there are no specs or User Stories as such. Personally I think they need to be writing down some sort of requirements - what sort of things should they be writing down and to what complexity given it's an agile process? A: Have you tried working with your customer to define / formulate acceptance tests? Using something like Fit to come up with these tests - would result in better specs as well as force the customer to think about what is really required. The icing on the cake is instant-doc-executable specs at the end of this process. That is of course, if your customers are available and open to this approach. Give it a try! If not (and that seems to be the majority - because it is less work) - calendar flash 'em - schedule meetings/telecons every week until they sing like canaries :) +1 to Dana A: Sometimes the easiest way to get input from people is to force it out of them. My company used SCRUM on a project, and found very quickly that people tend to keep to themselves when they already know what they're doing. We ended up organizing weekly meetings where team members were required to display something that was learned during the week. It was forced, but it worked pretty well. A: I'm a big believer in Use Cases, detailing the system behaviour in response to user actions. 
Collectively these can form a loose set of requirements, and in a SCRUM environment can help you prioritise the Use Cases which will form that particular sprint's implemented features. For example, after talking to your functional team you identify 15 separate Use Cases. You prioritise the Use Cases, and decide to plan for 5 sprints. At the end of each sprint you go through and demo the product fulfilling the Use Cases implemented during the sprint, noting the feedback and amending the Use Cases. A: I understand that the people you call functional people are acting as Product Owners, right? I think part of the problem is that there are no specs or User Stories as such. Personally I think they need to be writing down some sort of requirements - what sort of things should they be writing down and to what complexity given its an agile process? Actually, without having any specs you probably have no acceptance tests for the backlog items as well. You should ask the PO to write the user stories, I like the "As a - type of user -, I want -some goal- so that -some reason-." form. Keep in mind that the User Stories shall be INVEST - Independent, Negotiable, Valuable to users or customers, Estimable, Small and Testable. What is a must is to have the Acceptance tests written together with the story so that the team knows what the story must be able to do in order to be set as done. Remember that as the product evolves, it's expected that the PO will have ideas as he sees the working product. It's not a bad thing; actually it is one of the best things you can get through Agile. What you have to pay attention to is that these ideas must be included in the product backlog and prioritized by the PO. And, if it's necessary and will add value to the customer, the idea should be planned to be built in the next sprint. A: Someone from the functional team should be part of the team and available to answer your questions about the features you're adding.
How can you estimate Backlog items if they are not detailed enough? You could establish a rule that Backlog items that do not have clear acceptance criteria cannot be planned. It would be better to have someone from the functional team acting as Product Owner, to determine, choose and prioritize the Backlog items, and/or as Domain Expert. Also, make sure everyone in both the functional team and the development team speaks the same language, so as to avoid misunderstandings; see ubiquitous language. Track the time lost waiting for answers from the functional team as well as the time wasted developing unnecessary features or reworking existing features so that they fit the bill. A: Are they participating in the stand-up meetings? You could propose to have a representative at each (or some) of them, to ask them for input before the end of the sprint. A: Are you doing stand-up meetings and do you have a burn down chart? I think those two areas would benefit you greatly. A: I recommend the book "Practices of an agile developer"; it is full of suggestions on how to make a scrum team successful. It also gives good tips on how to get the product owner/customer more involved and how to get the whole process rolling. It's worth the money IMHO. A: I agree that you need some sort of requirements (user stories or else). One piece of advice I can give is to use some sort of visual aids with the functional teams. When customers have plenty of ideas (as you've said) they usually also have a visual idea of what a feature looks like; when the developed product doesn't fit this visual idea it creates a lot of doubts, even if it does the job functionally. When discussing functionality with customers, I try to be very visual. Drawing sketches on a board, or even verbally describing what something would look like. Trying to find a common visual image. You can then take a photo of the sketches and use them as part of the documentation. 
Another piece of advice is to keep your sprints as short as possible, so that you do more frequent demos. But you may already be doing this, since you didn't mention your current sprint duration.
{ "language": "en", "url": "https://stackoverflow.com/questions/28840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Can SlickEdit automatically update its tag files? I prefer SlickEdit for my IDE but the only way I can get it to update the tag files to incorporate code changes is to recreate the project and/or start a re-tag manually. Is there a way to set up SlickEdit so that it automatically incorporates changes in the code base that happen after project creation? This problem is especially noticeable when working on large shared code bases where I must check out files that have been modified by other users. A: Okay, I asked a question on the SlickEdit forums. http://community.slickedit.com/index.php?topic=3854.0 EDIT: Winnar! Options->Editing->Background Tagging of Other Files
{ "language": "en", "url": "https://stackoverflow.com/questions/28843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Coolest C# LINQ/Lambdas trick you've ever pulled? Saw a post about hidden features in C# but not a lot of people have written linq/lambdas examples so... I wonder... What's the coolest (as in the most elegant) use of C# LINQ and/or lambdas/anonymous delegates you have ever seen/written? Bonus if it has gone into production too! A: http://igoro.com/archive/extended-linq-additional-operators-for-linq-to-objects/ http://igoro.com/archive/7-tricks-to-simplify-your-programs-with-linq/ A: To me, the duality between delegates (Func<T,R>, Action<T>) and expressions (Expression<Func<T,R>>, Expression<Action<T>>) is what gives rise to the most clever uses of lambdas. For example:
public static class PropertyChangedExtensions
{
    public static void Raise(this PropertyChangedEventHandler handler, Expression<Func<object>> propertyExpression)
    {
        if (handler != null)
        {
            // Retrieve lambda body
            var body = propertyExpression.Body as MemberExpression;
            if (body == null)
                throw new ArgumentException("'propertyExpression' should be a member expression");

            // Extract the right part (after "=>")
            var vmExpression = body.Expression as ConstantExpression;
            if (vmExpression == null)
                throw new ArgumentException("'propertyExpression' body should be a constant expression");

            // Create a reference to the calling object to pass it as the sender
            LambdaExpression vmlambda = Expression.Lambda(vmExpression);
            Delegate vmFunc = vmlambda.Compile();
            object vm = vmFunc.DynamicInvoke();

            // Extract the name of the property to raise a change on
            string propertyName = body.Member.Name;
            var e = new PropertyChangedEventArgs(propertyName);
            handler(vm, e);
        }
    }
}
Then you can "safely" implement INotifyPropertyChanged by calling
if (PropertyChanged != null)
    PropertyChanged.Raise( () => MyProperty );
Note: I saw this on the web at first a few weeks ago, then lost the link and a score of variations have cropped up here and there since then so I'm afraid I cannot give proper attribution. 
A: Actually, I'm quite proud of this for generating Excel documents: http://www.aaron-powell.com/linq-to-xml-to-excel A: The LINQ Raytracer certainly tops my list =) I'm not quite sure if it qualifies as elegant but it is most certainly the coolest linq-expression I've ever seen! Oh, and just to be extremely clear; I did not write it (Luke Hoban did) A: Not my design but I've used it a few times, a typed-switch statement: http://community.bartdesmet.net/blogs/bart/archive/2008/03/30/a-functional-c-type-switch.aspx Saved me so many if... else if... else if... else IF! statements A: I did one (a bit crazy, but interesting) thing like that just recently: * *http://tomasp.net/blog/reactive-iv-reactivegame.aspx A: Some basic functionals:
public static class Functionals
{
    // One-argument Y-Combinator.
    public static Func<T, TResult> Y<T, TResult>(Func<Func<T, TResult>, Func<T, TResult>> F)
    {
        return t => F(Y(F))(t);
    }

    // Two-argument Y-Combinator.
    public static Func<T1, T2, TResult> Y<T1, T2, TResult>(Func<Func<T1, T2, TResult>, Func<T1, T2, TResult>> F)
    {
        return (t1, t2) => F(Y(F))(t1, t2);
    }

    // Three-argument Y-Combinator.
    public static Func<T1, T2, T3, TResult> Y<T1, T2, T3, TResult>(Func<Func<T1, T2, T3, TResult>, Func<T1, T2, T3, TResult>> F)
    {
        return (t1, t2, t3) => F(Y(F))(t1, t2, t3);
    }

    // Four-argument Y-Combinator.
    public static Func<T1, T2, T3, T4, TResult> Y<T1, T2, T3, T4, TResult>(Func<Func<T1, T2, T3, T4, TResult>, Func<T1, T2, T3, T4, TResult>> F)
    {
        return (t1, t2, t3, t4) => F(Y(F))(t1, t2, t3, t4);
    }

    // Curry first argument.
    public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(Func<T1, T2, TResult> F)
    {
        return t1 => t2 => F(t1, t2);
    }

    // Curry second argument.
    public static Func<T2, Func<T1, TResult>> Curry2nd<T1, T2, TResult>(Func<T1, T2, TResult> F)
    {
        return t2 => t1 => F(t1, t2);
    }

    // Uncurry first argument.
    public static Func<T1, T2, TResult> Uncurry<T1, T2, TResult>(Func<T1, Func<T2, TResult>> F)
    {
        return (t1, t2) => F(t1)(t2);
    }

    // Uncurry second argument.
    public static Func<T1, T2, TResult> Uncurry2nd<T1, T2, TResult>(Func<T2, Func<T1, TResult>> F)
    {
        return (t1, t2) => F(t2)(t1);
    }
}
They don't do much good if you don't know how to use them. In order to know that, you need to know what they're for: * *What is currying? *What is a y-combinator? A: By far the most impressive LINQ implementation I've ever come across is the Brahma framework. It can be used to offload parallel calculations to the GPU using 'LINQ to GPU'. You write a 'query' in linq, and then Brahma translates it into HLSL (High Level Shader Language) so DirectX can process it on the GPU. This site will only let me paste one link so try this webcast from dotnetrocks: http://www.dotnetrocks.com/default.aspx?showNum=466 Else google for Brahma Project, you'll get the right pages. Extremely cool stuff. GJ A: Progress Reporting for long running LINQ queries. In the blog post you can find an extension method WithProgressReporting() that lets you discover and report the progress of a linq query as it executes. A: I was trying to come up with a cool way to build a navigation control for a website I was building. I wanted to use regular HTML unordered list elements (employing the standard CSS "Sucker Fish" look) with a top-navigation mouse-over effect that reveals the drop down items. I had a SQL-dependent cached DataSet with two tables (NavigationTopLevels & NavigationBottomLevels). Then all I had to do was create two class objects (TopNav & SubNav) with the few required properties (the TopNav class had to have a generic list of bottomnav items -> List<SubNav> SubItems).
var TopNavs = from n in ds.NavigationTopLevels
              select new TopNav
              {
                  NavigateUrl = String.Format("{0}/{1}", tmpURL, n.id),
                  Text = n.Text,
                  id = n.id,
                  SubItems = new List<SubNav>(
                      from si in ds.NavigationBottomLevels
                      where si.parentID == n.id
                      select new SubNav
                      {
                          id = si.id,
                          level = si.NavLevel,
                          NavigateUrl = String.Format("{0}/{1}/{2}", tmpURL, n.id, si.id),
                          parentID = si.parentID,
                          Text = si.Text
                      }
                  )
              };

List<TopNav> TopNavigation = TopNavs.ToList();
It might not be the "coolest" but for a lot of people who want to have dynamic navigation, it's sweet not to have to muddle around in the usual looping logic that comes with that. LINQ is, if anything, a time saver in this case. A: I think that LINQ is a major change to .NET and it is a very powerful tool. I use LINQ to XML in production to parse and filter records from a 6MB XML file (with 20+ node levels) into a dataset in two lines of code. Before LINQ this would have taken hundreds of lines of code and days to debug. That's what I call elegant! A: Perhaps not the coolest, but recently I have been using them anytime I have a block of code that gets C+Pd over and over again only to have a few lines change. For instance, running simple SQL commands to retrieve data can be done like so:
SqlDevice device = GetDevice();
return device.GetMultiple<Post>(
    "GetPosts",
    (s) => {
        s.Parameters.AddWithValue("@CreatedOn", DateTime.Today);
        return true;
    },
    (r, p) => {
        p.Title = r.Get<string>("Title");
        // Fill out post object
        return true;
    }
);
Which could return a list of Posts that were created today. This way I don't have to copy and paste the try-catch-finally block fifteen million times for each command, object, et cetera.
A: Working with attributes:
private void WriteMemberDescriptions(Type type)
{
    var descriptions = from member in type.GetMembers()
                       let attributes = member.GetAttributes<DescriptionAttribute>(true)
                       let attribute = attributes.FirstOrDefault()
                       where attribute != null
                       select new { Member = member.Name, Text = attribute.Description };

    foreach(var description in descriptions)
    {
        Console.WriteLine("{0}: {1}", description.Member, description.Text);
    }
}
The GetAttributes extension method:
public static class AttributeSelection
{
    public static IEnumerable<T> GetAttributes<T>(this ICustomAttributeProvider provider, bool inherit) where T : Attribute
    {
        if(provider == null)
        {
            throw new ArgumentNullException("provider");
        }
        return provider.GetCustomAttributes(typeof(T), inherit).Cast<T>();
    }
}
AttributeSelection is production code and also defines GetAttribute and HasAttribute. I chose to use the let and where clauses in this example. A: OLINQ reactive LINQ queries over INotifyingCollection - these allow you to do (amongst other things) realtime aggregation against large datasets. https://github.com/wasabii/OLinq
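The Y-combinator and curry/uncurry helpers from the Functionals class earlier in this thread translate almost line-for-line into other languages. A rough Python sketch of the same idea (illustrative only, not taken from any of the linked posts):

```python
def Y(F):
    """One-argument Y-combinator: builds a recursive function
    from a function that receives 'itself' as an argument."""
    return lambda t: F(Y(F))(t)

def curry(f):
    """Curry the first argument of a two-argument function."""
    return lambda t1: lambda t2: f(t1, t2)

def uncurry(f):
    """Inverse of curry: flatten a curried function back to two arguments."""
    return lambda t1, t2: f(t1)(t2)

# Recursion without the lambda ever naming itself:
factorial = Y(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(factorial(5))        # -> 120

add = curry(lambda a, b: a + b)
print(add(2)(3))           # -> 5
print(uncurry(add)(2, 3))  # -> 5
```

As in the C# version, the point is that Y lets an anonymous function recurse, and curry/uncurry convert between the two calling styles.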
{ "language": "en", "url": "https://stackoverflow.com/questions/28858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Why does SQL Server work faster when you index a table after filling it? I have a sproc that puts 750K records into a temp table through a query as one of its first actions. If I create indexes on the temp table before filling it, the item takes about twice as long to run compared to when I index after filling the table. (The index is an integer in a single column, the table being indexed is just two columns each a single integer.) This seems a little off to me, but then I don't have the firmest understanding of what goes on under the hood. Does anyone have an answer for this? A: It's because the database server has to do calculations each and every time you insert a new row. Basically, you end up reindexing the table each time. It doesn't seem like a very expensive operation, and it's not, but when you do that many of them together, you start to see the impact. That's why you usually want to index after you've populated your rows, since it will just be a one-time cost. A: If you create a clustered index, it affects the way the data is physically ordered on the disk. It's better to add the index after the fact and let the database engine reorder the rows when it knows how the data is distributed. For example, let's say you needed to build a brick wall with numbered bricks so that those with the highest number are at the bottom of the wall. It would be a difficult task if you were just handed the bricks in random order, one at a time - you wouldn't know which bricks were going to turn out to be the highest numbered, and you'd have to tear the wall down and rebuild it over and over. It would be a lot easier to handle that task if you had all the bricks lined up in front of you, and could organize your work. That's how it is for the database engine - if you let it know about the whole job, it can be much more efficient than if you just feed it a row at a time. 
A: You should NEVER EVER create an index on an empty table if you are going to massively load it right afterwards. Indexes have to be maintained as the data on the table changes, so imagine as if for every insert on the table the index was being recalculated (which is an expensive operation). Load the table first and create the index after finishing with the load. That's where the performance difference is going. A: Think of it this way. Given unorderedList = {5, 1, 3} orderedList = {1, 3, 5} add 2 to both lists. unorderedList = {5, 1, 3, 2} orderedList = {1, 2, 3, 5} Which list do you think is easier to add to? Btw, ordering your input before load will give you a boost. A: After performing large data manipulation operations, you frequently have to update the underlying indexes. You can do that by using the UPDATE STATISTICS [table] statement. The other option is to drop and recreate the index which, if you are doing large data insertions, will likely perform the inserts much faster. You can even incorporate that into your stored procedure. A: This is because if the data you insert is not in the order of the index, SQL will have to split pages to make room for additional rows to keep them together logically. A: This is due to the fact that when SQL Server indexes a table with data it is able to produce exact statistics of the values in the indexed column. At some moments SQL Server will recalculate statistics, but when you perform massive inserts the distribution of values may change after the statistics were calculated last time. The fact that statistics are out of date can be discovered in Query Analyzer, when you see that on a certain table scan the number of rows expected differs too much from the actual number of rows processed. You should use UPDATE STATISTICS to recalculate the distribution of values after you insert all the data. After that no performance difference should be observed.
A: If you have an index on a table, as you add data to the table SQL Server will have to re-order the table to make room in the appropriate place for the new records. If you're adding a lot of data, it will have to reorder it over and over again. By creating an index only after the data is loaded, the re-order only needs to happen once. Of course, if you are importing the records in index order it shouldn't matter so much. A: In addition to the index overhead, running each query as a transaction is a bad idea for the same reason. If you run chunks of inserts (say 100) within 1 explicit transaction, you should also see a performance increase.
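The effect described in these answers is easy to reproduce on any engine that maintains a B-tree index during inserts. A small illustrative benchmark using SQLite (not SQL Server, so this is an analogy — absolute timings and the exact ratio will vary by machine):

```python
import sqlite3
import time

def load(index_first: bool) -> float:
    """Insert 200k two-integer rows; return elapsed seconds for the load."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
    if index_first:
        con.execute("CREATE INDEX ix_a ON t (a)")  # maintained on every insert
    rows = [(i % 1000, i) for i in range(200_000)]
    start = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    if not index_first:
        con.execute("CREATE INDEX ix_a ON t (a)")  # built once, after the load
    con.commit()
    return time.perf_counter() - start

print(f"index before load: {load(True):.3f}s")
print(f"index after load:  {load(False):.3f}s")
```

On most machines the "index after" run is noticeably faster, because the index is built in one pass over the data instead of being maintained row by row.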
{ "language": "en", "url": "https://stackoverflow.com/questions/28877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Determine Parent Node Of DOMElement I'm translating my C# code for YouTube video comments into PHP. In order to properly nest comment replies, I need to re-arrange XML nodes. In PHP I'm using DOMDocument and DOMXPath which closely corresponds to C# XmlDocument. I've gotten pretty far in my translation but now I'm stuck on getting the parent node of a DOMElement. A DOMElement does not have a parent_node() property, only a DOMNode provides that property. After determining that a comment is a reply to a previous comment based on the string "in-reply-to" in a link element, I need to get its parent node in order to nest it beneath the comment it is in reply to:
// Get the parent entry node of this link element
$importnode = $objReplyXML->importNode($link->parent_node(), true);
A: DOMElement is a subclass of DOMNode, so it does have the parentNode property. Just use $domNode->parentNode; to find the parent node. In your example, the parent node of $importnode is null, because it has been imported into the document, and therefore does not have a parent yet. You need to attach it to another element before it has a parent. A: Replace parent_node() with parentNode. A: I'm not entirely sure how your code works, but it seems like you have a small error in your code. On the line you posted in your question you have $link->parent_node(), but in the answer with the entire code snippet you have $links->parent_node() (note the extra s). I don't think the s should be there. Also, I think you should use $link->parentNode, not $link->parent_node().
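The "imported node is parentless until attached" behavior from the accepted answer is not PHP-specific; Python's xml.dom.minidom exposes the same parentNode/importNode semantics and makes it easy to see (element names here are made up for illustration):

```python
from xml.dom.minidom import parseString

src = parseString("<feed><entry><link/></entry></feed>")
dst = parseString("<out/>")

link = src.getElementsByTagName("link")[0]
entry = link.parentNode                 # walk up from the link, as in PHP
imported = dst.importNode(entry, True)  # deep import, like importNode($node, true)

print(imported.parentNode)              # None: imported but not yet attached
dst.documentElement.appendChild(imported)
print(imported.parentNode.tagName)      # now attached under the <out> element
```

Only after appendChild does the imported node get a parent — exactly why $importnode->parentNode is null right after the import in the question.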
{ "language": "en", "url": "https://stackoverflow.com/questions/28878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Why doesn't **sort** sort the same on every machine? Using the same sort command with the same input produces different results on different machines. How do I fix that? A: This can be the result of locale differences:
$ echo 'CO2_
CO_' | env LC_ALL=C sort
CO2_
CO_

$ echo 'CO2_
CO_' | env LC_ALL=en_US sort
CO_
CO2_
Setting the LC_ALL environment variable to the same value should correct the problem. A: This is probably due to different settings of the locale environment variables. sort will use these settings to determine how to compare strings. By setting these environment variables the way you want before calling sort, you should be able to force it to behave in one specific way. A: For more than you ever wanted to know about sort, read the specification of sort in the Single Unix Specification v3. It states Comparisons [...] shall be performed using the collating sequence of the current locale. IOW, how sort sorts is dependent on the locale (language) settings of the environment that the script is running under. A: The man-page on OS X says: ******* WARNING ******* The locale specified by the environment affects sort order. Set LC_ALL=C to get the traditional sort order that uses native byte values. which might explain things. If some of your systems have no locale support, they would default to that locale (C), so you wouldn't have to set it on those. If you have some that support locales and want the same behavior, set LC_ALL=C on those systems. That would be the way to have as many systems as I know do it the same way. If you don't have any locale-less systems, just making sure they share locale would probably be enough. For more canonical information, see The Single UNIX ® Specification, Version 2 description of locale, environment variables, setlocale() and the description of the sort(1) utility.
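The fix in script form (a sketch — printf is used so the two input lines are unambiguous; LC_ALL=C pins the collation to raw byte values so the order is identical on every machine):

```shell
# Pin the collation locale so sort behaves identically everywhere.
printf 'CO2_\nCO_\n' | LC_ALL=C sort
# C collation compares bytes: '2' (0x32) < '_' (0x5F), so CO2_ sorts first.
```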
{ "language": "en", "url": "https://stackoverflow.com/questions/28881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: .NET Compiler -- DEBUG vs. RELEASE For years I have been using the DEBUG compiler constant in VB.NET to write messages to the console. I've also been using System.Diagnostics.Debug.Write in similar fashion. It was always my understanding that when RELEASE was used as the build option, all of these statements were left out by the compiler, freeing your production code of the overhead of debug statements. Recently when working with Silverlight 2 Beta 2, I noticed that Visual Studio actually attached to a RELEASE build that I was running off of a public website and displayed DEBUG statements which I assumed weren't even compiled! Now, my first inclination is to assume that there is something wrong with my environment, but I also want to ask anyone with deep knowledge on System.Diagnostics.Debug and the DEBUG build option in general what I may be misunderstanding here. A: Examine the Debug.Write method. It is marked with the [Conditional("DEBUG")] attribute. The MSDN help for ConditionalAttribute states: Indicates to compilers that a method call or attribute should be ignored unless a specified conditional compilation symbol is defined. Whether the build configuration has a label of release or debug doesn't matter, what matters is whether the DEBUG symbol is defined in it. A: The preferred method is to actually use the conditional attribute to wrap your debug calls, not use the compiler directives. #ifs can get tricky and can lead to weird build problems. An example of using a conditional attribute is as follows (in C#, but works in VB.NET too):
[Conditional("DEBUG")]
private void WriteDebug(string debugString)
{
    // do stuff
}
When you compile without the DEBUG flag set, any call to WriteDebug will be removed as was assumed was happening with Debug.Write().
A: I read the article too, and it led me to believe that when DEBUG was not defined, the ConditionalAttribute declared on System.Debug functions would cause the compiler to leave out this code completely. I assume the same thing to be true for TRACE. That is, the System.Diagnostics.Debug functions must have ConditionalAttributes for DEBUG and for TRACE. I was wrong in that assumption. The separate Trace class has the same functions, and these define ConditionalAttribute dependent on the TRACE constant. From System.Diagnostics.Debug:
<ConditionalAttribute("DEBUG")> _
Public Shared Sub Write ( _
    message As String _
)
From System.Diagnostics.Trace:
<ConditionalAttribute("TRACE")> _
Public Shared Sub WriteLine ( _
    message As String _
)
It seems then that my original assumption was correct, that System.Diagnostics.Debug (or System.Diagnostics.Trace) statements are actually not included in compilation as if they were included in #IF DEBUG (or #IF TRACE) regions. But I've also learned here from you guys, and verified, that the RELEASE build does not in itself take care of this. At least with Silverlight projects, which are still a little flaky, you need to get into the "Advanced Compile Options..." and make sure DEBUG is not defined. We jumped from .NET 1.1/VS2003 to .NET 3.5/VS2008 so I think some of this used to work differently, but perhaps it changed in 2.0/VS2005. A: What I do is encapsulate the call to Debug in my own class and add a precompiler directive:
public void Debug(string s)
{
#if DEBUG
    System.Diagnostics.Debug.Write(s);
#endif
}
A: Using the DEBUG compiler symbol will, like you said, actually omit the code from the assembly. I believe that System.Diagnostics.Debug.Write will always output to an attached debugger, even if you've built in Release mode. Per the MSDN article: Writes information about the debug to the trace listeners in the Listeners collection.
If you don't want any output, you'll need to wrap your call to Debug.Write with the DEBUG constant like Juan said:
#if DEBUG
System.Diagnostics.Debug.Write(...);
#endif
A: To select whether you want the debug information to be compiled or to be removed, enter the "Build" tab in the project's properties window. Choose the right configuration (Active/Release/Debug/All) and make sure you check the "DEBUG Constant" if you want the info, or uncheck it if you don't. Apply changes and rebuild. A: In my experience choosing between Debug and Release in VB.NET makes no difference. You may add custom actions to both configurations, but by default I think they are the same. Using Release will certainly not remove the System.Diagnostics.Debug.Write statements.
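For comparison, the same conditional-diagnostics idea exists in other languages too — a sketch in Python, where the built-in __debug__ constant plays the role of the DEBUG symbol and is switched off with python -O (the function name here is made up for illustration):

```python
def write_debug(message: str) -> None:
    # Only emit diagnostics when the interpreter runs in debug mode;
    # under `python -O`, __debug__ is False and nothing is printed.
    if __debug__:
        print(f"DEBUG: {message}")

write_debug("loading config")
```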
{ "language": "en", "url": "https://stackoverflow.com/questions/28894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Datatypes for physics I'm currently designing a program that will involve some physics (nothing too fancy, a few balls crashing into each other) What's the most exact datatype I can use to represent position (without a feeling of discrete jumps) in c#? Also, what's the smallest amount of time I can get between t and t+1? One tick? EDIT: Clarifying: What is the smallest unit of time in C#? [TimeSpan].Tick? A: In .Net a decimal will be the most precise datatype that you could use for position. I would just write a class for the position:
public class Position
{
    decimal x;
    decimal y;
    decimal z;
}
As for time, your processor can't give you anything smaller than one tick. Sounds like a fun project! Good luck! A: The Decimal data type although precise might not be the optimum choice depending on what you want to do. Generally Direct3D and GPUs use 32-bit floats, and vectors of 3 (total 96 bits) to represent a position in x,y,z. This will usually give more than enough precision unless you need to mix both huge scale (planets) and microscopic level (basketballs) in the same "world". Reasons for not using Decimals could be size (4 x larger), speed (orders of magnitude slower) and no trigonometric functions available (AFAIK). On Windows, the QueryPerformanceCounter API function will give you the highest resolution clock, and QueryPerformanceFrequency the frequency of the counter. I believe the Stopwatch described in other comments wraps this in a .net class. A: Unless you're doing rocket-science, a decimal is WAAAY overkill. And although it might give you more precise positions, it will not necessarily give you more precise (eg) velocities, since it is a fixed-point datatype and therefore is limited to a much smaller range than a float or double. Use floats, but leave the door open to move up to doubles in case precision turns out to be a problem. A: I would use a Vector datatype. Just like in Physics, when you want to model an object's movement, you use vectors.
Use a Vector2 or Vector3 class out of the XNA framework or roll your own Vector3 struct to represent the position. Vector2 is for 2D and Vector3 is 3D. TimeSpan struct or the Stopwatch class will be your best options for calculating change in time. If I had to recommend, I would use Stopwatch. A: I'm not sure I understand your last question, could you please clarify? Edit: I might still not understand, but you can use any type you want (for example, doubles) to represent time (if what you actually want is to represent the discretization of time for your physics problem, in which case the tick is irrelevant). For most physics problems, doubles would be sufficient. The tick is the best precision you can achieve when measuring time with your machine. A: I think you should be able to get away with the Decimal data type with no problem. It has the most precision available. However, the double data type should be just fine. Yes, a tick is the smallest I'm aware of (using the System.Diagnostics.Stopwatch class). A: For a simulation you're probably better off using a decimal/double (same type as position) for a dimensionless time, then converting it from/to something meaningful on input/output. Otherwise you'll be performing a ton of cast operations when you move things around. You'll get arbitrary precision this way, too, because you can choose the timescale to be as large/small as you want. A: Hey Juan, I'd recommend that you use the Vector3 class as suggested by several others since it's easy to use and above all - supports all operations you need (like addition, multiplication, matrix multiply etc...) without the need to implement it yourself. If you have any doubt about how to proceed - inherit it and at a later stage you can always change the inner implementation, or disconnect from vector3. Also, don't use anything less accurate than float - all processors these days run fast enough to be more accurate than integers (unless it's meant for mobile, but even there...)
Using less than float you would lose precision very fast and end up with jumpy rotations and translations, especially if you plan to use more than a single matrix/quaternion multiplication.
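To make the float-versus-decimal trade-off discussed above concrete — a sketch in Python rather than C#, since the representation issue is the same (C#'s decimal is likewise a base-10 type):

```python
from decimal import Decimal

# A binary float cannot represent 0.1 exactly, so repeated addition drifts.
f = sum(0.1 for _ in range(10))
d = sum(Decimal("0.1") for _ in range(10))

print(f == 1.0)           # False - accumulated rounding error
print(d == Decimal("1"))  # True - decimal is exact for this value
```

The drift is tiny per step, which is why floats are usually fine for positions, but it is the kind of error that accumulates in a long-running simulation loop.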
{ "language": "en", "url": "https://stackoverflow.com/questions/28896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: SQLServer Get Results Where Value Is Null I have an SQL server database that I am querying and I only want to get the information when a specific row is null. I used a where statement such as: WHERE database.foobar = NULL and it does not return anything. However, I know that there is at least one result because I created an instance in the database where 'foobar' is equal to null. If I take out the where statement it shows data so I know it is not the rest of the query. Can anyone help me out? A: Correct syntax is WHERE database.foobar IS NULL. See http://msdn.microsoft.com/en-us/library/ms188795.aspx for more info A: Comparison to NULL will be false every time. You want to use IS NULL instead.
x = NULL        -- always false
x <> NULL       -- always false
x IS NULL       -- these do what you want
x IS NOT NULL
A: Read Testing for Null Values, you need IS NULL not = NULL A: Is it an SQL Server database? If so, use IS NULL instead of making the comparison (MSDN).
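The difference is easy to demonstrate outside SQL Server too — a sketch using SQLite through Python, which follows the same three-valued NULL semantics (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (foobar TEXT)")
conn.execute("INSERT INTO t (foobar) VALUES (NULL)")

# '= NULL' evaluates to UNKNOWN for every row, so nothing matches.
eq_null = conn.execute(
    "SELECT COUNT(*) FROM t WHERE foobar = NULL").fetchone()[0]

# 'IS NULL' is the correct test and finds the row.
is_null = conn.execute(
    "SELECT COUNT(*) FROM t WHERE foobar IS NULL").fetchone()[0]

print(eq_null, is_null)  # 0 1
```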
{ "language": "en", "url": "https://stackoverflow.com/questions/28922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best JavaScript compressor What is the best JavaScript compressor available? I'm looking for a tool that: * *is easy to use *has a high compression rate *Produces reliable end results (doesn't mess up the code) A: I use ShrinkSafe from the Dojo project - it is exceptional because it actually uses a JavaScript interpreter (Rhino) to deal with finding symbols in the code and understanding their scope, etc. which helps to ensure that the code will work when it comes out the other end, as opposed to a lot of compression tools which use regex to do the same (which is not as reliable). I actually have an MSBuild task in a Web Deployment Project in my current Visual Studio solution that runs a script which in turn runs all of the solution's JS files through ShrinkSafe before we deploy and it works quite well. EDIT: By the way, "best" is open to debate, since the criteria for "best" will vary depending on the needs of the project. Personally, I think ShrinkSafe is a good balance; for some people that think smallest size == best, it will be insufficient. EDIT: It is worth noting that the YUI compressor also uses Rhino. A: Try JSMin; it has C#, Java, C and other ports and is readily available too. A: YUI Compressor is the way to go. It has a great compression rate, is well tested and is in use among many top sites, and, well, personally recommended by me. I've used it for my projects without a single JavaScript error or hiccup. And it has nice documentation. I've never used its CSS compression capabilities, but they exist as well. CSS compression works just as well. Note: Although Dean Edwards's /packer/ achieves a better compression rate than YUI Compressor, I ran into a few JavaScript errors when using it. A: If you use Packer, just go for the 'shrink variables' option and gzip the resulting code. The base62 option is only for if your server cannot send gzipped files.
Packer with 'shrink vars' achieves better compression than YUI, but can introduce bugs if you've skipped a semicolon somewhere. base62 is basically a poor man's gzip, which is why gzipping base62-ed code gives you bigger files than gzipping shrink-var-ed code. A: JSMin is another one. A: While searching for a silver bullet, I found this question. For Ruby on Rails http://github.com/sstephenson/sprockets A: I recently released UglifyJS, a JavaScript compressor which is written in JavaScript (runs on the NodeJS Node.js platform, but it can be easily modified to run on any JavaScript engine, since it doesn't need any Node.js internals). It's a lot faster than both YUI Compressor and Google Closure, it compresses better than YUI on all scripts I tested it on, and it's safer than Closure (knows to deal with "eval" or "with"). Other than whitespace removal, UglifyJS also does the following: * *changes local variable names (usually to single characters) *joins consecutive var declarations *avoids inserting any unneeded brackets, parens and semicolons *optimizes IFs (removes "else" when it detects that it's not needed, transforms IFs into the &&, || or ?/: operators when possible, etc.). *transforms foo["bar"] into foo.bar where possible *removes quotes from keys in object literals, where possible *resolves simple expressions when this leads to smaller code (1+3*4 ==> 13) PS: Oh, it can "beautify" as well. ;-) A: Revisiting this question a few years later, UglifyJS seems to be the best option as of now. As stated below, it runs on the NodeJS platform, but can be easily modified to run on any JavaScript engine.
--- Old answer below--- Google released Closure Compiler which seems to be generating the smallest files so far as seen here and here. Previous to that the various options were as follows. Basically Packer does a better job at initial compression, but if you are going to gzip the files before sending on the wire (which you should be doing) YUI Compressor gets the smallest final size. The tests were done on jQuery code btw. * *Original jQuery library 62,885 bytes, 19,758 bytes after gzip *jQuery minified with JSMin 36,391 bytes, 11,541 bytes after gzip *jQuery minified with Packer 21,557 bytes, 11,119 bytes after gzip *jQuery minified with the YUI Compressor 31,822 bytes, 10,818 bytes after gzip @daniel james mentions in the comment compressorrater which shows Packer leading the chart in best compression, so I guess ymmv A: Here's the source code of an HttpHandler which does that, maybe it'll help you A: Here is a YUI compressor script (Byuic) that finds all the js and css down a path and compresses /(optionally) obfuscates them. Nice to integrate into a build process. A: bananascript.com used to give me best results. A: KJScompress http://opensource.seznam.cz/KJScompress/index.html Kjscompress/csskompress is a set of two applications (kjscompress a csscompress) to remove non-significant whitespaces and comments from files containing JavaScript and CSS. Both are command-line applications for the GNU/Linux operating system. A: Js Crush is a good compressor to use after you have minified.
{ "language": "en", "url": "https://stackoverflow.com/questions/28932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "174" }
Q: Best architecture for handling file system changes? Here is the scenario: I'm writing an app that will watch for any changes in a specific directory. This directory will be flooded with thousands of files a minute each with an "almost" unique GUID. The file format is this: GUID.dat where GUID == xxxxxxxxxxxxxxxxxxxxxxxxxxxxx (the internal contents aren't relevant, but it's just text data) My app will be a form that has one single text box that shows all the files that are being added and deleted in real time. Every time a new file comes in I have to update the textbox with this file, BUT I must first make sure that this semi-unique GUID is really unique, if it is, update the textbox with this new file. When a file is removed from that directory, make sure it exists, then delete it, update textbox accordingly. The problem is that I've been using the .NET filewatcher and it seems that there is an internal buffer that gets blown up every time the (buffersize + 1)-th file comes in. I also tried to keep an internal List in my app, and just add every single file that comes in, but do the unique-GUID check later, but no dice. A: A couple of things that I have in my head: * *If the guid is not unique, would it not overwrite the file with the same name, or is the check based on a lookup which does some external action (e.g. check the archive)? (i.e. is this a YAGNI moment?) *I've used FileSystemWatcher before with pretty good success, can you give us some ideas as to how your actually doing things? *When you say "no dice" when working with your custom list, what was the problem? And how were you checking for file system changes without FileSystemWatcher?! Sorry no answer as yet, just would like to know more about the problem :) A: I suggest you take a look at the SHChangeNotify API call, which can notify you of all kinds of shell events. To monitor file creation and deletion activity, you may want to pay special attention to the SHCNE_CREATE and SHCNE_DELETE arguments.
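Whichever notification API is used, the uniqueness bookkeeping itself is cheap — a sketch of just the dedup logic in Python (the function names are made up; the actual file-watching mechanism is left out):

```python
seen = set()

def on_created(guid: str) -> bool:
    """Return True if this GUID is genuinely new and should be shown."""
    if guid in seen:
        return False  # duplicate of a file we already displayed
    seen.add(guid)
    return True

def on_deleted(guid: str) -> bool:
    """Return True if the GUID was known and has now been removed."""
    if guid not in seen:
        return False  # never saw this file; nothing to remove
    seen.discard(guid)
    return True
```

A set lookup is O(1) on average, so thousands of files a minute is not a problem for the check itself; the hard part remains not dropping events from the watcher's buffer.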
{ "language": "en", "url": "https://stackoverflow.com/questions/28941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Expanding Virtual Disk Hey everyone, I'm using Virtual PC and working with a virtual hard disk (*.vhd) that is only sized at 8.2 GB. I would like to double the size to something like 16-20GB. I see options for compacting the VHD but nothing to expand it. It's set to "dynamically expand" but I'm installing some software and it doesn't seem to resize itself to a larger space. Thanks much. A: Here's my solution, using VHDResizer and DISKPART on a Windows XP host. * *Download VHDResizer from here *Following these instructions from "Murnic" on this thread didn't work, on entering EXTEND, not sure on the exact wording now, but it was along the lines of can't extend this volume. The easiest way to do this (as long as you have enough hard drive space) is to extend your existing VHD using VHD Expander which gives you two VHD files. The newly extended file will take the name of your existing VHD. You might want to Defragment, Precompact, and Compact your VHD prior to extending your VHD. In Virtual PC 2007 go to Settings. * *Select your OLD VHD as Hard Disk 1 *Select your newly extended VHD as Hard Disk 2 *Boot your VM *Open a Command Prompt *Run diskpart *From DISKPART> - Execute LIST VOLUME - Select your new VHD volume by executing SELECT VOLUME <number>, where <number> is your new VHD volume, most likely 2 - Execute EXTEND - You should see a success message. If not you may have to recreate your extended VHD due to an error in the process. *Quit Diskpart.exe by typing EXIT *Shut Down the VM *Remove both VHD files from the Hard Disks list *Add your newly extended VHD as Hard Disk 1 *Boot your VM. *You will get a Windows Newly Added Hardware message after a short time. *Reboot the VM when prompted *Start using your newly extended VHD!
Here is Microsoft's information on extending volumes using Diskpart.exe: http://support.microsoft.com/kb/325590 So I went back to these instructions from 'AutoSponge' at the start of the same thread, Mount the image
* *C:\Program Files\Microsoft Virtual Server\Vhdmount>vhdmount /m “C:\Documents and Settings\All Users\Documents\Shared Virtual Machines\.vhd”
*Start diskpart and expand the partition
C:\Program Files\Microsoft Virtual Server\Vhdmount>diskpart
DISKPART>list disk
DISKPART>select disk 3   -----check the number in the list
DISKPART>list part
DISKPART>select part 1   -----check the number in the list
DISKPART>extend
DISKPART>list part       -----check the new size
DISKPART>exit
*Dismount and save changes
C:\Program Files\Microsoft Virtual Server\Vhdmount>vhdmount /u /c “C:\Documents and Settings\All Users\Documents\Shared Virtual Machines\.vhd”
You can download Microsoft Virtual Server here. You can do a custom install and only select VHDMount. Some more information on using VHDMount: Using VHDMount with Windows XP - It is not possible to use '/m' (Mount), you can only use '/p' (Plug in). The reason for this is that VHDMount uses VDS (the Virtual Disk Service) to assign a disk letter to the virtual hard disk after it is mounted, but VDS is only included in Windows Server 2003 and later. This is not too big of an issue though, as unlike Windows Server 2003, Windows XP will automatically mount the virtual hard disk when it is plugged in. This means that the only functionality you lose on Windows XP is the ability to specify exactly which drive letter should be used. A: VHD Resizer A: Never worked with Virtual PC but from other virtualization software I know I guess that dynamically expand means that initially the .vhd file will take less space in the HD than the specified and will dynamically grow as you keep installing programs or adding files into the virtual drive UP TO the specified size.
For what you want I guess that you will have to modify the specified size in the virtual hard drive from Virtual PC's setup window. A bit offtopic but give a go to Virtual Box: www.virtualbox.org A: For vmware users, you can download a free edition of vmware converter which not only lets you resize virtual disks but also lets you convert from physical to virtual machines and vice-versa. A: Here is a solution that worked for me: Use "CopyWipe" or a similar software to make a hardcopy to a new vhd, as described here: Eric Cosky A: I found it easier, simpler and safer to just create a second VHD and install my Big Software to that HD. A: First detach the VHD then run those commands to expand your disk:
* *diskpart
*Select vdisk file="Your Path"
*list vdisk
*expand vdisk maximum=new size in MB
*attach vdisk
*list disk
*online Disk
*list volume
*select volume #
*extend
*list Volume
*detach vdisk
*exit
And here's a brief description for what each line does: * *Launch the DiskPart utility. *Select the VHD file. Notice that if the path or the file name has spaces you have to put double quotes around it. *Shows you a list of Vdisks. The * at the left shows the one that is selected. *Changes the size of the vdisk to our new size. *Once the disk is expanded you have to mount it to work on the disk. *Shows the list of mounted disks and vDisks, including ours. *If disk is not showing online you will need to bring it online. *List volumes (partitions). The ### column is the most important since it has the number you need to use to select the volume you will work with. *Select the volume we want to work with. *Extends the currently selected volume to use all contiguous available space on the same disk. *Running again to show the new size. *Dismounts the Vdisk volume so that Hyper-V can load it. *Exit the DiskPart utility. However I give no guarantee this will work for everyone so keep the original around until you complete the process, just in case.
{ "language": "en", "url": "https://stackoverflow.com/questions/28946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why do you not explicitly call finalize() or start the garbage collector? After reading this question, I was reminded of when I was taught Java and told never to call finalize() or run the garbage collector because "it's a big black box that you never need to worry about". Can someone boil the reasoning for this down to a few sentences? I'm sure I could read a technical report from Sun on this matter, but I think a nice, short, simple answer would satisfy my curiosity. A: The short answer: Java garbage collection is a very finely tuned tool. System.gc() is a sledge-hammer. Java's heap is divided into different generations, each of which is collected using a different strategy. If you attach a profiler to a healthy app, you'll see that it very rarely has to run the most expensive kinds of collections because most objects are caught by the faster copying collector in the young generation. Calling System.gc() directly, while technically not guaranteed to do anything, in practice will trigger an expensive, stop-the-world full heap collection. This is almost always the wrong thing to do. You think you're saving resources, but you're actually wasting them for no good reason, forcing Java to recheck all your live objects “just in case”. If you are having problems with GC pauses during critical moments, you're better off configuring the JVM to use the concurrent mark/sweep collector, which was designed specifically to minimise time spent paused, than trying to take a sledgehammer to the problem and just breaking it further. The Sun document you were thinking of is here: Java SE 6 HotSpot™ Virtual Machine Garbage Collection Tuning (Another thing you might not know: implementing a finalize() method on your object makes garbage collection slower. Firstly, it will take two GC runs to collect the object: one to run finalize() and the next to ensure that the object wasn't resurrected during finalization. 
Secondly, objects with finalize() methods have to be treated as special cases by the GC because they have to be collected individually, they can't just be thrown away in bulk.) A: Don't bother with finalizers. Switch to incremental garbage collection. If you want to help the garbage collector, null off references to objects you no longer need. Less path to follow = more explicit garbage. Don't forget that (non-static) inner class instances keep references to their parent class instance. So an inner class thread keeps a lot more baggage than you might expect. In a very related vein, if you're using serialization, and you've serialized temporary objects, you're going to need to clear the serialization caches, by calling ObjectOutputStream.reset() or your process will leak memory and eventually die. Downside is that non-transient objects are going to get re-serialized. Serializing temporary result objects can be a bit more messy than you might think! Consider using soft references. If you don't know what soft references are, have a read of the javadoc for java.lang.ref.SoftReference Steer clear of Phantom references and Weak references unless you really get excitable. Finally, if you really can't tolerate the GC use Realtime Java. No, I'm not joking. The reference implementation is free to download and Peter Dibble's book from SUN is really good reading. A: As far as finalizers go: * *They are virtually useless. They aren't guaranteed to be called in a timely fashion, or indeed, at all (if the GC never runs, neither will any finalizers). This means you generally shouldn't rely on them. *Finalizers are not guaranteed to be idempotent. The garbage collector takes great care to guarantee that it will never call finalize() more than once on the same object. With well-written objects, it won't matter, but with poorly written objects, calling finalize multiple times can cause problems (e.g. double release of a native resource ... crash).
*Every object that has a finalize() method should also provide a close() (or similar) method. This is the function you should be calling. e.g., FileInputStream.close(). There's no reason to be calling finalize() when you have a more appropriate method that is intended to be called by you. A: Assuming finalizers are similar to their .NET namesake then you only really need to call these when you have resources such as file handles that can leak. Most of the time your objects don't have these references so they don't need to be called. It's bad to try to collect the garbage because it's not really your garbage. You have told the VM to allocate some memory when you created objects, and the garbage collector is hiding information about those objects. Internally the GC is performing optimisations on the memory allocations it makes. When you manually try to collect the garbage you have no knowledge about what the GC wants to hold onto and get rid of, you are just forcing its hand. As a result you mess up internal calculations. If you knew more about what the GC was holding internally then you might be able to make more informed decisions, but then you've missed the benefits of GC. A: The real problem with closing OS handles in finalize is that finalizers are executed in no guaranteed order. But if you have handles to the things that block (think e.g. sockets) potentially your code can get into a deadlock situation (not trivial at all). So I'm for explicitly closing handles in a predictable orderly manner. Basically code for dealing with resources should follow the pattern:
SomeStream s = null;
...
try {
    s = openStream();
    ...
    s.io();
    ...
} finally {
    if (s != null) {
        s.close();
        s = null;
    }
}
It gets even more complicated if you write your own classes that work via JNI and open handles. You need to make sure handles are closed (released) and that it will happen only once. Frequently overlooked OS handle in Desktop J2SE is Graphics[2D].
Even BufferedImage.getGraphics() can potentially return you the handle that points into a video driver (actually holding the resource on GPU). If you won't release it yourself and leave it to the garbage collector to do the work - you may find strange OutOfMemory and similar situations when you run out of video card mapped bitmaps but still have plenty of memory. In my experience it happens rather frequently in tight loops working with graphics objects (extracting thumbnails, scaling, sharpening you name it). Basically GC does not take care of the programmer's responsibility of correct resource management. It only takes care of memory and nothing else. The Stream.finalize calling close() IMHO would be better implemented throwing exception new RuntimeError("garbage collecting the stream that is still open"). It will save hours and days of debugging and cleaning code after the sloppy amateurs left the ends loose. Happy coding. Peace. A: The GC does a lot of optimization on when to properly finalize things. So unless you're familiar with how the GC actually works and how it tags generations, manually calling finalize or starting the GC will probably hurt performance rather than help. A: Avoid finalizers. There is no guarantee that they will be called in a timely fashion. It could take quite a long time before the Memory Management system (i.e., the garbage collector) decides to collect an object with a finalizer. Many people use finalizers to do things like close socket connections or delete temporary files. By doing so you make your application behaviour unpredictable and tied to when the JVM is going to GC your object. This can lead to "out of memory" scenarios, not due to the Java Heap being exhausted, but rather due to the system running out of handles for a particular resource. One other thing to keep in mind is that introducing the calls to System.gc() or such hammers may show good results in your environment, but they won't necessarily translate to other systems.
Not everyone runs the same JVM; there are many: Sun, IBM J9, BEA JRockit, Harmony, OpenJDK, etc. These JVMs all conform to the JCK (those that have been officially tested, that is), but they have a lot of freedom when it comes to making things fast. GC is one of those areas that everyone invests in heavily. Using a hammer will often destroy that effort.
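To make the explicit-close pattern above concrete, here is a minimal Java sketch (the class and method names are made up for illustration): the stream is released deterministically in a finally block instead of waiting on finalize(). Note that since Java 7, try-with-resources expresses the same finally-based close more compactly for any AutoCloseable.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ExplicitClose {

    // Reads the stream to the end and returns the byte count.
    // The stream is closed explicitly in the finally block -- we
    // never rely on finalize() to release the underlying handle.
    public static int countBytes(InputStream in) {
        try {
            int n = 0;
            while (in.read() != -1) {
                n++;
            }
            return n;
        } catch (IOException e) {
            throw new RuntimeException(e);
        } finally {
            try {
                in.close(); // deterministic release of the OS handle
            } catch (IOException ignored) {
            }
        }
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        System.out.println(countBytes(in)); // prints 3
    }
}
```

The same method written with try-with-resources would simply declare the stream in the try header and drop the finally block entirely.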
{ "language": "en", "url": "https://stackoverflow.com/questions/28949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Guide to choosing between REST vs SOAP services? Does anyone have links to documentation or guides on making the decision between REST vs. SOAP? I understand both of these but am looking for some references on the key decision points, e.g. security, which may make you lean towards one or the other. A: The first Google hit seems pretty comprehensive. I think the problem here is that there are too many advocates of one or the other; it may be better to google and get more of a handle on the pros/cons yourself, and make your own decision. I know that sounds kinda lame, but ultimately these sorts of design decisions fall to the developer/architect working on the project, and 99% of the time the problem domain will be the deciding factor (or at least it should be), not a guide on the net. A: The Simple Object Access Protocol (SOAP) standard is an XML language defining a message architecture and message formats, used by Web services; it contains a description of the operations. WSDL is an XML-based language for describing Web services and how to access them. SOAP can run over SMTP, HTTP, FTP, etc. It requires middleware support and well-defined mechanisms for describing services, such as WSDL+XSD and WS-Policy. SOAP returns XML-based data and provides standards for security and reliability. Representational State Transfer (RESTful) web services are second-generation Web services. RESTful web services communicate via plain HTTP rather than SOAP, and do not require XML messages or WSDL service-API definitions. For REST no middleware is required; only HTTP support is needed (WADL is the corresponding description standard). REST can return XML, plain text, JSON, HTML, etc. It is easier for many types of clients to consume RESTful web services while enabling the server side to evolve and scale. Clients can choose to consume some or all aspects of the service and mash it up with other web-based services.
REST uses standard HTTP, so it is simpler to create clients and develop APIs. REST permits many different data formats such as XML, plain text, JSON and HTML, whereas SOAP only permits XML. REST has better performance and scalability; REST responses can be cached, while SOAP responses can't. REST has built-in error handling, where SOAP has none. REST is particularly useful for PDAs and other mobile devices, and REST services are easy to integrate with existing websites. SOAP has a set of protocols which provide standards for security and reliability, among other things, and which interoperate with other WS-conforming clients and servers. SOAP Web services (such as JAX-WS) are useful for handling asynchronous processing and invocation. For complex APIs, SOAP will be more useful. A: I think both REST and SOAP can be used to implement similar functionality, but in general SOAP should be used when a particular feature of SOAP is needed, and the advantages of REST make it generally the best option otherwise. However, both REST and SOAP are often termed "Web services," and one is often used in place of the other, but they are totally different approaches. REST is an architectural style for building client-server applications. SOAP is a protocol specification for exchanging data between two endpoints. I very much agree with +Rob Cooper in his post. Yes, there are so many advocates. I have listed the differences between SOAP and REST. A: There is a good flow chart you can use to help you decide between REST and SOAP. Link to flow chart: https://drive.google.com/file/d/0B3zMtAq1Rf-sdVFNdThvNmZWRGc/edit Link to article: https://www.linkedin.com/pulse/20140818062318-7933571-soap-vs-rest-flowchart-to-determine-the-right-web-services-protocol-for-your-needs The other two factors that I use to make this decision are: 1) Will clients of the service require media types other than XML (e.g. JSON)? If yes, then use REST. 2) Is the client of the service always going to be an application/server (i.e. not a RIA or AJAX client)? If no, this leans towards REST, as it is easier to consume REST services from AJAX.
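To make the contrast concrete, here is a small Python sketch of the same hypothetical "get user" operation expressed both ways. The endpoint shapes and payloads are made up for illustration; only response handling is shown (no network calls).

```python
import json
import xml.etree.ElementTree as ET

# REST: the resource is identified by the URL (e.g. GET /users/42),
# and the response is typically a lightweight format such as JSON.
rest_response = '{"id": 42, "name": "Alice"}'
user = json.loads(rest_response)  # a plain dict, trivially consumed

# SOAP: every call is an XML envelope POSTed to a single endpoint,
# with the operations usually described by a WSDL.
soap_response = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUserResponse>
      <Id>42</Id>
      <Name>Alice</Name>
    </GetUserResponse>
  </soap:Body>
</soap:Envelope>"""
ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
body = ET.fromstring(soap_response).find("soap:Body", ns)
name = body.find("GetUserResponse/Name").text

print(user["name"], name)  # prints: Alice Alice
```

The REST response drops straight into native data structures, while the SOAP response needs envelope-aware XML parsing -- which is exactly why clients such as AJAX pages find REST easier to consume.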
{ "language": "en", "url": "https://stackoverflow.com/questions/28950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: CPU utilization by database? Is it possible to get a breakdown of CPU utilization by database? I'm ideally looking for a Task Manager type interface for SQL Server, but instead of looking at the CPU utilization of each PID (like taskmgr) or each SPID (like spwho2k5), I want to view the total CPU utilization of each database. Assume a single SQL instance. I realize that tools could be written to collect this data and report on it, but I'm wondering if there is any tool that lets me see a live view of which databases are contributing most to the sqlservr.exe CPU load. A: Sort of. Check this query out:

SELECT total_worker_time/execution_count AS AvgCPU
     , total_worker_time AS TotalCPU
     , total_elapsed_time/execution_count AS AvgDuration
     , total_elapsed_time AS TotalDuration
     , (total_logical_reads+total_physical_reads)/execution_count AS AvgReads
     , (total_logical_reads+total_physical_reads) AS TotalReads
     , execution_count
     , SUBSTRING(st.TEXT, (qs.statement_start_offset/2)+1
         , ((CASE qs.statement_end_offset
               WHEN -1 THEN DATALENGTH(st.TEXT)
               ELSE qs.statement_end_offset
             END - qs.statement_start_offset)/2) + 1) AS txt
     , query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY 1 DESC

This will get you the queries in the plan cache in order of how much CPU they've used up. You can run this periodically, like in a SQL Agent job, and insert the results into a table to make sure the data persists beyond reboots. When you read the results, you'll probably realize why we can't correlate that data directly back to an individual database. First, a single query can also hide its true database parent by doing tricks like this:

USE msdb
DECLARE @StringToExecute VARCHAR(1000)
SET @StringToExecute = 'SELECT * FROM AdventureWorks.dbo.ErrorLog'
EXEC(@StringToExecute)

The query would be executed in MSDB, but it would poll results from AdventureWorks.
Where should we assign the CPU consumption? It gets worse when you:

* Join between multiple databases
* Run a transaction in multiple databases, and the locking effort spans multiple databases
* Run SQL Agent jobs in MSDB that "work" in MSDB, but back up individual databases

It goes on and on. That's why it makes sense to performance tune at the query level instead of the database level. In SQL Server 2008R2, Microsoft introduced performance management and app management features that will let us package a single database in a distributable and deployable DAC pack, and they're promising features to make it easier to manage performance of individual databases and their applications. It still doesn't do what you're looking for, though. For more of those, check out the T-SQL repository at Toad World's SQL Server wiki (formerly at SQLServerPedia). Updated on 1/29 to include total numbers instead of just averages. A: Here's a query that will show the actual database causing high load. It relies on the query cache, which might get flushed frequently in low-memory scenarios (making the query less useful).
select dbs.name, cacheobjtype, total_cpu_time, total_execution_count
from (
    select top 10
        sum(qs.total_worker_time) as total_cpu_time,
        sum(qs.execution_count) as total_execution_count,
        count(*) as number_of_statements,
        qs.plan_handle
    from sys.dm_exec_query_stats qs
    group by qs.plan_handle
    order by sum(qs.total_worker_time) desc
) a
inner join (
    SELECT plan_handle, pvt.dbid, cacheobjtype
    FROM (
        SELECT plan_handle, epa.attribute, epa.value, cacheobjtype
        FROM sys.dm_exec_cached_plans
        OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
        /* WHERE cacheobjtype = 'Compiled Plan' AND objtype = 'adhoc' */
    ) AS ecpa
    PIVOT (MAX(ecpa.value) FOR ecpa.attribute IN ("dbid", "sql_handle")) AS pvt
) b on a.plan_handle = b.plan_handle
inner join sys.databases dbs on dbid = dbs.database_id

A: SQL Server (starting with 2000) installs performance counters (viewable from Performance Monitor, or Perfmon). One of the counter categories (from a SQL Server 2005 install) is SQLServer:Databases, with one instance for each database. The available counters, however, do not provide a CPU % utilization counter or anything similar, although there are some rate counters that you could use to get a good estimate of CPU. For example, if you have 2 databases, and the rate measured is 20 transactions/sec on database A and 80 transactions/sec on database B, then you would know that A contributes roughly 20% of the total CPU, and B contributes the other 80%. There are some flaws here, as that assumes all the work being done is CPU bound, which of course with databases it's not. But that would be a start, I believe. A: I think the answer to your question is no. The issue is that one activity on a machine can cause load on multiple databases. If I have a process that is reading from a config DB, logging to a logging DB, and moving transactions in and out of various DBs based on type, how do I partition the CPU usage?
You could divide CPU utilization by the transaction load, but that is again a rough metric that may mislead you. How would you divide transaction log shipping from one DB to another, for instance? Is the CPU load in the reading or the writing? You're better off looking at the transaction rate for a machine and the CPU load it causes. You could also profile stored procedures and see if any of them are taking an inordinate amount of time; however, this won't get you the answer you want. A: With all said above in mind: starting with SQL Server 2012 (maybe 2008?), there is a database_id column in sys.dm_exec_sessions. It gives us an easy calculation of CPU for each database for the currently connected sessions. If a session has disconnected, its results are gone.

select session_id, cpu_time, program_name, login_name, database_id
from sys.dm_exec_sessions
where session_id > 50;

select sum(cpu_time)/1000 as cpu_seconds, database_id
from sys.dm_exec_sessions
group by database_id
order by cpu_seconds desc;

A: Take a look at SQL Sentry. It does all you need and more. Regards, Lieven A: Have you looked at SQL Profiler? Take the standard "T-SQL" or "Stored Procedure" template and tweak the fields to group by the database ID (I think you have to use the number; you don't get the database name, but it's easy to find out using exec sp_databases to get the list). Run this for a while and you'll get the total CPU counts / disk IO / wait etc. This can give you the proportion of CPU used by each database. If you monitor the PerfMon counters at the same time (log the data to a SQL database), and do the same for the SQL Profiler trace (log to database), you may be able to correlate the two together. Even so, it should give you enough of a clue as to which DB is worth looking at in more detail. Then, do the same again with just that database ID and look for the most expensive SQL / stored procedures.
A: please check this query: SELECT DB_NAME(st.dbid) AS DatabaseName ,OBJECT_SCHEMA_NAME(st.objectid,dbid) AS SchemaName ,cp.objtype AS ObjectType ,OBJECT_NAME(st.objectid,dbid) AS Objects ,MAX(cp.usecounts)AS Total_Execution_count ,SUM(qs.total_worker_time) AS Total_CPU_Time ,SUM(qs.total_worker_time) / (max(cp.usecounts) * 1.0) AS Avg_CPU_Time FROM sys.dm_exec_cached_plans cp INNER JOIN sys.dm_exec_query_stats qs ON cp.plan_handle = qs.plan_handle CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st WHERE DB_NAME(st.dbid) IS NOT NULL GROUP BY DB_NAME(st.dbid),OBJECT_SCHEMA_NAME(objectid,st.dbid),cp.objtype,OBJECT_NAME(objectid,st.dbid) ORDER BY sum(qs.total_worker_time) desc
{ "language": "en", "url": "https://stackoverflow.com/questions/28952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: What's the best way to use web services in python? I have a medium sized application that runs as a .net web-service which I do not control, and I want to create a loose pythonic API above it to enable easy scripting. I wanted to know what is the best/most practical solution for using web services in Python. Edit: I need to consume a complex SOAP WS and I have no control over it. A: If I have to expose APIs, I prefer doing it as JSON. Python has excellent support for JSON objects (JSON objects are in fact Python dictionaries). A: Jython and IronPython give access to great Java & .NET SOAP libraries. If you need CPython, ZSI has been flaky for me, but it could be possible to use a tool like Robin to wrap a good C++ SOAP library such as gSOAP or Apache Axis C++. A: Most of the Python packages through which a SOAP service can be called work on Python 2.x but have problems on Python 3.x. The best fit for Python 3.x that I've found is suds-jurko.
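As a small illustration of the JSON point above: json.loads() turns a service's JSON response directly into a plain Python dictionary, which is what makes a loose pythonic API wrapper so cheap to build on top of JSON-speaking services. The payload here is made up for illustration.

```python
import json

# A JSON response body, as it might come back from a web service.
response_text = '{"user": "alice", "roles": ["admin", "dev"], "active": true}'

# JSON object -> plain dict; no schema classes or stub generation needed.
data = json.loads(response_text)
print(data["roles"][1])  # prints: dev

# Going the other way (dict -> JSON string) is just as direct,
# e.g. for building a request body.
request_body = json.dumps({"user": "alice", "action": "logout"})
round_tripped = json.loads(request_body)
assert round_tripped["action"] == "logout"
```

Consuming a SOAP service, by contrast, typically means pointing a library such as suds-jurko at the WSDL and calling the generated service methods -- considerably heavier than the dict round-trip shown here.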
{ "language": "en", "url": "https://stackoverflow.com/questions/28961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }