Q: Installing Svn 1.5.x on Debian Etch - Best approach?

How do you install svn 1.5 on Debian Etch? The svn packages available for Etch are 1.4, and I really need the cool new merge-tracking feature in svn 1.5 (according to CollabNet it's as good as ClearCase... an interesting statement in oh so many ways). So, what's the least painful way to go? My options are:

build it from source
create my own Debian package
upgrade to a test version of Lenny
find someone else's svn 1.5 package

Which one have you chosen, or which do you think has the least amount of suffering?

A: Have you considered pinning? Basically, you can upgrade some of your system (i.e. just Subversion and its dependencies) to Lenny, while keeping the rest as Etch.

A: It depends on whether you want to be able to upgrade Subversion in future using Debian's package management tools. Building it from source should be easy enough, and it lets you configure it the way you want, but then each time you want to upgrade, you'll need to build it from source again rather than doing a simple apt-get.
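For the pinning route, here is a minimal sketch of what that setup might look like. The mirror URL, archive names, and priorities are illustrative assumptions - check them against your own sources before use:

# /etc/apt/sources.list -- add a Lenny line alongside the existing Etch entries
deb http://ftp.debian.org/debian lenny main

# /etc/apt/preferences -- prefer Etch by default, allow Lenny on request
Package: *
Pin: release a=etch
Pin-Priority: 900

Package: *
Pin: release a=lenny
Pin-Priority: 200

Then pull just Subversion (and whatever dependencies it drags in) from Lenny:

apt-get update
apt-get -t lenny install subversion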
stackoverflow_0000066728_debian_svn_version_control.txt
Q: construct a complex SQL query (or queries)

As part of a larger web app (using CakePHP), I'm putting together a simple blog system. The relationships are exceedingly simple: each User has a Blog, which has many Entries, which have many Comments. An element I'd like to incorporate is a list of "Popular Entries." Popular Entries have been defined as those with the most Comments in the last month, and ultimately they need to be ordered by the number of recent Comments. Ideally, I'd like the solution to stay within Cake's Model data-retrieval apparatus (Model->find(), etc.), but I'm not sanguine about this. Anyone have a clever/elegant solution? I'm steeling myself for some wild SQL hacking to make this work...

A: Heh, I was just about to come back with essentially the same answer (using Cake's Model::find):

$this->loadModel('Comment');

$this->Comment->find('all', array(
    'fields' => array('COUNT(Comment.id) AS popularCount'),
    'conditions' => array(
        'Comment.created >' => strtotime('-1 month')
    ),
    'group' => 'Comment.blog_post_id',
    'order' => 'popularCount DESC',
    'contain' => array(
        'Entry' => array(
            'fields' => array('Entry.title')
        )
    )
));

It's not perfect, but it works and can be improved on. I made an additional improvement, using the Containable behaviour to extract the Entry data instead of the Comment data.

A: Shouldn't be too bad; you just need a GROUP BY (this is off the top of my head, so forgive syntax errors):

SELECT entry_id, COUNT(id) AS c
FROM comment
WHERE comment.createdate >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH)
GROUP BY entry_id
ORDER BY c DESC

A: If you weren't fussed about the time-sensitive nature of the comments, you could make use of CakePHP's counterCache functionality by adding a "comment_count" field to the entries table, configuring the counterCache key of the Comment belongsTo Entry association with this field, then calling find() on the Entry model.

A: You probably want a WHERE clause to get just the last 30 days' comments:

SELECT entry_id, COUNT(id) AS c
FROM comment
WHERE comment_date + 30 >= sysdate
GROUP BY entry_id
ORDER BY c DESC
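The counterCache suggestion above, sketched out in code. This assumes CakePHP 1.2-era association syntax (verify against your Cake version), and the model and field names are taken from the schema described in the question, so treat it as an outline rather than a drop-in:

// In the Comment model: keep entries.comment_count up to date on save/delete
class Comment extends AppModel {
    var $belongsTo = array(
        'Entry' => array(
            'counterCache' => 'comment_count'
        )
    );
}

// "Popular entries" then becomes a cheap, index-friendly find on Entry:
$popular = $this->Entry->find('all', array(
    'order' => 'Entry.comment_count DESC',
    'limit' => 10
));

The trade-off, as that answer notes, is that the cached count covers all time rather than just the last month.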
stackoverflow_0000052806_cakephp_php.txt
Q: What is the best way to randomize an array's order in PHP without using the shuffle() function?

I was asked this question in a job interview. The interviewer and I disagreed on what the correct answer was. I'm wondering if anyone has any data on this.
Update: I should have mentioned that the use of shuffle() was strictly forbidden... sorry.

A: shuffle($arr);
:)
edit: I should clarify... my definition of best involves not just algorithm efficiency but code readability and maintainability as well. Using standard library functions means maintaining less code and reading much less too. Beyond that, you can get into year-long debates with PhD professors about the best "true random" function, so somebody will always disagree with you on randomization questions.

A: Well, here's the solution I came up with:

function randomize_array_1($array_to_randomize) {
    $new_array = array();
    while (count($array_to_randomize) > 0) {
        $rand_num = rand(0, count($array_to_randomize)-1);
        $extracted = array_splice($array_to_randomize, $rand_num, 1);
        $new_array[] = $extracted[0];
    }
    return $new_array;
}

And here's his solution:

function randomize_array_2($array_to_randomize) {
    usort($array_to_randomize, "rand_sort");
    return $array_to_randomize;
}
function rand_sort($a, $b) {
    return rand(-1, 1);
}

I ran a bunch of trials on both methods (trying each 1,000,000 times) and the speed difference was negligible. However, upon checking the actual randomness of the results, I was surprised at how different the distributions were. Here are my results:

randomize_array_1:
  [2, 3, 1] => 166855
  [2, 1, 3] => 166692
  [1, 2, 3] => 166690
  [3, 1, 2] => 166396
  [3, 2, 1] => 166629
  [1, 3, 2] => 166738

randomize_array_2:
  [1, 3, 2] => 147781
  [3, 1, 2] => 73972
  [3, 2, 1] => 445004
  [1, 2, 3] => 259406
  [2, 3, 1] => 49222
  [2, 1, 3] => 24615

As you can see, the first method provides an almost perfect distribution, indicating that it's more-or-less truly random, whereas the second method is all over the place.

A: You could use the Fisher-Yates shuffle.

A: He's probably testing you on a relatively common mistake most people make when implementing a shuffling algorithm (this was also actually at the center of a controversy involving an online poker site a few years back).
Incorrect way to shuffle:

for (i is 1 to n)
    Swap i with random position between 1 and n

Correct way to shuffle:

for (i is 1 to n)
    Swap i with random position between i and n

Graph out the probability distribution for these cases and it's easy to see why the first solution is incorrect.

A: The "correct" way is pretty vague. The best (fastest / easiest / most elegant) way to randomize an array would be to just use the built-in shuffle() function.

A: PHP has a built-in function --> shuffle(). I would say that should do what you like, but it most likely will be anything but totally 'random'.
Check http://computer.howstuffworks.com/question697.htm for a little description of why it's very, very difficult to get complete randomness from a computer.

A: Short answer: PHP's array_rand() function.
Given that the use of the shuffle function is forbidden, I would use $keys = array_rand($myArray, count($myArray)) to return an array of the keys from $myArray in random order. From there it should be simple to reassemble them into a new array that has been randomized. Something like:

$keys = array_rand($myArray, count($myArray));
$newArray = array();

foreach ($keys as $key) {
    $newArray[$key] = $myArray[$key];
}
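Two answers point at the Fisher-Yates shuffle without showing it, so here is a minimal PHP sketch of the "correct way to shuffle" described above. It runs the loop backwards, which is the usual formulation and equivalent to swapping position i with a random position between i and n:

function fisher_yates_shuffle($array) {
    // Each of the n! orderings comes out equally likely, unlike the
    // usort(rand()) approach whose skewed counts are measured above.
    for ($i = count($array) - 1; $i > 0; $i--) {
        $j = rand(0, $i);          // random index in [0, i] -- inclusive
        $tmp = $array[$i];         // plain swap
        $array[$i] = $array[$j];
        $array[$j] = $tmp;
    }
    return $array;
}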
stackoverflow_0000065970_arrays_php_random.txt
Q: Java EE SqlResultSetMapping Syntax

I have the following Java 6 code:

Query q = em.createNativeQuery(
    "select T.* " +
    "from Trip T join Itinerary I on (T.itinerary_id=I.id) " +
    "where I.launchDate between :start and :end " +
    "or ADDDATE(I.launchDate, I.equipmentPullDayOfTrip) between :start and :end",
    "TripResults" );

q.setParameter( "start", range.getStart(), TemporalType.DATE );
q.setParameter( "end", range.getEnd(), TemporalType.DATE );

@SqlResultSetMapping( name="TripResults",
    entities={
        @EntityResult( entityClass=TripEntity.class ),
        @EntityResult( entityClass=CommercialTripEntity.class )
    }
)

I receive a syntax error on the last closing right parenthesis. Eclipse gives: "Insert EnumBody to complete block statement" and "Insert enum Identifier to complete EnumHeaderName". Similar syntax error from javac. What am I doing wrong?

A: The Hibernate annotations docs (http://www.hibernate.org/hib_docs/annotations/reference/en/html_single/) suggest that this should be a class-level annotation rather than occurring inline within your code. And indeed, when I paste that code into my IDE and move it around, the compile errors are present when the annotation is inline, but vanish when I put it above the class declaration:

@SqlResultSetMapping( name="TripResults",
    entities={
        @EntityResult( entityClass=TripEntity.class ),
        @EntityResult( entityClass=CommercialTripEntity.class )
    }
)
public class Foo {
    public void bogus() {
        Query q = em.createNativeQuery(
            "select T.* " +
            "from Trip T join Itinerary I on (T.itinerary_id=I.id) " +
            "where I.launchDate between :start and :end " +
            "or ADDDATE(I.launchDate, I.equipmentPullDayOfTrip) between :start and :end",
            "TripResults" );

        q.setParameter( "start", range.getStart(), TemporalType.DATE );
        q.setParameter( "end", range.getEnd(), TemporalType.DATE );
    }
}

...obviously I have no evidence that the above code will actually work. I have only verified that it doesn't cause compile errors.

A: Your example comes straight out of the API docs, which are unfortunately poorly presented.
Your annotation should be placed on some class, probably the one in which you will be creating the query that uses the result set mapping. However, it actually doesn't matter where this annotation is placed: your JPA provider maintains a global list of all these mappings, so no matter where you define it, you will be able to use it anywhere.
This is a shortcoming of the annotation method (as opposed to specifying things in XML). Many other things in the JPA (e.g. named queries) are defined this same way. It makes it seem like there's some kind of connection between the thing being defined and the class on which it is annotated, when there isn't.
stackoverflow_0000066528_jakarta_ee_java_jpa_syntax.txt
Q: Writing C# client to consume a Java web service that returns array of objects

I am writing a C# client that calls a web service written in Java (by another person). I have added a web reference to my client and I'm able to call methods in the web service OK. The service was changed to return an array of objects, and the client does not properly parse the returned SOAP message.

MyResponse[] MyFunc(string p)

class MyResponse {
    long id;
    string reason;
}

When my generated C# proxy calls the web service (using SoapHttpClientProtocol.Invoke), I am expecting a MyResponse[] array with a length of 1, i.e. a single element. What I am getting after the Invoke call is an element with id=0 and reason=null, regardless of what the service actually returns. Using a packet sniffer, I can see that the service is returning what appears to be a legitimate SOAP message with id and reason set to non-null values. Is there some trick to getting a C# client to call a Java web service that returns someobject[]? I will work on getting a sanitized demo if necessary.
Edit: This is a web reference via "Add Web Reference...". VS 2005, .NET 3.0.

A: Thanks to Xian, I have a solution.
The WSDL for the service included a line

<import namespace="http://mynamespace.company.com"/>

The SOAP that the client sent to the server had the following attribute on all data elements:

xmlns="http://mynamespace.company.com"

But the XML payload of the response (from the service back to the client) did not have this namespace included. By tinkering with the HTTP response (which I obtained with Wireshark), I observed that the .NET proxy class correctly picked up the MyResponse values if I forced the xmlns attribute on every returned data element.
Short of changing the service, which I don't control, the workaround is to edit the VS-generated proxy class (e.g. Reference.cs) and look for lines like this:

[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://mynamespace.company.com")]
public partial class MyResponse {

and comment out the XmlType attribute line. This will tell the CLR to look for response elements in the default namespace rather than the one specified in the WSDL. You have to redo this whenever you update the reference, but at least it works.

A: It has been a while, but I seem to remember having trouble with the slight differences in how default namespaces were handled between .NET and Java web services.
Double-check the generated C# proxy class and any namespaces declared within (especially the defaults, xmlns=""), against what the Java service is expecting. There will probably be very subtle differences which you will have to recreate.
If this is the case then you will need to provide more namespace declarations in the C# attributes.

A: From your question, it looks like you had the client working at one point, and then the service was changed to return an array. Make sure you re-generate the proxy so the returned SOAP message is deserialized on the client. It wasn't clear you had done this - just making sure.
stackoverflow_0000064833_c#_java_web_services.txt
Q: Bluetooth Signal Strength

Does anyone have any idea how to track the signal strength of a Bluetooth connection, preferably in C#? I was thinking of using a WMI query but couldn't track down the WMI class encapsulating the connection. The idea is that when I leave my machine with my cellphone in my pocket, the Bluetooth signal weakens, my machine locks, and I don't get goated.

A: The Link Manager Protocol (LMP) running in a Bluetooth device looks after the link setup and configuration. This is all done by two devices exchanging Protocol Data Units (PDUs). The hardware and software functionality of the RSSI is provided at the LMP level, which permits you to manage the RSSI data: it allows you to read the RSSI level and control the TX RF output power (the LMP power commands), and to get at status information.
So what you are actually looking for is defined in the LMP when using the MS Bluetooth stack.
The MS Bluetooth stack HCI interface already supports the functions below, i.e.

HCI_READHCIPARAMETERS
HCI_STARTHARDWARE
HCI_STOPHARDWARE
HCI_SETCALLBACK
HCI_OPENCONNECTION
HCI_READPACKET
HCI_WRITEPACKET
HCI_CLOSECONNECTION

I suppose Microsoft could have implemented a function called HCI_Read_RSSI, but they didn't.
To obtain the RSSI data you will have to use the LMP to get the info you need.
Example pseudocode to read RSSI data:

// Read HCI Parameters

#include <windows.h>
#include <windev.h>
#include <bt_buffer.h>
#include <bt_hcip.h>
#include <bt_os.h>
#include <bt_debug.h>
#include <svsutil.hxx>
#include <bt_tdbg.h>

unsigned short hci_subversion, lmp_subversion, manufacturer;
unsigned char hci_version, lmp_version, lmp_features[8];

if (BthReadLocalVersion (&hci_version, &hci_subversion, &lmp_version, &lmp_subversion, &manufacturer, lmp_features) != ERROR_SUCCESS) {
    SetUnloadedState ();
    return 0;
}

WCHAR szLine[MAX_PATH];
unsigned char *pf = lmp_features;

if ((*pf) & 0x02) {
    wsprintf (szLine, L" RSSI");
}

This will ONLY work with the Microsoft Bluetooth stack. This is C++ code also. I got this from the Experts Exchange post (I know) at the bottom of the page.
http://www.experts-exchange.com/Programming/Wireless_Programming/Bluetooth/Q_21267430.html
There is no specific function that does it for you.
Also, there is this library that may help you; I haven't looked through the documentation completely but I've heard good things about it.
http://inthehand.com/content/32feet.aspx
Goodluck man!
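A rough C# sketch of the proximity-lock idea using the 32feet.NET library linked above. Note this checks for the phone's presence rather than reading RSSI (which, as the answer explains, the MS stack doesn't expose directly), and the device name and poll interval are made up for illustration:

using System;
using System.Runtime.InteropServices;
using System.Threading;
using InTheHand.Net.Sockets; // 32feet.NET

class ProximityLock
{
    [DllImport("user32.dll")]
    static extern bool LockWorkStation();

    static void Main()
    {
        BluetoothClient client = new BluetoothClient();
        while (true)
        {
            // Device inquiry is slow (several seconds), so poll infrequently.
            bool phoneNearby = false;
            foreach (BluetoothDeviceInfo device in client.DiscoverDevices())
                if (device.DeviceName == "MyPhone") // hypothetical device name
                    phoneNearby = true;

            if (!phoneNearby)
                LockWorkStation();

            Thread.Sleep(30000);
        }
    }
}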
stackoverflow_0000066421_bluetooth_c#_wmi.txt
Q: Trying to understand web services performance

I bought an ASP.NET script about a year ago to retrieve FedEx shipping values. It builds an XML string that it passes to the FedEx server using an HttpWebRequest, then parses the raw XML. The average response time for the script is about 900 milliseconds. So the other day I was poking around in the FedEx developer center and discovered that they provide some C# code samples for using their web service. I built a little project using their code and WSDL file, and was surprised to find that the average response time is about 2.5 seconds. Can someone help me understand the difference in speed? And is there a way to make it faster? I have zero experience using web services. Thanks.

A: Web services have some overhead with respect to XML-over-HTTP calls because of validation and what is called the "SOAP envelope", which adds some extra verbosity.
That said, I don't think the bigger response time is due to that. Did you try running the XML-over-HTTP version today to have a reasonable comparison point? Maybe the site is just busy.
One other explanation could be bad coding. You never know.

A: Well, the ASP.NET script could be doing something different than the C# code. Try capturing each raw HTTP request and playing them back. Do they perform the same? If so, then it is likely differences in the client code. My guess is that one is a straight HTTP get/post request, the other SOAP over HTTP(S).
Other things to look at:
1) Are you hitting production for ASP.NET and a test system for C#, or are they both production?
2) Assuming both are over HTTPS.
A SOAP-based web service is typically a little more "heavyweight" - especially if your request ends up doing WS-*, signing, etc. Do you have to sign your C# request by providing keys/X.509 or other credentials?
There are many ways this discussion could go depending on answers to a few of the basics above.
stackoverflow_0000066777_asp.net_performance_web_services.txt
Q: How do I find and decouple entities from a certificate when upgrading MS-SQLServer editions?

While in the final throes of upgrading MS-SQL Server 2005 Express Edition to MS-SQL Server 2005 Enterprise Edition, I came across this error:

The certificate cannot be dropped because one or more entities are either signed or encrypted using it. To continue, correct the problem...

So, how do I find and decouple the entities signed/encrypted using this certificate so I can delete the certificate and proceed with the upgrade? I'm also kind of expecting/assuming that the upgrade setup will provide a new certificate and re-couple those former entities with it, or I'll have to forcibly do so after the setup.

A: The Microsoft forum has the following code snippet to delete the certificates:

use msdb
BEGIN TRANSACTION

declare @sp sysname
declare @exec_str nvarchar(1024)

declare ms_crs_sps cursor global for
    select object_name(crypts.major_id)
    from sys.crypt_properties crypts, sys.certificates certs
    where crypts.thumbprint = certs.thumbprint
      and crypts.class = 1
      and certs.name = '##MS_AgentSigningCertificate##'

open ms_crs_sps
fetch next from ms_crs_sps into @sp
while @@fetch_status = 0
begin
    if exists(select * from sys.objects where name = @sp)
    begin
        print 'Dropping signature from: ' + @sp
        set @exec_str = N'drop signature from ' + quotename(@sp)
            + N' by certificate [##MS_AgentSigningCertificate##]'
        Execute(@exec_str)
        if (@@error <> 0)
        begin
            declare @err_str nvarchar(1024)
            set @err_str = 'Cannot drop signature from ' + quotename(@sp) + '. Terminating.'
            close ms_crs_sps
            deallocate ms_crs_sps
            ROLLBACK TRANSACTION
            RAISERROR(@err_str, 20, 127) WITH LOG
            return
        end
    end
    fetch next from ms_crs_sps into @sp
end
close ms_crs_sps
deallocate ms_crs_sps
COMMIT TRANSACTION
go

http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3876484&SiteID=17
I have not tried the script, so please back up your data and system before attempting, and update here with results.
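For the "find" half of the question, here is a read-only query adapted from the cursor in the script above. It lists which objects are signed by a given certificate without dropping anything; the certificate name shown is the agent-signing one from the script, so substitute the name from your error message:

-- List objects signed by a certificate (run in the relevant database, e.g. msdb)
SELECT OBJECT_NAME(cp.major_id) AS signed_object,
       c.name                   AS certificate_name
FROM sys.crypt_properties cp
JOIN sys.certificates c ON cp.thumbprint = c.thumbprint
WHERE cp.class = 1   -- signatures on objects
  AND c.name = '##MS_AgentSigningCertificate##';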
stackoverflow_0000052460_sql_server.txt
Q: Nice Python wrapper for Yahoo's Geoplanet web service?

Has anybody created a nice wrapper around Yahoo's geo web service "GeoPlanet" yet?

A: After a brief amount of Googling, I found nothing that looks like a wrapper for this API, but I'm not quite sure if a wrapper is what is necessary for GeoPlanet.
According to Yahoo's documentation for GeoPlanet, requests are made in the form of HTTP GET messages, which can very easily be made using Python's httplib module, and responses can take one of several forms including XML and JSON. Python can very easily parse these formats. In fact, Yahoo! itself even offers libraries for parsing both XML and JSON with Python.
I know it sounds like a lot of libraries, but all the hard work has already been done for the programmer. It would just take a little "gluing together" and you would have yourself a nice interface to Yahoo! GeoPlanet using the power of Python.
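In that glued-together spirit, a small Python 2-era sketch (urllib2 instead of raw httplib, for brevity). The endpoint URL and response structure follow Yahoo's GeoPlanet documentation of the time as best I recall them, and the app ID is a placeholder - treat all three as assumptions to verify against the current docs:

import json
import urllib
import urllib2

APP_ID = "YOUR_APP_ID"  # placeholder -- register with Yahoo! to get one

def geoplanet_places(query):
    """Look up places matching `query`, returning the parsed JSON response."""
    url = ("http://where.yahooapis.com/v1/places.q('%s')?format=json&appid=%s"
           % (urllib.quote(query), APP_ID))
    return json.load(urllib2.urlopen(url))

# Example: print the WOEID of the first match (response shape assumed)
result = geoplanet_places("Springfield, MO")
print result["places"]["place"][0]["woeid"]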
stackoverflow_0000064185_gis_python_yahoo.txt
Q: Decorating a parent class method

I would like to make a child class that has a method of the parent class where the method is a 'classmethod' in the child class but not in the parent class. Essentially, I am trying to accomplish the following:

class foo(object):
    def meth1(self, val):
        self.value = val

class bar(foo):
    meth1 = classmethod(foo.meth1)

A: I'm also not entirely sure what the exact behaviour you want is, but assuming it's that you want bar.meth1(42) to be equivalent to foo.meth1 being a classmethod of bar (with "self" being the class), then you can achieve this with:

def convert_to_classmethod(method):
    return classmethod(method.im_func)

class bar(foo):
    meth1 = convert_to_classmethod(foo.meth1)

The problem with classmethod(foo.meth1) is that foo.meth1 has already been converted to a method, with a special meaning for the first parameter. You need to undo this and look at the underlying function object, reinterpreting what "self" means.
I'd also caution that this is a pretty odd thing to do, and thus liable to cause confusion to anyone reading your code. You are probably better off thinking through a different solution to your problem.

A: What are you trying to accomplish? If I saw such a construct in live Python code, I would consider beating the original programmer.

A: The question, as posed, seems quite odd to me: I can't see why anyone would want to do that. It is possible that you are misunderstanding just what a "classmethod" is in Python (it's a bit different from, say, a static method in Java).
A normal method is more-or-less just a function which takes as its first argument (usually called "self") an instance of the class, and which is invoked as instance.method(arguments).
A classmethod is more-or-less just a function which takes as its first argument (often called "cls") a class, and which can be invoked as instance.method(arguments) OR as Class.method(arguments).
With this in mind, and your code shown above, what would you expect to have happen if someone creates an instance of bar and calls meth1 on it?

bar1 = bar()
bar1.meth1("xyz")

When the code to meth1 is called, it is passed two arguments, 'self' and 'val'. I guess that you expect "xyz" to be passed for 'val', but what are you thinking gets passed for 'self'? Should it be the bar1 instance (in this case, no override was needed)? Or should it be the class bar (what then would this code DO)?
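A note on how the im_func trick above ages: it is Python 2-specific. The sketch below is an assumption based on Python 3's removal of unbound methods, not something from the original thread:

# Python 3: there are no unbound methods, so foo.meth1 is already a plain
# function and can be handed to classmethod() directly.
class bar(foo):
    meth1 = classmethod(foo.meth1)

# (On bound methods, Python 3 spells .im_func as .__func__.)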
stackoverflow_0000066636_inheritance_oop_python.txt
Q: How to test credit card interactions?

After reading this answer, I wonder if there's a way to get a "testing" credit card number. One that you can experiment with but that doesn't actually charge anything.

A: MasterCard: 5431111111111111
Amex: 341111111111111
Discover: 6011601160116611
American Express (15 digits) 378282246310005
American Express (15 digits) 371449635398431
American Express Corporate (15 digits) 378734493671000
Diners Club (14 digits) 30569309025904
Diners Club (14 digits) 38520000023237
Discover (16 digits) 6011111111111117
Discover (16 digits) 6011000990139424
JCB (16 digits) 3530111333300000
JCB (16 digits) 3566002020360505
MasterCard (16 digits) 5555555555554444
MasterCard (16 digits) 5105105105105100
Visa (16 digits) 4111111111111111
Visa (16 digits) 4012888888881881
Visa (13 digits) 4222222222222

Credit card prefix numbers:
Visa: 13 or 16 numbers starting with 4
MasterCard: 16 numbers starting with 5
Discover: 16 numbers starting with 6011
AMEX: 15 numbers starting with 34 or 37

A: Most payment gateways provide such numbers for testing their services, but they will generally only work on the staging/test versions of those gateways.

A: Depending on your payment gateway, there are two ways to test a transaction.
For example, with authorize.net, if you send "X_TEST_TRANSACTION=true" (or something like that, it's been a long time) with your POST, it will run it in test mode.
They also provide a test Visa and test MasterCard number that will always come back as approved if in test mode, and declined in production mode.
Look at your gateway API documentation; it will be clearly detailed there.

A: Most payment processors provide either a testing number (PayPal does this) or the ability to go into testing mode (in which no transactions actually get processed). Consult the documentation.
stackoverflow_0000066880_credit_card_testing.txt
Q: When to commit changes?

Using Oracle 10g, accessed via Perl DBI, I have a table with a few tens of millions of rows that is updated a few times per second while being read much more frequently by another process. Soon the update frequency will increase by an order of magnitude (maybe two). Someone suggested that committing every N updates instead of after every update will help performance. I have a few questions:

Will that be faster or slower, or does it depend? (I'm planning to benchmark both ways as soon as I can get a decent simulation of the new load.)
Why will it help / hinder performance? If "it depends...", on what?
If it helps, what's the best value of N?
Why can't my local DBA have a helpful straight answer when I need one? (Actually, I know the answer to that one.) :-)

EDIT: @codeslave: Thanks. Btw, losing uncommitted changes is not a problem - I don't delete the original data used for updating till I am sure everything is fine. And btw, the cleaning lady did unplug the server, TWICE :-)
Some googling showed it might help because of an issue related to rollback segments, but I still don't know a rule of thumb for N - every few tens? Hundreds? Thousands?
@diciu: Great info, I'll definitely look into that.

A: A commit results in Oracle writing stuff to the disk - i.e. to the redo log file, so that whatever the transaction being committed has done can be recovered in the event of a power failure, etc.
Writing to a file is slower than writing to memory, so a commit will be slower if performed for many operations in a row rather than for a set of coalesced updates.
In Oracle 10g there's an asynchronous commit that makes it much faster but less reliable: https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-6158695.html
PS: I know for sure that, in a scenario I've seen in a certain application, changing the number of coalesced updates from 5K to 50K makes it faster by an order of magnitude (10 times faster).

A: Reducing the frequency of commits will certainly speed things up; however, as you are reading and writing to this table frequently, there is the potential for locks. Only you can determine the likelihood of the same data being updated at the same time. If the chance of this is low, commit every 50 rows and monitor the situation. Trial and error, I'm afraid :-)

A: As well as reducing the commit frequency, you should also consider performing bulk updates instead of individual ones.

A: Faster/Slower?
It will probably be a little faster. However, you run a greater risk of running into deadlocks, losing uncommitted changes should something catastrophic happen (cleaning lady unplugs the server), FUD, fire, brimstone, etc.
Why would it help?
Obviously fewer commit operations, which in turn means fewer disk writes, etc.
DBAs and straight answers?
If it was easy, you wouldn't need one.

A: If you "don't delete the original data used for updating till [you are] sure everything is fine", then why don't you remove all those incremental commits in between, and roll back if there's a problem? It sounds like you have effectively built a transaction system on top of transactions.

A: @CodeSlave: your question is answered by @stevechol - if I remove ALL the incremental commits, there will be locks. I guess if nothing better comes along I'll follow his advice: pick a random number, monitor the load, and adjust accordingly, while applying @diciu's tweaks.
PS: the transaction-on-top-of-transactions thing is just accidental: I get the files used for updates by FTP, and instead of deleting them immediately I set a cron job to delete them a week later (if no one using the application has complained). That means if something goes wrong I have a week to catch the errors.
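Since the question is Perl DBI specifically, here is a minimal sketch of the commit-every-N pattern. The table, statement, and source of updates are placeholders, and N = 1000 is just a starting point to tune against the benchmark mentioned above:

use DBI;

my $dbh = DBI->connect($dsn, $user, $pass,
                       { AutoCommit => 0, RaiseError => 1 });
my $sth = $dbh->prepare('UPDATE big_table SET val = ? WHERE id = ?');

my $n = 0;
while (my ($id, $val) = next_update()) {    # hypothetical feed of updates
    $sth->execute($val, $id);
    $dbh->commit if ++$n % 1000 == 0;       # commit every N = 1000 rows
}
$dbh->commit;                               # flush the final partial batch
$dbh->disconnect;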
stackoverflow_0000033204_commit_oracle_sql.txt
Q: What's the best way to work with SQL Server data non-programmatically?

We have a SQL Server database. To manipulate the data non-programmatically, I can use SQL Server Management Studio by right-clicking a table and selecting "Open Table". However, this is slow for very large tables, and sorting and filtering are cumbersome. Typically what we have done until now is to create an Access database containing linked tables which point to the SQL Server tables and views. Opening a large table is much faster this way, and Access has easy-to-use right-click filtering and sorting. However, since Access 2007, sorting in particular has been quite slow when working with large tables. The Access database can also inadvertently lock the database tables, blocking other processes that may need to access the data. Creating the Access database in the first place, and updating it when new tables are added to SQL Server, is also tedious. Is there a better way to work with the data that offers the usability of Access without its drawbacks?

A: Joel Coehoorn's answer is of course correct: if the data is critical or there are naive users using the data, then an application front end should be developed. That being said, I have cases where a wise user (OK, me) needs to just get in there and poke around.
Instead of directly looking at the tables, use MS Access but use queries to narrow down what you're looking at, both column-wise and row-wise. That will improve the speed. Then edit the query properties and make sure that the query is No Locks. That should eliminate any blocking behavior. You may want to limit the number of rows returned, which again will improve the speed. You can still edit the data in the query as you look at it.
Depending on what you're looking at, it may also be useful to set up database Views in the SQL Server to do some of the heavy lifting on the server rather than on the client.

A: I don't know how well it will do with really large tables, but Visual Studio is much quicker than SQL Management Studio for basic table operations. Open up your database in Server Explorer, right-click on a table, and select either "Open" to just display the data or "New Query" to filter, sort, etc.

A: I've used Visual Studio to do lots of stuff, just for convenience rather than having to log into the server and work on the database manager directly.
However, have you tried Toad for MS SQL (from Quest Software)? I use it all the time for Oracle, and have had good results (often better than Oracle's tools).

A: Editing raw data is a dangerous no-no. Better to identify the situations where you find yourself doing that and put together an application interface to act as an intermediary that can prevent you from doing stupid things like breaking a foreign key.

A: I don't know what the performance would be like for large datasets, but OpenOffice has a database program (Base), which is an Access clone and may be just what you are looking for.

A: You might want to read Tony Toews's Access Performance FAQ, which provides a number of hints on how to improve performance in an Access application. Perhaps one of those tips will solve the problem in your A2K7 app.
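The "No Locks" query setting mentioned above corresponds roughly to reading without shared locks in T-SQL. A sketch of the same idea for when you are poking around by hand in Management Studio - the table and column names are invented, and NOLOCK means you may see uncommitted ("dirty") data, which is usually acceptable for ad hoc browsing but not for anything load-bearing:

-- Browse a big table without blocking the writers (or being blocked by them)
SELECT TOP 1000 *
FROM dbo.BigTable WITH (NOLOCK)
WHERE SomeColumn = 'foo'
ORDER BY SomeDate DESC;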
What's the best way to work with SQL Server data non-programmatically?
We have a SQL server database. To manipulate the data non-programmatically, I can use SQL Server Management Studio by right-clicking a table and selecting "Open Table". However this is slow for very large tables and sorting and filtering is cumbersome. Typically what we have done until now is to create an Access database containing linked tables which point to the SQL Server tables and views. Opening a large table is much faster this way, and Access has easy-to-use right-click filtering and sorting. However, since Access 2007, sorting in particular has been quite slow when working with large tables. The Access database can also inadvertently lock the database tables, blocking other processes that may need to access the data. Creating the Access database in the first place, and updating it when new tables are added to SQL Server, is also tedious. Is there a better way to work with the data that offers the usability of Access without its drawbacks?
[ "Joel Coehoorn's answer is of course correct, that if the data is critical or there are naive users using the data, then a application front end should be developed. That being said, I have cases where a wise user (ok, me) user needs to just get in there and poke around. \nInstead of directly looking at the tables, use MS Access but use queries to narrow down what you're looking at both column wise and row wise. That will improve the speed. Then edit the query properties and make sure that the query is No Locks. That should eliminate any blocking behavior. You may want to limit the number of rows returned which again will improve the speed. You can still edit the data in the query as you look at it.\nDepending on what you're looking at, it may also be useful to set up database Views in the SQL Server to do some of the heavy lifting on the server rather than on the client.\n", "I don't know how well it will do with really large tables, but Visual Studio is much quicker than SQL Management Studio for basic table operations. Open up your database in Server Explorer, right-click on a table, and select either \"Open\" to just display the data or \"New Query\" to filter, sort, etc.\n", "I've used Visual Studio to do lots of stuff, just for convenience rather than having to log into the server and work on the database manager directly.\nHowever, have you tried Toad for MS SQL (from Quest Software)? I use it all the time for Oracle, and have had good results (often better than Oracle's tools).\n", "Editing raw data is a dangerous no-no. Better to identify the situations where you find yourself doing that and put together an application interface to act as an intermediary that can prevent you from doing stupid things like breaking a foreign key.\n", "I don't know what the performance would be like for large datasets, but open office has a database program (Base), which is an Access clone and may just be what you are looking for.\n", "You might want to read \nTony Toews's Access Performance FAQ, which provides a number of hints on how to improve performance in an Access application. Perhaps one of those tips will solve the problem in your A2K7 app.\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "ms_access", "sql_server", "ssms" ]
stackoverflow_0000040344_ms_access_sql_server_ssms.txt
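A hedged pyodbc sketch of the "No Locks" advice from the first answer in this record - reading ad-hoc data with a NOLOCK hint so casual browsing doesn't block writers. The connection string, table, and column names are assumptions, and NOLOCK permits dirty reads, so this is for inspection only, never for critical reporting:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=mydb;Trusted_Connection=yes;"
    )
    cur = conn.cursor()
    # TOP limits the rows returned, which also improves speed, as noted above
    cur.execute(
        "SELECT TOP 100 id, name FROM dbo.Customers WITH (NOLOCK) ORDER BY name"
    )
    for row in cur.fetchall():
        print(row.id, row.name)
    conn.close()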
Q: What is the best source for information on COM error codes? I'm at a loss for where to get the best information about the meaning, likely causes, and possible solutions to resolve COM errors when all you have is the HRESULT. Searching Google for terms like '80004027' is just about useless as it sends you to random discussion groups where 90% of the time, the question 'What does 80004027 mean?' is not answered. What is a good resource for this? Why isn't MSDN the top Google result? A: I always use WinError.h. That has the vast majority of Windows error codes of all sorts. A key indicator to look out for is the Facility part of the code: the second most-significant byte. That is, 0x80nnmmmm, where nn is the Facility. That tells you which component generated the code. Anything with a facility of 7 is a Windows error code repackaged as an HRESULT, and you should convert the low word to decimal and look it up in WinError.h. There are also error ranges that appear in their own headers (e.g. anything from 12000 - 12999 is a WinInet error code and you should look it up in WinInet.h). Looking up the error code will give you the symbolic name, which might be found in more documentation than the code itself or the wording of the error message. FACILITY_ITF (which has the value 4, so these HRESULTs start 0x8004) indicates that the error is defined by the interface you're using; you'll have to check with that interface to find out what it means. Finally, COM also offers the interface IErrorInfo to retrieve extended error information: call GetErrorInfo to retrieve the error object. You'll have to query for ISupportErrorInfo and call that interface's InterfaceSupportsErrorInfo method to determine whether the interface you called actually set the error object (and of course, if it was template code, it could be lying). A: Error Lookup (ErrLook.exe) in your %PROGRAMFILES%[Some version of Visual Studio]\Tools Common7\ folder will give you the error message often, but not always: |---------------------------------------------------| | [] Error Lookup | |---------------------------------------------------| | Value: [0x80004027] | | | | Error Message | | +---------------------------------------------+ | | |The component or application containing the | | | |component has been disabled | | | | | | | +---------------------------------------------+ | | [Modules...] [Look up] [Close] [Help] | |---------------------------------------------------- If this doesn't work, you might follow some ideas from here: http://blogs.msdn.com/oldnewthing/archive/2008/09/01/8914664.aspx (Error Lookup simply calls FormatMessage() with the FORMAT_MESSAGE_FROM_SYSTEM flag) If the COM error is not a system error, you might also have to check the documentation of the component that threw the error. If you are catching the error in code, you can hope that the component implements rich errors (GetErrorInfo(), same as the Err object in VB) so you can get a full message describing the problem. A: Structure of COM Error Code Second in Google results for COM Error Code. A: Good link from Prakash (I wasn't aware of the RCNr stuff - I thought those bytes were part of the facility - but that's only true in 16 bit Windows it seems.) Often these unknown codes are specific to the interface/component you're using. The facility would be set to FACILITY_ITF. I have an old program HRPlus (link?) that parses HRESULTs.
What is the best source for information on COM error codes?
I'm at a loss for where to get the best information about the meaning, likely causes, and possible solutions to resolve COM errors when all you have is the HRESULT. Searching Google for terms like '80004027' is just about useless as it sends you to random discussion groups where 90% of the time, the question 'What does 80004027 mean?' is not answered. What is a good resource for this? Why isn't MSDN the top Google result?
[ "I always use WinError.h. That has the vast majority of Windows error codes of all sorts.\nA key indicator to look out for is the Facility part of the code: the second most-significant byte. That is, 0x80nnmmmm, where nn is the Facility. That tells you which component generated the code. Anything with a facility of 7 is a Windows error code repackaged as an HRESULT, and you should convert the low word to decimal and look it up in WinError.h. There are also error ranges that appear in their own headers (e.g. anything from 12000 - 12999 is a WinInet error code and you should look it up in WinInet.h).\nLooking up the error code will give you the symbolic name, which might be found in more documentation than the code itself or the wording of the error message.\nFACILITY_ITF (which has the value 4, so these HRESULTs start 0x8004) indicates that the error is defined by the interface you're using; you'll have to check with that interface to find out what it means.\nFinally, COM also offers the interface IErrorInfo to retrieve extended error information: call GetErrorInfo to retrieve the error object. You'll have to query for ISupportErrorInfo and call that interface's InterfaceSupportsErrorInfo method to determine whether the interface you called actually set the error object (and of course, if it was template code, it could be lying).\n", "Error Lookup (ErrLook.exe) in your %PROGRAMFILES%[Some version of Visual Studio]\\Tools Common7\\ folder will give you the error message often, but not always:\n\n |---------------------------------------------------|\n | [] Error Lookup |\n |---------------------------------------------------|\n | Value: [0x80004027] |\n | |\n | Error Message |\n | +---------------------------------------------+ |\n | |The component or application containing the | |\n | |component has been disabled | |\n | | | |\n | +---------------------------------------------+ |\n | [Modules...] [Look up] [Close] [Help] |\n |----------------------------------------------------\n\nIf this doesn't work, you might follow some ideas from here:\nhttp://blogs.msdn.com/oldnewthing/archive/2008/09/01/8914664.aspx\n(Error Lookup simply calls FormatMessage() with the FORMAT_MESSAGE_FROM_SYSTEM\nflag)\nIf the COM error is not a system error, you might also have to check the documentation of the component that threw the error.\nIf you are catching the error in code, you can hope that the component implements rich errors (GetErrorInfo(), same as the Err object in VB) so you can get a full message describing the problem.\n", "Structure of COM Error Code\nSecond in Google results for COM Error Code.\n", "Good link from Prakash (I wasn't aware of the RCNr stuff - I thought those bytes were part of the facility - but that's only true in 16 bit Windows it seems.) \nOften these unknown codes are specific to the interface/component you're using. The facility would be set to FACILITY_ITF. I have an old program HRPlus (link?) that parses HRESULTs.\n" ]
[ 6, 5, 0, 0 ]
[]
[]
[ "com", "windows" ]
stackoverflow_0000052059_com_windows.txt
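A small Python sketch of the HRESULT dissection the first answer in this record describes - pull out the severity, facility, and code fields so you know where to look the code up. The facility table here lists only a few common values, not the full set:

    FACILITIES = {0: "FACILITY_NULL", 4: "FACILITY_ITF", 7: "FACILITY_WIN32"}

    def decode_hresult(hr: int) -> dict:
        return {
            "severity": (hr >> 31) & 0x1,    # 1 means failure
            "facility": (hr >> 16) & 0x7FF,  # which component produced it
            "code": hr & 0xFFFF,             # look up in WinError.h etc.
        }

    info = decode_hresult(0x80004027)
    print(hex(info["code"]), FACILITIES.get(info["facility"], "unknown"))
    # -> 0x4027 FACILITY_NULL; an 0x8004xxxx value would be FACILITY_ITF,
    #    i.e. defined by the interface you called

As the answer notes, FACILITY_WIN32 (7) codes should be converted to decimal and looked up as ordinary Windows error codes.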
Q: How do I return an array of strings from an ActiveX object to JScript I need to call into a Win32 API to get a series of strings, and I would like to return an array of those strings to JavaScript. This is for script that runs on the local machine for administration scripts, not for the web browser. My IDL file for the COM object has the interface that I am calling into as: HRESULT GetArrayOfStrings([out, retval] SAFEARRAY(BSTR) * rgBstrStringArray); The function returns correctly, but the strings are getting 'lost' when they are being assigned to a variable in JavaScript. The question is: What is the proper way to get the array of strings returned to a JavaScript variable? A: If I recall correctly, you'll need to wrap the SAFEARRAY in a VARIANT in order for it to get through, and then use a VBArray object to unpack it on the JS side of things: HRESULT GetArrayOfStrings(/*[out, retval]*/ VARIANT* pvarBstrStringArray) { // ... _variant_t ret; ret.vt = VT_ARRAY|VT_VARIANT; ret.parray = rgBstrStringArray; *pvarBstrStringArray = ret.Detach(); return S_OK; } then var jsFriendlyStrings = new VBArray( axOb.GetArrayOfStrings() ).toArray(); A: Shog9 is correct. COM scripting requires that all outputs be VARIANTS. In fact, it also requires that all the INPUTs be VARIANTS as well -- see the nasty details of IDispatch in your favorite help file. It's only through the magic of the Dual Interface implementation by ATL and similar layers (which most likely is what you are using) that you don't have to worry about that. The input VARIANTs passed by the calling code are converted to match your method signature before your actual method is called.
How do I return an array of strings from an ActiveX object to JScript
I need to call into a Win32 API to get a series of strings, and I would like to return an array of those strings to JavaScript. This is for script that runs on the local machine for administration scripts, not for the web browser. My IDL file for the COM object has the interface that I am calling into as: HRESULT GetArrayOfStrings([out, retval] SAFEARRAY(BSTR) * rgBstrStringArray); The function returns correctly, but the strings are getting 'lost' when they are being assigned to a variable in JavaScript. The question is: What is the proper way to get the array of strings returned to a JavaScript variable?
[ "If i recall correctly, you'll need to wrap the SAFEARRAY in a VARIANT in order for it to get through, and then use a VBArray object to unpack it on the JS side of things:\nHRESULT GetArrayOfStrings(/*[out, retval]*/ VARIANT* pvarBstrStringArray)\n{\n // ...\n\n _variant_t ret;\n ret.vt = VT_ARRAY|VT_VARIANT;\n ret.parray = rgBstrStringArray;\n *pvarBstrStringArray = ret.Detach();\n return S_OK;\n}\n\nthen\nvar jsFriendlyStrings = new VBArray( axOb.GetArrayOfStrings() ).toArray();\n\n", "Shog9\nis correct. COM scripting requires that all outputs be VARIANTS. \nIn fact, it also requires that all the INPUTs be VARIANTS as well -- see the nasty details of IDispatch in your favorite help file. It's only thought the magic of the Dual Interface implementation by ATL and similar layers (which most likely is what you are using) that you don't have to worry about that. The input VARIANTs passed by the calling code are converted to match your method signature before your actual method is called.\n" ]
[ 6, 1 ]
[]
[]
[ "activex", "com", "javascript" ]
stackoverflow_0000045169_activex_com_javascript.txt
Q: Tools to test/debug/fix PHP concurrency issues? I find myself doing some relatively advanced stuff with memcached in PHP. It's becoming a mental struggle to think about and resolve race conditions and concurrency issues caused by the lock-free nature of the cache. PHP seems pretty poor in tools when it comes to concurrency (threads, anyone?), so I wonder if there are any solutions out there to test/debug this properly. I don't want to wait until two users request two scripts that will run as parallel processes at the same time and cause a concurrency issue that will leave me scratching my head, or that I might not ever notice until it snowballs into a clusterfsck. Any magic PHP concurrency wand I should know of? A: PHP is not a language designed for multi-threading, and I don't think it ever will be. If you need mutex functionality, PHP has Semaphore functions you can compile in. Memcache has no mutex capability, but it can be emulated using the Memcache::add() method. If you are using a MySQL database, and are trying to prevent some kind of race condition corruption, you can use the lock tables statement, or use transactions. A: You could try pounding on your code with a load test tool that can make multiple requests at the same time. Jmeter comes to mind. A: Not specifically for this issue but: FirePHP?
Tools to test/debug/fix PHP concurrency issues?
I find myself doing some relatively advanced stuff with memcached in PHP. It's becoming a mental struggle to think about and resolve race conditions and concurrency issues caused by the lock-free nature of the cache. PHP seems pretty poor in tools when it comes to concurrency (threads, anyone?), so I wonder if there are any solutions out there to test/debug this properly. I don't want to wait until two users request two scripts that will run as parallel processes at the same time and cause a concurrency issue that will leave me scratching my head, or that I might not ever notice until it snowballs into a clusterfsck. Any magic PHP concurrency wand I should know of?
[ "PHP is not a language designed for multi-threading, and I don't think it ever will be.\nIf you need mutex functionality, PHP has a Semaphore functions you can compile in. \nMemcache has no mutex capability, but it can be emulated using the Memcache::add() method.\nIf you are using a MySQL database, and are trying to prevent some kind of race condition corruption, you can use the lock tables statement, or use transactions.\n", "You could try pounding on your code with a load test tool that can make multiple requests at the same time. Jmeter comes to mind.\n", "Not specifically for this issue but: FirePHP?\n" ]
[ 3, 1, 0 ]
[]
[]
[ "concurrency", "memcached", "php" ]
stackoverflow_0000066952_concurrency_memcached_php.txt
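A rough Python illustration of the Memcache::add() mutex emulation mentioned in the first answer of this record - add() only succeeds when the key does not exist yet, so it serves as a best-effort lock. The pymemcache client and the key and expiry values are assumptions for the sketch:

    from pymemcache.client.base import Client

    mc = Client(("localhost", 11211))

    def with_lock(key, work, timeout=10):
        lock_key = "lock:" + key
        # add() is atomic on the server: only one competing client wins
        if not mc.add(lock_key, b"1", expire=timeout, noreply=False):
            raise RuntimeError("someone else holds the lock")
        try:
            return work()
        finally:
            mc.delete(lock_key)  # expire= above guards against crashed holders

    with_lock("popular-entries", lambda: print("critical section"))

The same pattern in PHP (Memcache::add with an expiry) is what the answer is hinting at.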
Q: Haxe iteration on Dynamic I have a variable of type Dynamic and I know for sure one of its fields, let's call it a, actually is an array. But when I'm writing var d : Dynamic = getDynamic(); for (t in d.a) { } I get a compilation error on line two: You can't iterate on a Dynamic value, please specify Iterator or Iterable How can I make this compilable? A: Haxe can't iterate over Dynamic variables (as the compiler says). You can make it work in several ways, where this one is probably easiest (depending on your situation): var d : {a:Array<Dynamic>} = getDynamic(); for (t in d.a) { ... } You could also change Dynamic to the type of the contents of the array. A: Another way to do the same is to use an extra temp variable and explicit typing: var d = getDynamic(); var a: Array<Dynamic> = d.a; for (t in a) { ... }
Haxe iteration on Dynamic
I have a variable of type Dynamic and I know for sure one of its fields, let's call it a, actually is an array. But when I'm writing var d : Dynamic = getDynamic(); for (t in d.a) { } I get a compilation error on line two: You can't iterate on a Dynamic value, please specify Iterator or Iterable How can I make this compilable?
[ "Haxe can't iterate over Dynamic variables (as the compiler says).\nYou can make it work in several ways, where this one is probably easiest (depending on your situation):\nvar d : {a:Array<Dynamic>} = getDynamic();\nfor (t in d.a) { ... }\n\nYou could also change Dynamic to the type of the contents of the array.\n", "Another way to do the same is to use an extra temp variable and explicit typing:\nvar d = getDynamic();\nvar a: Array<Dynamic> = d.a;\nfor (t in a) { ... }\n\n" ]
[ 6, 3 ]
[]
[]
[ "arrays", "for_loop", "haxe", "iterable", "loops" ]
stackoverflow_0000051781_arrays_for_loop_haxe_iterable_loops.txt
Q: google maps providing directions in local language I noticed that Google maps is providing directions in my local language (Hungarian) when I am using google chrome, but English language directions when I am using it from IE. I would like to know how chrome figures this out and how I can write code that always returns directions in the user's language. A: HTTP requests include an Accept-Language header which is set according to your locale preferences on most OS/browser combinations. Google uses a combination of that, the local domain you use (eg 'google.it', 'google.hu') and any preferences you set with the Preferences link in the home page to assign a language to your pages. It's likely that IE is misrepresenting your locale to Google Maps, whereas Chrome has correctly guessed it. You can change IE's locale by changing your national settings in Control Panel, while Chrome's locale can be changed in (wrench menu) > Preferences.
google maps providing directions in local language
I noticed that Google maps is providing directions in my local language (Hungarian) when I am using google chrome, but English language directions when I am using it from IE. I would like to know how chrome figures this out and how I can write code that always returns directions in the user's language.
[ "HTTPrequests` include an Accept-Language header which is set according to your locale preferences on most OS/browser combinations. Google uses a combination of that, the local domain you use (eg 'google.it', 'google.hu') and any preferences you set with the Preferences link in the home page to assign a language to your pages.\nIt's likely that IE is misrepresenting your locale to Google Maps, whereas Chrome has correctly guessed it. You can change IE's locale by changing your national settings in Control Panel, while Chrome's locale can be changed in (wrench menu) > Preferences.\n" ]
[ 3 ]
[ "I could be way off but I think it's fairly safe to assume that google, is using gears.\n" ]
[ -1 ]
[ "google_maps" ]
stackoverflow_0000067029_google_maps.txt
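A quick illustration, in Python, of the Accept-Language mechanism the answer in this record describes: the same URL can answer in different languages depending on the header the client sends. The URL is only an example, and as the answer notes, Google may also weigh the domain and saved account preferences:

    import requests

    for lang in ("hu", "en"):
        resp = requests.get(
            "https://maps.google.com/",
            headers={"Accept-Language": lang},
            timeout=10,
        )
        # the response language should normally follow the request header
        print(lang, "->", resp.headers.get("Content-Language"), len(resp.text))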
Q: Is there an Eclipse command to surround the current selection with parentheses? Is there an Eclipse command to surround the current selection with parentheses? Creating a template is a decent workaround; it doesn't work with the "Surround With" functionality, because I want to parenthesize an expression, not an entire line, and that requires ${word_selection} rather than ${line_selection}. Is there a way that I can bind a keyboard shortcut to this particular template? Ctrl-space Ctrl-space arrow arrow arrow isn't as slick as I'd hoped for. A: Maybe not the correct answer, but at least a workaround: define a Java template with the name "parenthesis" (or "pa") with the following : (${word_selection})${cursor} once the word is selected, ctrl-space + p + use the arrow keys to select the template I used this technique for boxing primary types in JDK 1.4.2 and it saves quite a lot of typing. A: Easy, Window->Prefs, then select Java->Editor->Templates Create a new template with : (${line_selection}${cursor}) The "line_selection" means you have to select more than one line. You can try creating another one with "word_selection", too. Then, select text, right click, Surround With... and choose your new template.
Is there an Eclipse command to surround the current selection with parentheses?
Is there an Eclipse command to surround the current selection with parentheses? Creating a template is a decent workaround; it doesn't work with the "Surround With" functionality, because I want to parenthesize an expression, not an entire line, and that requires ${word_selection} rather than ${line_selection}. Is there a way that I can bind a keyboard shortcut to this particular template? Ctrl-space Ctrl-space arrow arrow arrow isn't as slick as I'd hoped for.
[ "Maybe not the correct answer, but at least a workaround:\n\ndefine a Java template with the name \"parenthesis\" (or \"pa\") with the following :\n(${word_selection})${cursor}\nonce the word is selected, ctrl-space + p + use the arrow keys to select the template\n\nI used this technique for boxing primary types in JDK 1.4.2 and it saves quite a lot of typing.\n", "Easy, Window->Prefs, then select Java->Editor->Templates\nCreate a new template with : (${line_selection}${cursor})\nThe \"line_selection\" means you have to select more than one line.\nYou can try creating another one with \"word_selection\", too.\nThen, select text, right click, Surround With... and choose your new template.\n" ]
[ 35, 4 ]
[]
[]
[ "eclipse", "keyboard_shortcuts" ]
stackoverflow_0000066986_eclipse_keyboard_shortcuts.txt
Q: What methods of caching, other than to file or database, are available? Currently I know of only two ways to cache data (I use PHP but I assume that the same will apply to most languages). Save the cache to a file Save the cache to a large DB field Are there any other (perhaps better) ways of caching or is it really just this simple? A: Maybe you want to state more precisely what you want to cache. You have all these opportunities to cache: Accessing the Data Base where you cache the data first correctly tuning your RDBMS, then using a layer to delegate the decision to detect multiple queries for the same data (with AdoDB for example.) Extracting calculations from loops in the code so you don't compute the same value multiple times. Here is your third way: storing results in the session for the user. Precompiling the PHP code with an extension like APC Cache. This way you don't have to compile the same PHP code for every request. The page sent to the user making sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or maybe making dynamic pages completely static (having a batch process that generates .html pages); or by using a proxy cache like Squid. Prefetching, and by this I refer to all those opportunities you have to improve the user experience just by doing things while the user isn't looking your way. For example, preloading IMG tags in the HTML file, tuning the RDBMS for prefetching, precomputing results, storing complex computations in the database, etc. From my experience, I'd bet you that your code can be improved a lot before we start to talk about caching things. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like XDebug. Verify also how well you are making your SQL queries and how well you are indexing your tables. Then check your code again to look for opportunities to apply the rule "read many times but write just once". Use a simple tool like YSlow to hint at other simple things to improve. Check your code again looking for opportunities to put logic in the browser (via JavaScript). A: You can also cache in memory which is much more efficient. Try memcached. A: Seconding memcached; it does the simple stuff well and can go distributed and all that jazz if you need it too. A: If you're using Apache, you can use mod_rewrite to statically cache your web pages. Let's say you're using PHP, and you have a request for "/somepage.php". In your .htaccess file you put the following: RewriteEngine on RewriteCond %{QUERY_STRING} ^$ # let's not cache urls with queries RewriteCond %{REQUEST_METHOD} ^GET$ # or POST/PUT/DELETE requests RewriteCond static_cache/%{REQUEST_URI} -s # Check that this file exists and is > 0 bytes RewriteRule (^.*$) static_cache$1 [L] # If all the conditions are met, we rewrite this request to hit the static cache instead If your cache turns up empty, the request is handled by your PHP script as usual, so now it's simply a matter of making your PHP script store the resulting HTML in the cache. The simplest way to do this is using another htaccess rule to prepend and append a couple of PHP files to all your PHP requests (this might or might not be a good idea, depending on your application): php_value auto_prepend_file "pre_cache.php" php_value auto_append_file "post_cache.php" Then you'd do something like this: pre_cache.php: ob_start(); post_cache.php: $result = ob_get_flush(); if(!$_SERVER['QUERY_STRING']) { # Again, we're not caching query string requests file_put_contents("static_cache/" . __FILE__, $result); } With some additional regular expressions in the .htaccess file we could probably start caching query string requests as well, but I'll leave that as an exercise for the reader :)
What methods of caching, other than to file or database, are available?
Currently I know of only two ways to cache data (I use PHP but I assume that the same will apply to most languages). Save the cache to a file Save the cache to a large DB field Are there any other (perhaps better) ways of caching or is it really just this simple?
[ "Maybe you want to explicit more precisely what you want to cache. You have all this opportunities to cache: \n\nAccessing the Data Base where you cache the data first correctly tuning your RDBMS, then using a layer to delegate the decision to detect multiple queries for the same data (with AdoDB for example.)\nExtracting calculations from loops in the code so you don't compute the same value multiple times. Here your third way: storing results in the session for the user.\nPrecompiling the PHP code with an extension like APC Cache. This way you don't have to compile the same PHP code for every request.\nThe page sent to the user making sure you're setting the right META tags (do a good thing for the world and don't use ETL at least absolutly necessary); or maybe making dynamic pages completely static (having a batch process that generates .html pages); or by using a proxy cache like Squid.\nPrefetching and by this I refer all those opportunities you have to improve the user experience just by doing things while the user don't look your way. For example, preloading IMG tags in the HTML file, tunning the RDBMS for prefectching, precomputing results storing complex computations in the database, etc. \n\nFrom my experience, I'd bet you that your code can be improved a lot before we start to talk about caching things. Consider, for example, how well structured is the navigation of your site and how well you control the user experience. Then check your code with a tool like XDebug. \nVerify also how well are you making your SQL queries and how well are you indexing your tables. Then check your code again to look for opportunities to apply the rule \"read many times but write just once\"\nUse a simple tool like YSlow to hint other simple things to improve. Check your code again looking for opportunities to put logic in the browser (via JavaScript)\n", "You can also cache in memory which is much more efficient. Try memcached.\n", "Seconding memcached, does the simple stuff well and can go distributive and all that jazz if you need it too\n", "If you're using Apache, you can use mod_rewrite to statically cache your web pages. Lets say you're using PHP, and you have a request for \"/somepage.php\". In your .htaccess file you put the following:\nRewriteEngine on\nRewriteCond %{QUERY_STRING} ^$ # let's not cache urls with queries\nRewriteCond %{REQUEST_METHOD} ^GET$ # or POST/PUT/DELETE requests\nRewriteCond static_cache/%{REQUEST_URI} -s # Check that this file exists and is > 0 bytes\nRewriteRule (^.*$) static_cache$1 [L] # If all the conditions are met, we rewrite this request to hit the static cache instead\n\nIf your cache turns up empty, the request is handled by your php script as usual, so now it's simply a matter of making your php script store the resulting html in the cache. 
The simplest way to do this is using another htaccess rule to prepend end append a couple of php files to all your php requests (this might or might not be a good idea, depending on your application):\nphp_value auto_prepend_file \"pre_cache.php\"\nphp_value auto_append_file \"post_cache.php\"\n\nThen you'd do something like this:\npre_cache.php:\nob_start();\n\npost_cache.php:\n$result = ob_get_flush();\nif(!$_SERVER['QUERY_STRING']) { # Again, we're not caching query string requests\n file_put_contents(\"static_cache/\" + __FILE__, $result);\n}\n\nWith some additional regular expressions in the .htaccess file we could probably start caching query string requests as well, but I'll leave that as an exercise for the reader :)\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "caching", "language_agnostic" ]
stackoverflow_0000004062_caching_language_agnostic.txt
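A tiny in-process Python sketch of the "cache in memory" option from the answers in this record - get-or-compute with a TTL, the same pattern you would run against memcached. Purely illustrative; a real deployment would use memcached or APC as suggested:

    import time

    _store = {}  # key -> (expires_at, value)

    def cached(key, ttl, compute):
        now = time.time()
        hit = _store.get(key)
        if hit and hit[0] > now:   # fresh entry: skip the expensive work
            return hit[1]
        value = compute()          # miss or stale: recompute once
        _store[key] = (now + ttl, value)
        return value

    def slow_query():
        time.sleep(1)              # stand-in for a heavy DB call
        return ["row1", "row2"]

    print(cached("front-page", 60, slow_query))  # slow, fills the cache
    print(cached("front-page", 60, slow_query))  # instant, from memory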
Q: Validation framework for business app built on Spring 2.5 What could be the best strategy for writing a validation layer for a mid-enterprise level business application built on Spring 2.5? I know that Spring provides a facility where we can implement the Validator interface and write validation logic in the validate method. But this will be restricted to only web requests coming through the Spring controller. I would like to develop a validation framework which can be utilized during web-services calls. In other words, the framework can remain and be called independently without the need of implementing the Validator interface, and yet it can be automatically integrated into the Spring MVC flow. Hope you get my point. A: The Spring Validation framework can be used outside of Spring MVC. What WebServices Stack are you using? If you are using Spring-WS (Spring's Web Services stack) they have special instructions on how to set up the validator here: http://static.springframework.org/spring-ws/sites/1.5/reference/html/server.html#d0e2313 If you are using some other stack, it is probably easier to implement something for that stack (or find one) that will use Spring's validation framework. A: Recall that the Validator interface defines two methods: boolean supports(Class clazz) void validate(Object target, Errors errors) The Object target is your form object, which is the whole object representing the page to be shown to the user. The Errors instance will contain the errors that will be displayed to the user. So, what you need to do is define an intermediary that can be called with the specifics in your form that you want to validate which are also the same as in your web service. The intermediary can take one of two forms: (probably the best): public interface ErrorReturning { public void getErrors(Errors errors); } (this can get ugly really fast if more than two states are added): public interface ValidationObject { public Errors getErrors(Errors errors); public Object getResultOfWebServiceValidation(); } I would suggest that the first approach be implemented. With your common validation, pass an object that can be used for web service validation directly, but allow it to implement the getErrors() method. This way, in your validator for Spring, inside your validation method you can simply call: getCommonValidator().validate(partialObject).getErrors(errors); Your web service would be based around calls to getCommonValidator().validate(partialObject) for a direct object to be used in the web service. The second approach is like this, though the interface only allows for an object to be returned from the given object for a web service validation object, instead of the object being a usable web service validation object in and of itself.
Validation framework for business app built on Spring 2.5
What could be the best strategy for writing a validation layer for a mid-enterprise level business application built on Spring 2.5? I know that Spring provides a facility where we can implement the Validator interface and write validation logic in the validate method. But this will be restricted to only web requests coming through the Spring controller. I would like to develop a validation framework which can be utilized during web-services calls. In other words, the framework can remain and be called independently without the need of implementing the Validator interface, and yet it can be automatically integrated into the Spring MVC flow. Hope you get my point.
[ "The Spring Validation framework can be used outside of Spring MVC. What WebServices Stack are you using? If you are using Spring-WS (Spring's Web Services stack) they have special instructions on how to set up the validator here:\nhttp://static.springframework.org/spring-ws/sites/1.5/reference/html/server.html#d0e2313\nIf you are using some other stack, it is probably easier to implement something for that stack (or find one) that will use Spring's validation framework.\n", "Recall that the Validator interface defines two methods:\nboolean supports(Class clazz)\nvoid validate(Object target, Errors errors)\n\nThe Object target is your form object, which is the whole object representing the page to be shown to the user. The Errors instance will contain the errors that will be displayed to the user.\nSo, what you need to do is define an intermediary that can be called with the specifics in your form that you want to validate which are also the same as in your web service. The intermediary can take one of two forms:\n\n(probably the best):\npublic interface ErrorReturning { \n public void getErrors(Errors errors);\n}\n\n(this can get ugly really fast if more than two states are added):\npublic interface ValidationObject {\n public Errors getErrors(Errors errors);\n public Object getResultOfWebServiceValidation();\n}\n\n\nI would suggest that the first approach be implemented. With your common validation, pass an object that can be used for web service validation directly, but allow it to implement the getErrors() method. This way, in your validator for Spring, inside your validation method you can simply call:\ngetCommonValidator().validate(partialObject).getErrors(errors);\n\nYour web service would be based around calls to getCommonValidator().validate(partialObject) for a direct object to be used in the web service.\nThe second approach is like this, though the interface only allows for an object to be returned from the given object for a web service validation object, instead of the object being a usable web service validation object in and of itself.\n" ]
[ 1, 0 ]
[]
[]
[ "spring", "validation" ]
stackoverflow_0000057314_spring_validation.txt
Q: How can I change the way my Drupal theme displays the front page I am trying to build a website for my college's magazine. I used the "views" module to show a block of static content I created on the front page. My question is: how can I edit the theme's css so it changes the way that block of static content is displayed? For reference, here's the link to the site (in Portuguese, and with almost zero content for now). A: I can't access your site at the moment, so I'm basing this on fairly limited information. But if the home page is static content, the views module might not be appropriate. It might be better to create a page (In the menu, go to: Create content > page), make a note of the page's url, and then change the default home page to that url (Administer > Site Configuration > Site information, 'Default front page' is at the bottom). Although I might be misunderstanding what you mean by 'static content'. But however you're creating the front page, don't edit the css in the theme - it'll get overwritten next time you upgrade. Instead you need to create a sub-theme. As an example, if you want to subtheme Garland in Drupal 6, you first need to set up a directory for your themes. Go to sites/all/ in your Drupal installation, and create a subdirectory called themes if it doesn't already exist. Go into that directory, and create a directory for your subtheme, say mytheme (i.e. sites/all/themes/mytheme/). Then use your text editor to create a file called mytheme.info in that directory, with the contents: name = My Theme version = 0.1 core = 6.x base theme = garland stylesheets[all][] = mytheme.css And then use your text editor to create a file called mytheme.css in that directory, and put the extra CSS in there. For more information, there's the Drupal documentation on .info files and style sheets. Although, you might want to buy a book, as the online documentation isn't great. A: The main css file that drives your content is the styles.css file located in your currently selected theme. In your case that means that most of your site styling is driven by this file: /aroda/roda/themes/garland/style.css with basic coloring effects handled by this file: /aroda/roda/files/color/garland-d3985506/style.css You're currently using Garland, the default Drupal theme included with the core download, so for best practices you shouldn't edit the included style.css file directly. Instead, you should, as Daniel James said, create a subdirectory in /sites/all called "themes". If you're using Drupal 6, I'd follow Daniel James's directions from there. If you're using Drupal 5, I'd go ahead and copy the garland directory into the themes directory and rename it for something specific to your site (aroda_v1) so you would have something like /sites/all/themes/aroda_v1 which would contain styles.css. At that point, you can edit the styles.css file directly to make any changes you see fit. Hope that helps! A: It looks like most of your CSS info is in some *.css files. There is also some inline Style info on the page. Your style for the static info comes from the in-line stuff. I am not sure how Drupal generates the page but the place to start looking is for any properties for "ultima-edicao". That is what the surrounding DIV is called.
How can I change the way my Drupal theme displays the front page
I am trying to build a website for my college's magazine. I used the "views" module to show a block of static content I created on the front page. My question is: how can I edit the theme's css so it changes the way that block of static content is displayed? For reference, here's the link to the site (in Portuguese, and with almost zero content for now).
[ "I can't access your site at the moment, so I'm basing this on fairly limited information. But if the home page is static content, the views module might not be appropriate. It might be better to create a page (In the menu, go to: Create content > page), make a note of the page's url, and then change the default home page to that url (Administer > Site Configuration > Site information, 'Default front page' is at the bottom). Although I might be misunderstanding what you mean by 'static content'.\nBut however you're creating the front page, don't edit the css in the theme - it'll get overwritten next time you upgrade. Instead you need to create a sub-theme.\nAs an example, if you want to subtheme Garland, in drupal 6. You first need to setup a directory for your themes. Go to sites/all/ in your drupal installation, and create a subdirectory called themes if it doesn't already exist. Go into that directory, and create a directory for your subtheme, say mytheme (i.e. sites/all/themes/mytheme/). Then use your text editor to create a file called mytheme.info in that directory, with the contents:\nname = My Theme\nversion = 0.1\ncore = 6.x\nbase theme = garland\nstylesheets[all][] = mytheme.css\n\nAnd then use your text editor to create a file called mytheme.css in that directory, and put the extra CSS in there.\nFor more information, there's the druapl documentation on .info files and style sheets. Although, you might want to buy a book, as the online documentation isn't great.\n", "The main css file that drives your content is the styles.css file located in your currently selected theme. In your case that means that most of your site styling is driven by this file: /aroda/roda/themes/garland/style.css with basic coloring effects handled by this file:\n/aroda/roda/files/color/garland-d3985506/style.css\nYou're currently using Garland, the default Drupal theme included with the core download, so for best practices you shouldn't edit the included style.css file directly. Instead, you should, as Daniel James said, create a subdirectory in /sites/all called \"themes\".\nIf you're using Drupal 6, I'd follow Daniel James directions from there. If you're using Drupal 5, I'd go ahead and copy the garland directory into the themes directory and rename it for something specific to your site (aroda_v1) so you would have something like /sites/all/themes/aroda_v1 which would contain styles.css. At that point, you can edit the styles.css file directly to make any changes you see fit. Hope that helps!\n", "It looks like most of your CSS info is in some *.css files. There is also some inline Style info on the page. Your style for the static info comes from the in-line stuff. I am not sure how Drupal generates the page but the place to start looking is for any properties for \"ultima-edicao\". That is what the surrounding DIV is called.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "drupal", "drupal_theming" ]
stackoverflow_0000035372_drupal_drupal_theming.txt
Q: Unescaping angle-brackets through System.Xml.XmlWriter I'm writing a string containing some XML via System.Xml.XmlWriter. I'm stuck using WriteString(), and from the documentation: WriteString does the following: The characters &, <, and > are replaced with &amp;, &lt;, and &gt;, respectively. I'd like this to stop, but I can't seem to find any XmlWriterSettings properties to control this behavior. What are some workarounds? Thanks! David A: Try wrapping your real content between the CDATA tags: <![CDATA[ it's my content ]]> A: WriteString writes your content as a literal, and &, < and > are illegal in XML text, so they are escaped. If the other end is not unescaping them, that's where the problem lies. If you want to write unescaped XML, use WriteRaw. A: I'm stuck using WriteString(), I think that's the root of your problem. Can you explain more about your reason you're stuck using WriteString? My gut is you're using the wrong method for what you want to do
Unescaping angle-brackets through System.Xml.XmlWriter
I'm writing a string containing some XML via System.Xml.XmlWriter. I'm stuck using WriteString(), and from the documentation: WriteString does the following: The characters &, <, and > are replaced with &amp;, &lt;, and &gt;, respectively. I'd like this to stop, but I can't seem to find any XmlWriterSettings properties to control this behavior. What are some workarounds? Thanks! David
[ "Try wrapping your real content between the CDATA tags:\n<![CDATA[ it's my content ]]>\n", "WriteString writes your content as a literal, and &, < and > are illegal in XML text, so they are escaped.\nIf the other end is not unescaping them, that's where the problem lies.\nIf you want to write unescaped XML, use WriteRaw.\n", "\nI'm stuck using WriteString(),\n\nI think that's the root of your problem. Can you explain more about your reason you're stuck using WriteString? My gut is you're using the wrong method for what you want to do\n" ]
[ 2, 2, 1 ]
[]
[]
[ ".net", "c#", "xml" ]
stackoverflow_0000066479_.net_c#_xml.txt
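The escaping rule WriteString applies, shown in Python purely for cross-language illustration: &, <, and > must be escaped inside XML text, and a conforming reader unescapes them, so when both ends do their half the entities are invisible to application code:

    from xml.sax.saxutils import escape, unescape

    payload = "<b>bold & proud</b>"
    wire = escape(payload)            # what WriteString-style APIs emit
    print(wire)                       # &lt;b&gt;bold &amp; proud&lt;/b&gt;
    print(unescape(wire) == payload)  # True: the consumer gets it back

If the consumer is not unescaping, that is the bug to fix - or use WriteRaw/CDATA as the answers suggest.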
Q: ASP.net Membership Provider - Switching Between Forms and Integrated Auth I'm writing a web application that I want to be able to use forms authentication pointing to a SQL database, or use integrated authentication in different installations of the web app. I'm authenticating users just fine with either provider but I have a question on how to architect my database. Currently what I'm doing is using the code: public static string UserID { get { if (HttpContext.Current.User.Identity.AuthenticationType == "Forms") { //using database auth return Membership.GetUser().ProviderUserKey.ToString(); } else { //using integrated auth return HttpContext.Current.Request.LogonUserIdentity.User.ToString(); } } } I'm then using the returned key (depending on the provider it's the UserID from the aspnetdb database, or the windows SID) as the UserID on items they create, etc. The UserID fields are not relational to a Users table in the database though like you would traditionally do. Is there a better way to go about this? I've thought of creating a users table with two fields of UserID (internal), and ExternalID (which stores the windows SID or the ID from aspnetdb) then using the internal UserID throughout the application, but then it's not as clean with the membership classes in c#. It seems like there's a lot of apps that allow you to switch between integrated auth, and FBA (Sharepoint 2007 comes to mind first) but I couldn't find any good tutorials on the web on how to architect the solution. Any help would be greatly appreciated. Thanks. A: Why not just use two different membership providers (Windows and Forms, instead of using LogonUserIdentify specifically)? In the code example you posted, you could use the same method in the Membership namespace for any provider. You can change which provider is the default in the Web.config file. I agree that using code specific to "integrated authentication" is not clean. Here's an example: <membership defaultProvider="1"> <providers> <clear/> <add name="1" ... /> <add name="2" ... /> </providers> </membership> Then, change the defaultProvider. The ASP.NET controls that deal with Membership (e.g. the Login control) have a property that lets you choose a Membership provider, which means you can select one programatically. The user ID is only relevant in the context of the provider, so using an "internal" user name seems unnecessary - use the provider name and the external user ID (since the same user ID could exist in several providers) in your own data store. There usually isn't any need to create your own user IDs, since the ASP.NET providers will take care of that behind the scenes. For example, if you use an ASP.NET Profile provider, you will have per-user profile information, independent of which Membership provider was used to authenticate the user.
ASP.net Membership Provider - Switching Between Forms and Integrated Auth
I'm writing a web application that I want to be able to use forms authentication pointing to a SQL database, or use integrated authentication in different installations of the web app. I'm authenticating users just fine with either provider but I have a question on how to architect my database. Currently what I'm doing is using the code: public static string UserID { get { if (HttpContext.Current.User.Identity.AuthenticationType == "Forms") { //using database auth return Membership.GetUser().ProviderUserKey.ToString(); } else { //using integrated auth return HttpContext.Current.Request.LogonUserIdentity.User.ToString(); } } } I'm then using the returned key (depending on the provider it's the UserID from the aspnetdb database, or the windows SID) as the UserID on items they create, etc. The UserID fields are not relational to a Users table in the database though like you would traditionally do. Is there a better way to go about this? I've thought of creating a users table with two fields of UserID (internal), and ExternalID (which stores the windows SID or the ID from aspnetdb) then using the internal UserID throughout the application, but then it's not as clean with the membership classes in c#. It seems like there's a lot of apps that allow you to switch between integrated auth, and FBA (Sharepoint 2007 comes to mind first) but I couldn't find any good tutorials on the web on how to architect the solution. Any help would be greatly appreciated. Thanks.
[ "Why not just use two different membership providers (Windows and Forms, instead of using LogonUserIdentify specifically)? In the code example you posted, you could use the same method in the Membership namespace for any provider. You can change which provider is the default in the Web.config file. I agree that using code specific to \"integrated authentication\" is not clean. Here's an example:\n<membership defaultProvider=\"1\">\n <providers>\n <clear/>\n <add name=\"1\" ... />\n <add name=\"2\" ... />\n </providers>\n</membership>\n\nThen, change the defaultProvider. The ASP.NET controls that deal with Membership (e.g. the Login control) have a property that lets you choose a Membership provider, which means you can select one programatically.\nThe user ID is only relevant in the context of the provider, so using an \"internal\" user name seems unnecessary - use the provider name and the external user ID (since the same user ID could exist in several providers) in your own data store.\nThere usually isn't any need to create your own user IDs, since the ASP.NET providers will take care of that behind the scenes. For example, if you use an ASP.NET Profile provider, you will have per-user profile information, independent of which Membership provider was used to authenticate the user.\n" ]
[ 1 ]
[]
[]
[ "asp.net", "authentication" ]
stackoverflow_0000065024_asp.net_authentication.txt
Q: Expose an event handler to VBScript users of my COM object Suppose I have a COM object which users can access via a call such as: Set s = CreateObject("Server") What I'd like to be able to do is allow the user to specify an event handler for the object, like so: Function ServerEvent MsgBox "Event handled" End Function s.OnDoSomething = ServerEvent Is this possible and, if so, how do I expose this in my type library in C++ (specifically BCB 2007)? A: This is how I did it just recently. Add an interface that implements IDispatch and a coclass for that interface to your IDL: [ object, uuid(6EDA5438-0915-4183-841D-D3F0AEDFA466), nonextensible, oleautomation, pointer_default(unique) ] interface IServerEvents : IDispatch { [id(1)] HRESULT OnServerEvent(); } //... [ uuid(FA8F24B3-1751-4D44-8258-D649B6529494), ] coclass ServerEvents { [default] interface IServerEvents; [default, source] dispinterface IServerEvents; }; This is the declaration of the CServerEvents class: class ATL_NO_VTABLE CServerEvents : public CComObjectRootEx<CComSingleThreadModel>, public CComCoClass<CServerEvents, &CLSID_ServerEvents>, public IDispatchImpl<IServerEvents, &IID_IServerEvents , &LIBID_YourLibrary, -1, -1>, public IConnectionPointContainerImpl<CServerEvents>, public IConnectionPointImpl<CServerEvents,&__uuidof(IServerEvents)> { public: CServerEvents() { } // ... BEGIN_COM_MAP(CServerEvents) COM_INTERFACE_ENTRY(IServerEvents) COM_INTERFACE_ENTRY(IDispatch) COM_INTERFACE_ENTRY(IConnectionPointContainer) END_COM_MAP() BEGIN_CONNECTION_POINT_MAP(CServerEvents) CONNECTION_POINT_ENTRY(__uuidof(IServerEvents)) END_CONNECTION_POINT_MAP() // .. // IServerEvents STDMETHOD(OnServerEvent)(); private: CRITICAL_SECTION m_csLock; }; The key here is the implementation of the IConnectionPointImpl and IConnectionPointContainerImpl interfaces and the connection point map. The definition of the OnServerEvent method looks like this: STDMETHODIMP CServerEvents::OnServerEvent() { ::EnterCriticalSection( &m_csLock ); IUnknown* pUnknown; for ( unsigned i = 0; ( pUnknown = m_vec.GetAt( i ) ) != NULL; ++i ) { CComPtr<IDispatch> spDisp; pUnknown->QueryInterface( &spDisp ); if ( spDisp ) { spDisp.Invoke0( CComBSTR( L"OnServerEvent" ) ); } } ::LeaveCriticalSection( &m_csLock ); return S_OK; } You need to provide a way for your client to specify their handler for your events. You can do this with a dedicated method like "SetHandler" or something, but I prefer to make the handler an argument to the method that is called asynchronously. This way, the user only has to call one method: STDMETHOD(DoSomethingAsynchronous)( IServerEvents *pCallback ); Store the pointer to the IServerEvents, and then when you want to fire your event, just call the method: m_pCallback->OnServerEvent(); As for the VB code, the syntax for dealing with events is a little different than what you suggested: Private m_server As Server Private WithEvents m_serverEvents As ServerEvents Private Sub MainMethod() Set s = CreateObject("Server") Set m_serverEvents = New ServerEvents Call m_searchService.DoSomethingAsynchronous(m_serverEvents) End Sub Private Sub m_serverEvents_OnServerEvent() MsgBox "Event handled" End Sub I hope this helps. A: I'm a little hazy on the details, but maybe the link below might help: http://msdn.microsoft.com/en-us/library/ms974564.aspx It looks like your server object needs to implement IProvideClassInfo and then you call ConnectObject in your VBScript code. 
See also: http://blogs.msdn.com/ericlippert/archive/2005/02/15/373330.aspx A: I ended up following the technique described here.
Expose an event handler to VBScript users of my COM object
Suppose I have a COM object which users can access via a call such as: Set s = CreateObject("Server") What I'd like to be able to do is allow the user to specify an event handler for the object, like so: Function ServerEvent MsgBox "Event handled" End Function s.OnDoSomething = ServerEvent Is this possible and, if so, how do I expose this in my type library in C++ (specifically BCB 2007)?
[ "This is how I did it just recently. Add an interface that implements IDispatch and a coclass for that interface to your IDL:\n[\n object,\n uuid(6EDA5438-0915-4183-841D-D3F0AEDFA466),\n nonextensible,\n oleautomation,\n pointer_default(unique)\n]\ninterface IServerEvents : IDispatch\n{\n [id(1)]\n HRESULT OnServerEvent();\n}\n\n//...\n\n[\n uuid(FA8F24B3-1751-4D44-8258-D649B6529494),\n]\ncoclass ServerEvents\n{\n [default] interface IServerEvents;\n [default, source] dispinterface IServerEvents;\n};\n\nThis is the declaration of the CServerEvents class:\nclass ATL_NO_VTABLE CServerEvents :\n public CComObjectRootEx<CComSingleThreadModel>,\n public CComCoClass<CServerEvents, &CLSID_ServerEvents>,\n public IDispatchImpl<IServerEvents, &IID_IServerEvents , &LIBID_YourLibrary, -1, -1>,\n public IConnectionPointContainerImpl<CServerEvents>,\n public IConnectionPointImpl<CServerEvents,&__uuidof(IServerEvents)>\n{\npublic:\n CServerEvents()\n {\n }\n\n // ...\n\nBEGIN_COM_MAP(CServerEvents)\n COM_INTERFACE_ENTRY(IServerEvents)\n COM_INTERFACE_ENTRY(IDispatch)\n COM_INTERFACE_ENTRY(IConnectionPointContainer)\nEND_COM_MAP()\n\nBEGIN_CONNECTION_POINT_MAP(CServerEvents)\n CONNECTION_POINT_ENTRY(__uuidof(IServerEvents))\nEND_CONNECTION_POINT_MAP()\n\n // ..\n\n // IServerEvents\n STDMETHOD(OnServerEvent)();\n\nprivate:\n CRITICAL_SECTION m_csLock; \n};\n\nThe key here is the implementation of the IConnectionPointImpl and IConnectionPointContainerImpl interfaces and the connection point map. The definition of the OnServerEvent method looks like this:\nSTDMETHODIMP CServerEvents::OnServerEvent()\n{\n ::EnterCriticalSection( &m_csLock );\n\n IUnknown* pUnknown;\n\n for ( unsigned i = 0; ( pUnknown = m_vec.GetAt( i ) ) != NULL; ++i )\n { \n CComPtr<IDispatch> spDisp;\n pUnknown->QueryInterface( &spDisp );\n\n if ( spDisp )\n {\n spDisp.Invoke0( CComBSTR( L\"OnServerEvent\" ) );\n }\n }\n\n ::LeaveCriticalSection( &m_csLock );\n\n return S_OK;\n}\n\nYou need to provide a way for your client to specify their handler for your events. You can do this with a dedicated method like \"SetHandler\" or something, but I prefer to make the handler an argument to the method that is called asynchronously. This way, the user only has to call one method:\nSTDMETHOD(DoSomethingAsynchronous)( IServerEvents *pCallback );\n\nStore the pointer to the IServerEvents, and then when you want to fire your event, just call the method:\nm_pCallback->OnServerEvent();\n\nAs for the VB code, the syntax for dealing with events is a little different than what you suggested:\nPrivate m_server As Server\nPrivate WithEvents m_serverEvents As ServerEvents\n\nPrivate Sub MainMethod()\n Set s = CreateObject(\"Server\")\n Set m_serverEvents = New ServerEvents\n\n Call m_searchService.DoSomethingAsynchronous(m_serverEvents)\nEnd Sub\n\nPrivate Sub m_serverEvents_OnServerEvent()\n MsgBox \"Event handled\"\nEnd Sub\n\nI hope this helps.\n", "I'm a little hazy on the details, but maybe the link below might help:\nhttp://msdn.microsoft.com/en-us/library/ms974564.aspx\nIt looks like your server object needs to implement IProvideClassInfo and then you call ConnectObject in your VBScript code. See also:\nhttp://blogs.msdn.com/ericlippert/archive/2005/02/15/373330.aspx\n", "I ended up following the technique described here.\n" ]
[ 5, 2, 1 ]
[]
[]
[ "c++builder", "com", "events", "vbscript" ]
stackoverflow_0000061677_c++builder_com_events_vbscript.txt
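For comparison with the ATL answer above: if the COM server were written in C# instead of C++, the same connection-point event source could be declared with ComSourceInterfaces. This is a minimal sketch, not the asker's code; the Server, IServerEvents and OnDoSomething names mirror the question, and everything else is assumed for illustration.

using System;
using System.Runtime.InteropServices;

// Source interface: its methods become the COM events that
// late-bound clients (VBScript, VB6) can sink.
[ComVisible(true)]
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface IServerEvents
{
    [DispId(1)]
    void OnDoSomething();
}

// The coclass; ComSourceInterfaces makes the interop layer expose
// connection points for the events declared on this class.
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
[ComSourceInterfaces(typeof(IServerEvents))]
[ProgId("Server")]
public class Server
{
    public delegate void OnDoSomethingHandler();
    public event OnDoSomethingHandler OnDoSomething;

    public void DoSomething()
    {
        // Raises the COM event on every connected sink.
        var handler = OnDoSomething;
        if (handler != null) handler();
    }
}

A VBScript client would then hook it with Set s = CreateObject("Server") followed by WScript.ConnectObject-style wiring, rather than the s.OnDoSomething = ServerEvent syntax from the question.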
Q: Read Access File into a DataSet
Is there an easy way to read an entire Access file (.mdb) into a DataSet in .NET (specifically C# or VB)?
Or at least to get a list of tables from an access file so that I can loop through it and add them one at a time into a DataSet?

A: Thanks for the suggestions. I was able to use those samples to put together this code, which seems to achieve what I'm looking for.
Using cn = New OleDbConnection(connectionstring)
    cn.Open()
    Dim ds As DataSet = New DataSet()

    Dim Schema As DataTable = cn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, New Object() {Nothing, Nothing, Nothing, "TABLE"})
    For i As Integer = 0 To Schema.Rows.Count - 1
        Dim dt As DataTable = New DataTable(Schema.Rows(i)!TABLE_NAME.ToString())

        Using adapter = New OleDbDataAdapter("SELECT * FROM " + Schema.Rows(i)!TABLE_NAME.ToString(), cn)
            adapter.Fill(dt)
        End Using

        ds.Tables.Add(dt)
    Next i
End Using

A: You should be able to access it using an OleDbConnection.
Here's a tut on DB access using it for MS Access files.
In terms of getting the table names, back in my VB6 days I always used ADOX, not sure how they do this in .NET now.. Although I know there is a system table in the access file - wanna say "mso...". I google!
EDIT
Ah ha! msysobjects !! xD

A: MSDN has an article on how to use ADO.NET to connect and edit records in an Access database. Once your OleDB connection is made, you can easily create your DataReader/DataAdapter and process as needed.
EDIT: Gah! Curse you Rob and your god-like typing abilities!!! 8^D

A: There is a discussion on this point in Less Than Dot. Here is one example of code from the discussion.
public DataTable GetColumns(string tableName)
{
    string[] restrictions = new string[4];
    restrictions[2] = tableName;

    _connDb.Open();

    DataTable mDT = _connDb.GetSchema("Columns", restrictions);

    _connDb.Close();

    return mDT;
}

A: Your original question as worded is nonsense:

Is there an easy way to read an entire Access file (.mdb) into...

You certainly don't want the entire contents of the MDB file. What you want is the contents of your data tables that are stored in the MDB. Keep in mind that you don't want the contents of the system tables, either.
The key point:
You're asking about JET, not about ACCESS.
Jet is the database engine that ships as the default data store of Access (and in which Access's own objects are stored). But "Access" means something much more than just the data tables.
Whenever you ask a question and confuse Access and Jet, you will likely get at least some unuseful answers.
And you'll get scolded by the likes of me, because developers really should know better than to obfuscate crucial distinctions.
Read Access File into a DataSet
Is there an easy way to read an entire Access file (.mdb) into a DataSet in .NET (specifically C# or VB)? Or at least to get a list of tables from an access file so that I can loop through it and add them one at a time into a DataSet?
[ "Thanks for the suggestions. I was able to use those samples to put together this code, which seems to achieve what I'm looking for.\nUsing cn = New OleDbConnection(connectionstring)\n cn.Open()\n Dim ds As DataSet = new DataSet()\n\n Dim Schema As DataTable = cn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, New Object() {Nothing, Nothing, Nothing, \"TABLE\"})\n For i As Integer = 0 To Schema.Rows.Count - 1\n Dim dt As DataTable = New DataTable(Schema.Rows(i)!TABLE_NAME.ToString())\n\n Using adapter = New OleDbDataAdapter(\"SELECT * FROM \" + Schema.Rows(i)!TABLE_NAME.ToString(), cn)\n adapter.Fill(dt)\n End Using\n\n ds.Tables.Add(dt)\n Next i\nEnd Using\n\n", "You should be able to access it using an OleDbConnection.\nHeres a tut on DB access using it for MS Access files.\nIn terms of getting the table names, back in my VB6 days I always used ADOX, not sure how they do this in .NET now.. Although I know there is a system table in the access file - wanna say \"mso...\". I google!\nEDIT\nAh ha! msysobjects !! xD\n", "MSDN has an article on how to use ADO.NET to connect and edit records in an Access database. Once your OleDB connection is made, you can easily create your DataReader/DataAdapter and process as needed.\nEDIT: Gah! Curse you Rob and your god-like typing abilities!!! 8^D\n", "There is a discussion on this point in Less Than Dot. Here is one example of code from the discussion.\n public DataTable GetColumns(string tableName)\n {\n string[] restrictions = new string[4];\n restrictions[2] = tableName;\n\n _connDb.Open();\n\n DataTable mDT = _connDb.GetSchema(\"Columns\", restrictions);\n\n _connDb.Close();\n\n return mDT;\n }\n\n", "Your original question as worded is nonsense:\n\nIs there an easy way to read an entire Access file (.mdb) into...\n\nYou certainly don't want the entire contents of the MDB file. What you want is the contents of your data tables that are stored in the MDB. Keep in mind that you don't want the contents of the system tables, either.\nThe key point:\nYou're asking about JET, not about ACCESS.\nJet is the database engine that ships as the default data store of Access (and in which Access's own objects are stored). But \"Access\" means something much more than just the data tables.\nWhenever you ask a question and confuse Access and Jet, you will likely get at least some unuseful answers.\nAnd you'll get scolded by the likes of me, because developers really should know better than to obfuscate crucial distinctions.\n" ]
[ 5, 3, 2, 0, 0 ]
[]
[]
[ ".net", "dataset", "ms_access" ]
stackoverflow_0000046482_.net_dataset_ms_access.txt
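The accepted answer's loop translates almost mechanically to C#. A hedged sketch, assuming the classic Jet 4.0 OLE DB provider for .mdb files; the connection string and method name are illustrative, not from the thread.

using System.Data;
using System.Data.OleDb;

// Loads every user table of an .mdb file into a DataSet, mirroring
// the accepted VB answer above.
static DataSet LoadAccessFile(string path)
{
    var ds = new DataSet();
    string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path;
    using (var cn = new OleDbConnection(connStr))
    {
        cn.Open();
        // Restrict the schema rowset to user tables ("TABLE"),
        // which skips the MSys* system tables.
        DataTable schema = cn.GetOleDbSchemaTable(
            OleDbSchemaGuid.Tables,
            new object[] { null, null, null, "TABLE" });
        foreach (DataRow row in schema.Rows)
        {
            string tableName = row["TABLE_NAME"].ToString();
            var dt = new DataTable(tableName);
            using (var adapter = new OleDbDataAdapter(
                "SELECT * FROM [" + tableName + "]", cn))
            {
                adapter.Fill(dt);
            }
            ds.Tables.Add(dt);
        }
    }
    return ds;
}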
Q: Is Asp.net and Windows Workflow a good combination?
I am implementing a quite simple state-machine order processing application. It is an e-commerce application with a few twists. The users of the application will not be editing workflows by themselves. Microsoft claims that asp.net and Windows Workflow can be combined. How hard is it to install and maintain a combination of asp.net and Windows Workflow? I would be keeping the workflow state in sql-server. Is it easier for me to roll my own state machine code or is Windows Workflow the right tool for the job?

A: Asp.net and WF get along just fine, and WF doesn't add much maintenance overhead.
Whether or not this is the right design for you depends a lot on your needs. If you have a lot of event driven actions then WF might be worthwhile, otherwise the overhead of rolling your own tracking would probably add less complexity to the system.
WF is reasonably easy to work with so I'd suggest working up a prototype and experimenting with it.
Also, in my opinion, based on your requirements, I doubt WF would be the right solution for you.

A: It depends on your needs. How complex is the state machine? Where do you want the state machine to live (e.g. model vs. database)? WWF provides an event based state machine, which is good enough if your state machine is embedded in the model.
Personally I've implemented an e-commerce framework and other workflow based websites and I've always had a lot of joy from implementing database based state machines. Always worked without a hitch.
On the other hand, some colleagues of mine prefer WWF.
In any case it works perfectly with ASP.NET.

A: If your state machine is very simple, then I would say that you should just roll your own. You have more control over everything. You can deal with persistence on your own terms and not worry about how they do it.
WF does look pretty cool though, but I think that its power probably lies in the fact that it is easy to tie it into frameworks like CRM and Sharepoint. If you are going to use these in your application, then I would definitely consider using WF.
Full disclosure: I am definitely not a WF expert.
Is Asp.net and Windows Workflow a good combination?
I am implementing a quite simple state-machine order processing application. It is an e-commerce application with a few twists. The users of the application will not be editing workflows by themselves. Microsoft claims that asp.net and Windows Workflow can be combined. How hard is it to install and maintain a combination of asp.net and Windows Workflow? I would be keeping the workflow state in sql-server. Is it easier for me to roll my own state machine code or is Windows Workflow the right tool for the job?
[ "Asp.net and WF get along just fine, and WF doesn't add much maintenance overhead.\nWhether or not this is the right design for you depends a lot on your needs. If you have a lot of event driven actions then WF might be worthwhile, otherwise the overhead of rolling your own tracking would probably add less complexity to the system.\nWF is reasonably easy to work with so I'd suggest working up a prototype and experimenting with it.\nAlso, in my opinion, based on your requirements, I doubt WF would be the right solution for you.\n", "It depends on your needs. How complex is the state machine? Where do you want the state machine to live (e.g. model vs. database)? WWF provides an event based state machine, which is good enough if your state machine is embedded in the model.\nPersonally I've implemented an e-commerce framework and other workflow based websites and I've have always had a lot of joy from implementing database based state machines. Always worked without a hitch.\nOn the other hand, some colleagues of mine prefer WWF.\nIn any case it works perfectly with ASP.NET.\n", "If your state machine is very simple, then I would say that you should just roll your own. You have more control over everything. You can deal with persistence on your own terms and not worry about how they do it.\nWF does look pretty cool though, but I think that it's power probably lies in the fact that it is easy to tie it into frameworks like CRM and Sharepoint. If you are going to use these in your application, then I would definitely consider using WF.\nFull disclosure: I am definitely not a WF expert.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "asp.net", "workflow" ]
stackoverflow_0000067104_asp.net_workflow.txt
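To make the "roll your own" option in the answers above concrete, here is a minimal hand-rolled state machine sketch in C#. The states and transitions are invented for illustration and are not from the question; persisting the State value in a SQL Server column after each transition would cover the asker's storage requirement.

using System;
using System.Collections.Generic;

public enum OrderState { New, Paid, Shipped, Cancelled }

public class Order
{
    public OrderState State { get; private set; }

    // Legal transitions; anything not listed throws.
    private static readonly Dictionary<OrderState, OrderState[]> Allowed =
        new Dictionary<OrderState, OrderState[]>
        {
            { OrderState.New,       new[] { OrderState.Paid, OrderState.Cancelled } },
            { OrderState.Paid,      new[] { OrderState.Shipped, OrderState.Cancelled } },
            { OrderState.Shipped,   new OrderState[0] },
            { OrderState.Cancelled, new OrderState[0] },
        };

    public Order() { State = OrderState.New; }

    public void MoveTo(OrderState next)
    {
        if (Array.IndexOf(Allowed[State], next) < 0)
            throw new InvalidOperationException(
                string.Format("Cannot go from {0} to {1}", State, next));
        State = next; // persist to the database here
    }
}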
Q: Detect changes in random ordered input (hash function?)
I'm reading lines of text that can come in any order. The problem is that the output can actually be identical to the previous output. How can I detect this, without sorting the output first?
Is there some kind of hash function that can take identical input, but in any order, and still produce the same result?

A: The easiest way would seem to be to hash each line on the way in, storing the hash and the original data, and then compare each new hash with your collection of existing hashes. If you get a positive, you could compare the actual data, to make sure it's not a false positive - though this would be extremely rare, you could go with a quicker hash algorithm, like MD5 or CRC (instead of something like SHA, which is slower but less likely to collide), just so it's quick, and then compare the actual data when you get a hit.

A: So you have input like
A B C D
D E F G
C B A D

and you need to detect that the first and third lines are identical?

A: If you want to find out if two files contain the same set of lines, but in a different order, you can use a regular hash function on each line individually, then combine them with a function where ordering doesn't matter, like addition.

A: If the lines are fairly long, you could just keep a list of the hashes of each line -- sort those and compare with previous outputs.
If you don't need a 100% fool-proof solution, you could store the hash of each line in a Bloom filter (look it up on Wikipedia) and compare the Bloom filters at the end of processing. This can give you false positives (i.e. you think you have the same output but it isn't really the same) but you can tweak the error rate by adjusting the size of the Bloom filter...

A: If you add up the ASCII values of each character, you'd get the same result regardless of order.
(This may be a bit too simplified, but perhaps it sparks an idea for you.
See Programming Pearls, section 2.8, for an interesting back story.)

A: Any of the hash-based methods may produce bad results because more than one string can produce the same hash. (It's not likely, but it's possible.) This is particularly true of the suggestion to add the hashes, since you would essentially be taking a particularly bad hash of the hash values.
A hash method should only be attempted if it's not critical that you miss a change or spot a change where none exists.
The most accurate way would be to keep a Map using the line strings as key and storing the count of each as the value. (If each string can only appear once, you don't need the count.) Compute this for the expected set of lines. Duplicate this collection to examine the incoming lines, reducing the count for each line as you see it.

If you encounter a line with a zero count (or no map entry at all), you've seen a line you didn't expect.
If you end this with non-zero entries remaining in the Map, you didn't see something you expected.

A: Well the problem specification is a bit limited.
As I understand it you wish to see if several strings contain the same elements regardless of order.
For example:
A B C
C B A

are the same.
The way to do this is to create a set of the values then compare the sets. To create a set do:
HashSet<String> set = new HashSet<String>();
for (String item : strings) {
    set.add(item);
}

Then just compare the contents of the sets by running through one of the sets and comparing it w/others. The execution time will be O(N) instead of O(NlogN) for the sorting example.
Detect changes in random ordered input (hash function?)
I'm reading lines of text that can come in any order. The problem is that the output can actually be identical to the previous output. How can I detect this, without sorting the output first? Is there some kind of hash function that can take identical input, but in any order, and still produce the same result?
[ "The easiest way would seem to be to hash each line on the way in, storing the hash and the original data, and then compare each new hash with your collection of existing hashes. If you get a positive, you could compare the actual data, to make sure it's not a false positive - though this would be extremely rare, you could go with a quicker hash algorithm, like MD5 or CRC (instead of something like SHA, which is slower but less likely to collide), just so it's quick, and then compare the actual data when you get a hit.\n", "So you have input like\nA B C D\nD E F G\nC B A D\n\nand you need to detect that the first and third lines are identical?\n", "If you want to find out if two files contain the same set of lines, but in a different order, you can use a regular hash function on each line individually, then combine them with a function where ordering doesn't matter, like addition.\n", "If the lines are fairly long, you could just keep a list of the hashes of each line -- sort those and compare with previous outputs.\nIf you don't need a 100% fool-proof solution, you could store the hash of each line in a Bloom filter (look it up on Wikipedia) and compare the Bloom filters at the end of processing. This can give you false positives (i.e. you think you have the same output but it isn't really the same) but you can tweak the error rate by adjusting the size of the Bloom filter...\n", "If you add up the ASCII values of each character, you'd get the same result regardless of order.\n(This may be a bit too simplified, but perhaps it sparks an idea for you.\nSee Programming Pearls, section 2.8, for an interesting back story.)\n", "Any of the hash-based methods may produce bad results because more than one string can produce the same hash. (It's not likely, but it's possible.) This is particularly true of the suggestion to add the hashes, since you would essentially be taking a particularly bad hash of the hash values.\nA hash method should only be attempted if it's not critical that you miss a change or spot a change where none exists.\nThe most accurate way would be to keep a Map using the line strings as key and storing the count of each as the value. (If each string can only appear once, you don't need the count.) Compute this for the expected set of lines. Duplicate this collection to examine the incoming lines, reducing the count for each line as you see it.\n\nIf you encounter a line with a zero count (or no map entry at all), you've seen a line you didn't expect.\nIf you end this with non-zero entries remaining in the Map, you didn't see something you expected.\n\n", "Well the problem specification is a bit limited.\nAs I understand it you wish to see if several strings contain the same elements regardless of order.\nFor example:\nA B C\nC B A\n\nare the same.\nThe way to do this is to create a set of the values then compare the sets. To create a set do:\nHashSet set = new HashSet();\nforeach (item : string) {\n set.add(item);\n}\n\nThen just compare the contents of the sets by running through one of the sets and comparing it w/others. The execution time will be O(N) instead of O(NlogN) for the sorting example.\n" ]
[ 3, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "hash", "java", "multithreading" ]
stackoverflow_0000064209_hash_java_multithreading.txt
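The combine-with-addition idea from the answers, sketched in C#. Hashing each line and summing makes the result independent of line order; as one answer warns, collisions are possible, so equal values only mean "probably identical" and a full comparison should confirm.

using System;
using System.IO;

static long OrderInsensitiveHash(string path)
{
    long combined = 0;
    foreach (string line in File.ReadLines(path))
    {
        // Addition is commutative, so line order cannot change the sum.
        unchecked { combined += line.GetHashCode(); }
    }
    return combined;
}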
Q: Why does sed fail with International characters and how to fix?
GNU sed version 4.1.5 seems to fail with International chars. Here is my input file:
Gras Och Stenar Trad - From Moja to Minneapolis DVD [G2007DVD] 7812 | X
Gras Och Stenar Trad - From Möja to Minneapolis DVD [G2007DVD] 7812 | Y

(Note the umlaut in the second line.)
And when I do
sed 's/.*| //' < in

I would expect to see only the X and Y, as I've asked to remove ALL chars up to the '|' and space beyond it. Instead, I get:
X
Gras Och Stenar Trad - From M? Y

I know I can use tr to remove the International chars. first, but is there a way to just use sed?

A: I think the error occurs if the input encoding of the file is different from the preferred encoding of your environment.
Example: in is UTF-8
$ LANG=de_DE.UTF-8 sed 's/.*| //' < in
X
Y
$ LANG=de_DE.iso88591 sed 's/.*| //' < in
X
Y

UTF-8 can safely be interpreted as ISO-8859-1, you'll get strange characters but apart from that everything is fine.
Example: in is ISO-8859-1
$ LANG=de_DE.UTF-8 sed 's/.*| //' < in
X
Gras Och Stenar Trad - From MöY
$ LANG=de_DE.iso88591 sed 's/.*| //' < in
X
Y

ISO-8859-1 cannot be interpreted as UTF-8, decoding the input file fails. The strange match is probably due to the fact that sed tries to recover rather than fail completely.
The answer is based on Debian Lenny/Sid and sed 4.1.5.

A: sed is not very well setup for non-ASCII text. However you can use (almost) the same code in perl and get the result you want:
perl -pe 's/.*\| //' x
Why does sed fail with International characters and how to fix?
GNU sed version 4.1.5 seems to fail with International chars. Here is my input file:
Gras Och Stenar Trad - From Moja to Minneapolis DVD [G2007DVD] 7812 | X
Gras Och Stenar Trad - From Möja to Minneapolis DVD [G2007DVD] 7812 | Y

(Note the umlaut in the second line.)
And when I do
sed 's/.*| //' < in

I would expect to see only the X and Y, as I've asked to remove ALL chars up to the '|' and space beyond it. Instead, I get:
X
Gras Och Stenar Trad - From M? Y

I know I can use tr to remove the International chars. first, but is there a way to just use sed?
[ "I think the error occurs if the input encoding of the file is different from the preferred encoding of your environment. \nExample: in is UTF-8\n$ LANG=de_DE.UTF-8 sed 's/.*| //' < in\nX\nY\n$ LANG=de_DE.iso88591 sed 's/.*| //' < in\nX \nY\n\nUTF-8 can safely be interpreted as ISO-8859-1, you'll get strange characters but apart from that everything is fine.\nExample: in is ISO-8859-1\n$ LANG=de_DE.UTF-8 sed 's/.*| //' < in\nX\nGras Och Stenar Trad - From MöY\n$ LANG=de_DE.iso88591 sed 's/.*| //' < in\nX \nY\n\nISO-8859-1 cannot be interpreted as UTF-8, decoding the input file fails. The strange match is probably due to the fact that sed tries to recover rather than fail completely.\nThe answer is based on Debian Lenny/Sid and sed 4.1.5.\n", "sed is not very well setup for non-ASCII text. However you can use (almost) the same code in perl and get the result you want:\nperl -pe 's/.*\\| //' x\n\n" ]
[ 25, 11 ]
[]
[]
[ "character", "internationalization", "linux", "sed" ]
stackoverflow_0000067410_character_internationalization_linux_sed.txt
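The same encoding mismatch bites outside sed as well. As a hedged illustration of the accepted answer's point in .NET terms: decode the file with the encoding it was actually written in before pattern matching, otherwise the ö derails the match just as it does for sed. The file name and pattern follow the question; the encoding choice is the assumption.

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

class StripPrefix
{
    static void Main()
    {
        // If "in" is ISO-8859-1, reading it as UTF-8 corrupts the ö.
        var latin1 = Encoding.GetEncoding("ISO-8859-1");
        foreach (var line in File.ReadLines("in", latin1))
        {
            // Same idea as sed 's/.*| //': drop everything up to "| ".
            Console.WriteLine(Regex.Replace(line, @".*\| ", ""));
        }
    }
}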
Q: What's the best HTML WYSIWYG editor available to web developers and why?
There are many different flavored HTML WYSIWYG editors from javascript to ASP.Net web controls, but all too often the features are the same. Does anyone have a favorite HTML editor they like to use in projects? Why?

A: I'm partial to TinyMCE WYSIWYG editor due to the following reasons:

Javascript - so it is broadly usable regardless of the platform I'm working in.
Easy to use - just a couple lines of code and a textarea and the control is up and running.
Easily themed - so I can quickly make it look like the site in which it is being used
Most importantly - easily customized to show/hide particular buttons depending on my application needs

A: FCKeditor, available at http://www.fckeditor.net/, is great because it is compatible with all major browsers including Safari. Most HTML rich text editors don't support Safari.

A: I have had decent luck with the FCKEditor in my ASP.Net applications.

A: TinyMCE is great because it's very easy to customise and to write plugins for.
It's also been around for a while so there are plenty of resources and help available for it.

A: If you're looking for a WYSIWYG editor to place on a web site, I like TinyMCE. Free with a decent community for support.

A: If you're talking about editing components (as opposed to stand-alone apps like DreamWeaver) I've used CuteEditor, TinyMCE, FCKEditor, eWebEditPro, and Telerik's RAD Editor. They all have their plusses and minuses, and you have to get used to their way of seeing the world. The HTML they spit out varies greatly, and I find I frequently have to tweak the results.

A: TinyMCE by far. It does everything that FCKeditor does and more. We had problems with clients pasting in content from MS Word, FCK has problems parsing it and we've had mangled text. One of my department's goals is to move our CMS to TinyMCE.

A: I use TinyMCE in my projects. I did my research quite some time ago, but am still happy:

filters MS Word HTML better
has fewer files than FCKe :)
has a free adaptation of FCKe plugins for advanced file and image manipulation, tinymcpuk

A: I would highly recommend the HTML Kit http://www.chami.com/html-kit/ not only does it come in a free version, but there are many modules available that make HTML markup a lot easier.

A: I always use FCKEditor, even though tinymce has almost the same features as fcke. You can use drag/drop, just like a regular TextBox, and it's really easy to edit the config file and add/remove features.
Also, for fcke and tinymce you have to buy the image thumbnail browser; in the free version you have only a list of file names. I think that there are no major differences between these two editors, and they are probably the best on the market.

A: Spaw is good looking, and easy to use. We have used it in several projects to date.
What's the best HTML WYSIWYG editor available to web developers and why?
There are many different flavored HTML WYSIWYG editors from javascript to ASP.Net web controls, but all too often the features are the same. Does anyone have a favorite HTML editor they like to use in projects? Why?
[ "I'm partial to TinyMCE WYSIWYG editor due to the following reasons:\n\nJavascript - so it is broadly usable\nregardless of the platform I'm\nworking in.\nEasy to use - just a couple lines of\ncode and a textarea and the control is up and\nrunning.\nEasily themed - so I can quickly\nmake it look like the site in which\nit is being used\nMost importantly - easily customized\nto show/hide particular buttons\ndepending on my application needs\n\n", "FCKeditor, available at http://www.fckeditor.net/, is great because it is compatible with all major browsers including Safari. Most HTML rich text editors don't support Safari.\n", "I have had decent luck with the FCKEditor in my ASP.Net applications.\n", "TinyMCE is great because its very easy to customise and to write plugins for. \nIt's also been around for a while so there are plenty of resources and help available for it. \n", "If you're looking for WYSIWYG editor to place on a web site, I like TinyMCE. Free with a decent community for support.\n", "If you're talking about editing components (as opposed to stand-alone apps like DreamWeaver) I've used CuteEditor, TinyMCE, FCKEditor, eWebEditPro, and Telerik's RAD Editor. They all have thier plusses and minuses, and you have to get used to their way of seeing the world. The HTML they spit out varies greatly, and I find I frequently have to tweak the results.\n", "TinyMCE by far. It does everything that FCKeditor does and more. We had problems with clients pasting in content from MS Word, FCK has problems parsing it and we've had mangled text. One of my department's goals is to move our CMS to TinyMCE.\n", "I use TinyMCE in my projects. I did my research quite some time ago, but am still happy:\n\nfilters MS Word HTML better\nhas less files than FCKe :)\nhas a free adaptation of FCKe plugins for advanced file and image manipulation, tinymcpuk\n\n", "I would highly recommend the HTML Kit http://www.chami.com/html-kit/ not only does it come in a free version, but there are many modules that are available that make HTML markup alot easier.\n", "I always use FCKEditor, even tinymce have allmost the same features like fcke. You can use drag/drop, just like regular TextBox, and it's really easy to edit config file and add/remove features.\nAlso, for fcke and tinymce you have to buy image thumbnail browser, in free version you have only file names list. I think that there's no major differences between this two editors, and they are probably the best on market.\n", "Spaw is good looking, and easy to use. We have used it in several projects to date.\n" ]
[ 12, 8, 2, 2, 1, 1, 1, 1, 0, 0, 0 ]
[ "It completely depends on what you are using it for.\nFor instance, I use VS2008 for ASP.NET coding, and Notepad++ for looking at HTML source. It's all in what your end use for the editor will be. You won't care how well it renders a decent PHP development experience if all you are doing is modifying CSS files, for instance.\n", "Adobe Dreamweaver. It will let you work visually or in code and switch between them. The killer criteria for a tool like that is that it builds good, clean code when you build visually, and Dreamweaver does that. \n" ]
[ -6, -6 ]
[ "asp.net", "editor", "html", "javascript", "wysiwyg" ]
stackoverflow_0000065800_asp.net_editor_html_javascript_wysiwyg.txt
Q: How do I best localize an entire app to many different languages?
I'm using Visual Studio (2005 and up). I am looking into trying out making an application where the user can change language for all menus, input formats and such. How would I go on doing this, as I suppose that there is some complete feature within .Net that can help me with this?
I need to take the following into account (and fill me in if I miss some obvious stuff)

Strings (menus, texts)
Input data (parsing floats, dates, etc..)
Should be easy to add support for another language

A: I'm not an expert with .NET by any means but Localization is never just as simple as "swapping out String values" or "changing date formats". There is much more to be taken into consideration such as layout, proper text placement.
Take Chinese for example. The way you read is top to bottom not left to right. If properly localized the app should take that into account.
http://msdn.microsoft.com/en-us/library/y99d1cd3(VS.80).aspx seems to be a good start though if you're dealing with Windows Forms.

A: The classic recipe is: design the app with no native language but a localization facility, and develop an initialization into one language (e.g., English). So you build the app and localize it into English every night; without the localization step it would not be usable. Do that well, and the resources for the initial sample localization can be replaced with those for any other language. Take into account non-roman scripts from the beginning. It's much cleaner to have a no-language app that always requires localization rather than a language-specific app that needs to have its native language subtracted and a replacement added.

A: For strings you should just separate your strings from your code (having an XML/DLL that will transform string IDs to real strings is one way to go). However you do need to make sure that you are supporting double byte characters for some languages (this is relevant if you use C/C++).
For input data what you want is to have different locales. In Java this is relatively easy, and if you use C# it is probably quite easy also. In C/C++ I don't really know. The basic idea is that the input parsers should be different based on the locale selected at that time. So each field (textfield, textbox, etc.) must have an abstract parser that is then implemented by a different class depending on the locale (right to left, double byte, etc.).
Check the Java implementation for details on how they did it. It is quite functional.

A: You definitely need to be using the .NET ResourceManager and the resx file xml format, however there are a number of approaches to using this.
It really depends on what you are wanting to achieve. For me I wanted a single xml resource file (for each supported language) that could be modified by anyone. I created a helper class that loaded the global resource file into ResourceManager (once only) and I had a helper function that gives me the required resource for a given name. The only disadvantage in this approach was that I could not leverage dynamic binding of resources to properties.
I found this better and easier to manage than multiple or embedded resource files for every form. Additionally exactly the same approach can be used in an ASP.NET application. I also found this approach means that outsourcing translation of resources and shipping language packs to customers is much more manageable.

A: Microsoft's recommended approach is to use satellite assemblies, as described in Packaging and Deploying Resources. If you're using a ResourceManager to load resources, .NET will load the correct resources for the CurrentUICulture. This defaults to the user's current UI language setting in Windows.
It is possible to localize Windows Forms either through Visual Studio or an external tool, WinRes.exe. This article describes WinRes and how to use Visual Studio to localize the form.
How do I best localize an entire app to many different languages?
I'm using Visual Studio (2005 and up). I am looking into trying out making an application where the user can change language for all menus, input formats and such. How would I go on doing this, as I suppose that there is some complete feature within .Net that can help me with this?
I need to take the following into account (and fill me in if I miss some obvious stuff)

Strings (menus, texts)
Input data (parsing floats, dates, etc..)
Should be easy to add support for another language
[ "I'm not an expert with .NET by any means but Localization is never just as simple as \"swapping out String values\" or \"changing date formats\". There is much more to be taken into consideration such as layout, proper text placement.\nTake Chinese for example. The way you read is top to bottom not left to right. If properly localized the app should take that into account.\nhttp://msdn.microsoft.com/en-us/library/y99d1cd3(VS.80).aspx seems to be a good start though if you're dealing with Windows Forms.\n", "The classic recipe is: design the app with no native language but a localization facility, and develop an initialization into one language (e.g., English). So you build the app and localize it into English every night; without the localization step it would not be usable. Do that well, and the resources for the initial sample localization can be replaced with those for any other language. Take into account non-roman scripts from the beginning. It's much cleaner to have a no-language app that always requires localization rather than a language-specific app that needs to have its native language subtracted and a replacement added.\n", "For strings you should just separate your strings from your code (having an XML/DLL that will transform string IDs to real strings is one way to go). However you do need to make sure that you are supporting double byte characters for some languages (this is relevant if you use C/C++).\nFor input data what you want is to have different locale's. In Java this is relatively easy, and if you use C# it probably is quite easy also. In C/C++ I don't really know. The basic idea is that the input parsers should be different based on the locale selected at that time. So each field (textfield, textbox, etc.) must have an abstract parser that is then implemented by a different class depending on the locale (right to left, double byte, etc.).\nCheck the Java implementation for details on how they did it. It is quite functional.\n", "You definitely need to be using the .NET ResourceManager and the resx file xml format, however there are a number of approaches to using this.\nIt really depends on what you are wanting to achieve. For me I wanted a single xml resource file (for each supported language) that could be modified by anyone. I created a helper class that loaded the global resource file into ResourceManager (once only) and I had a helper function that gives me the required resource for a given name. The only disadvantage in this approach was that I could not leverage dynamic binding of resources to properties.\nI found this better and easier to manage than multiple or embedded resource files for every form. Additionally exactly the same approach can used in an ASP.NET application. I also found this approach means that outsourcing translation of resources and shipping language packs to customers much more manageable. \n", "Microsoft's recommended approach is to use satellite assemblies, as described in Packaging and Deploying Resources. If you're using a ResourceManager to load resources, .NET will load the correct resources for the CurrentUICulture. This defaults to the user's current UI language setting in Windows.\nIt is possible to localize Windows Forms either through Visual Studio or an external tool, WinRes.exe. This article describes WinRes and how to use Visual Studio to localize the form.\n" ]
[ 3, 3, 1, 1, 1 ]
[]
[]
[ ".net", "c#", "localization" ]
stackoverflow_0000066012_.net_c#_localization.txt
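A minimal sketch of the satellite-assembly answer above, showing both halves of the question: resource lookup for strings and culture-driven parsing/formatting for input data. The MyApp.Strings resource name and the FileMenuCaption key are hypothetical.

using System;
using System.Globalization;
using System.Resources;
using System.Threading;

class LocalizationDemo
{
    static void Main()
    {
        // CurrentUICulture picks the satellite assembly for strings;
        // CurrentCulture drives parsing and formatting of numbers/dates.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("sv-SE");
        Thread.CurrentThread.CurrentCulture = new CultureInfo("sv-SE");

        var rm = new ResourceManager("MyApp.Strings",
                                     typeof(LocalizationDemo).Assembly);
        Console.WriteLine(rm.GetString("FileMenuCaption"));

        Console.WriteLine(1234.5.ToString("N1")); // Swedish grouping/decimal
        double d = double.Parse("3,14");          // comma decimal separator
        Console.WriteLine(d);
    }
}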
Q: Any tips on getting Rails to run with an Access back-end?
I shudder to ask, but my client might offer no other SQL (or SQL-like) solution. I know Access has some SQL hooks; are they enough for basic ActiveRecord?
Later: I appreciate all the suggestions to use other databases, but trust me: I've tried convincing them. There is an "approved" list, and no SQL databases are on it. Getting something onto the list could take more than a year, and this project will be done in three weeks.

A: It's a long shot but there's an ODBC adapter for ActiveRecord that might work.

A: There seems to be something of an Access connection adapter here: http://svn.behindlogic.com/public/rails/activerecord/lib/active_record/connection_adapters/msaccess_adapter.rb
The database.yml file would look like this:
development:
  adapter: msaccess
  database: C:\path\to\access_file.mdb

I'll post more after I've tried it out with Rails 2.1

A: Another option that is more complicated but could work if you were forced to do it, is to write a layer of RESTful web services that will expose Access to rails. If you are careful in your design, those RESTful web services can be consumed directly by ActiveResource which will give you a lot of the functionality of ActiveRecord.

A: There are some weird things in Access that might cause issues and I don't know if ODBC takes care of it. If it does @John Topley is right, ODBC would be your only chance.

True in access = -1 not 1
Access treats dates differently than regular TSQL.
You might run into trouble creating relations.

If you go with access, you will probably learn more about debugging ActiveRecord than you ever cared to (which might not be a bad thing)

A: Maudite wrote:

True in access = -1 not 1

Not correct. True is defined as not being false. So, if you want to use True in a WHERE clause, use Not False instead. This will provide complete cross-platform compatibility with all SQL engines.
All that said, it's hardly an issue, since whatever driver you're using to connect to your back end will properly translate True in WHERE clauses to the appropriate value. The only exception might be in passthrough queries, but in that case, you should be writing the SQL outside Access and testing it against your back end and just pasting the working SQL into the SQL view of your passthrough query in Access.
Maudite wrote:

Access treats dates differently than regular TSQL.

Again, this is only going to be an issue if you don't go through the ODBC or OLEDB drivers, which will take care of translating Jet SQL into TSQL for you.
Maudite wrote:

You might run into trouble creating relations.

I'm not sure why you'd want an Access application to be altering the schema of your back end, so this seems to me like a non-issue.

A: You should really talk them into allowing SQLite. It is super-simple to setup, and operates like Access would (as a file sitting next to the app on the same server).

A: Firstly, you really want to be using sqlite.
In my experience Access itself is a pile of [redacted], but the Jet database engine it uses is actually pretty fast and can handle some pretty complex SQL queries. If you can find a rails adapter that actually works I'd say you'll be fine. Just don't open the DB with the access frontend while your rails app is running :-)
If your client is anal enough to only allow you to develop with an approved list of databases, they may be more concerned by the fact that Jet is deprecated and will get no more support from MS.
This might give you some ammunition in your quest to use a real database. Good luck
Any tips on getting Rails to run with an Access back-end?
I shudder to ask, but my client might offer no other SQL (or SQL-like) solution. I know Access has some SQL hooks; are they enough for basic ActiveRecord? Later: I appreciate all the suggestions to use other databases, but trust me: I've tried convincing them. There is an "approved" list, and no SQL databases are on it. Getting something onto the list could take more than a year, and this project will be done in three weeks.
[ "It's a long shot but there's an ODBC adapter for ActiveRecord that might work.\n", "There seems to be something of an Access connection adapter here: http://svn.behindlogic.com/public/rails/activerecord/lib/active_record/connection_adapters/msaccess_adapter.rb\nThe database.yml file would look like this:\ndevelopment:\n adapter: msaccess\n database: C:\\path\\to\\access_file.mdb\n\nI'll post more after I've tried it out with Rails 2.1\n", "Another option that is more complicated but could work if you were forced to do it, is to write a layer of RESTful web services that will expose Access to rails. If you are careful in your design, those RESTful web services can be consumed directly by ActiveResoure which will give you a lot of the functionality of ActiveRecord.\n", "There are some wierd things in Access that might cause issues and I don't know if ODBC takes care of it. If it does @John Topley is right, ODBC would be your only cance. \n\nTrue in access = -1 not 1\nAccess treats dates differently than regular TSQL.\nYou might run into trouble creating relations. \n\nIf you go with access, will probably learn more about debuging AcriveRecord then you ever cared to ( which might not be a bad thing) \n", "Maudite wrote:\n\nTrue in access = -1 not 1\n\nNot correct. True is defined as not being false. So, if you want to use True in a WHERE clause, use Not False instead. This will provide complete cross-platform compatibility with all SQL engines.\nAll that said, it's hardly an issue, since whatever driver you're using to connect to your back end will properly translate True in WHERE clauses to the appropriate value. The only exception might be in passthrough queries, but in that case, you should be writing the SQL outside Access and testing it against your back end and just pasting the working SQL into the SQL view of your passthrough query in Access.\nMaudite wrote:\n\nAccess treats dates differently than regular TSQL.\n\nAgain, this is only going to be an issue if you don't go through the ODBC or OLEDB drivers, which will take care of translating Jet SQL into TSQL for you.\nMaudite wrote:\n\nYou might run into trouble creating relations.\n\nI'm not sure why you'd want an Access application to be altering the schema of your back end, so this seems to me like a non-issue.\n", "You should really talk them into allowing SQLite. It is super-simple to setup, and operates like Access would (as a file sitting next to the app on the same server).\n", "Firstly, you really want to be using sqlite.\nIn my experience Access itself is a pile of [redacted], but the Jet database engine it uses is actually pretty fast and can handle some pretty complex SQL queries. If you can find a rails adapter that actually works I'd say you'll be fine. Just don't open the DB with the access frontend while your rails app is running :-)\nIf your client is anal enough to only allow you to develop with an approved list of databases, they may be more concerned by the fact that Jet is deprectated and will get no more support from MS.\nThis might give you some ammunition in your quest to use a real database. Good luck\n" ]
[ 3, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "activerecord", "ms_access", "ruby", "ruby_on_rails" ]
stackoverflow_0000020054_activerecord_ms_access_ruby_ruby_on_rails.txt
Q: .NET Web Application Portability to SilverLight
The company where I work created this application which is core to our business and relies on the web browser to enforce certain "rules" without which the application is kinda useless to our customers. Sorry about having to be circumspect; an NDA along with a host of other things prevents me from saying exactly what the application is.
Essentially, JavaScript controls certain timed events (that have to be accurate down to at least the second) that make it difficult to control with ajax/postbacks etc.
My question is this: how hard is it to convert an ASP.NET application to SilverLight assuming that most of the code is really C# business logic and not asp.net controls? I just got finished listening to Deep Fried bytes and the MS people make it sound like this really isn't that big of a deal. Is this true for web apps, or mainly Win32 ones?
I know the asp.net front end is fundamentally different from SilverLight, but there is a bunch of C# code I would like to not have to rewrite if necessary. The replacement of the javascript code to silverlight I am assuming is trivial (i know bad assumption, but I have to start somewhere) since it deals with timed events, so I am not really concerned with that.
I need to come up with a solution on how to mitigate this problem, and I am hoping this is a middle ground between: do nothing and watch us get pounded by our clients, and rewrite the whole application in something more secure than a web page with only front end validation.
Has anyone tried to convert ASP.NET code to a SilverLight project?

A: If the bulk of your application is on the back-end, you should still be able to keep the majority of the code intact and only replace the front-end. However, Silverlight requires an understanding of WPF, which is dramatically different than the HTML/JS that your app currently uses. I'd say if your UI is pretty thin, it should be pretty easy to port to Silverlight, but the more business logic is in the UI, the harder it will be.

A: How heavily do you use the class libraries, and things that might be considered 'dangerous', like pinvoke, file system access and System.Diagnostics.Process?

A: Porting code from ASP.NET to Silverlight is not an easy task. As Nate points out it depends on how much of the ASP.NET application is AJAX based, and how much is based around server controls.
Silverlight is a stateful client-side technology, meaning everything is running on the client inside the browser. ASP.NET is a server technology, and is built around a request/response model. Since these two are completely different paradigms it's not a straight port.
However, since ASP.NET is just HTML and HTTP POST of form data people have done experiments where they have added a Silverlight application directly on top of an ASP.NET page, and manually built the HTTP POST request by hand sending back the exact data the ASP.NET application expects. It's almost like doing "screen scraping" for your own application. This could work, but wouldn't be optimal. You wouldn't get a performance increase as your ASP.NET application would have to go through a full page cycle on every request.
A better alternative is to start out wrapping any functionality the user has in the ASP.NET application as web services. You can add these services alongside your ASPX pages, and gradually port the application over. The UI you would build from the ground up based on these services.
Good luck!
.NET Web Application Portability to SilverLight
The company where I work created this application which is core to our business and relies on the web browser to enforce certain "rules" without which the application is kinda useless to our customers. Sorry about having to be circumspect; an NDA along with a host of other things prevents me from saying exactly what the application is. Essentially, JavaScript controls certain timed events (that have to be accurate down to at least the second) that make it difficult to control with ajax/postbacks etc. My question is this: how hard is it to convert an ASP.NET application to SilverLight assuming that most of the code is really C# business logic and not asp.net controls? I just got finished listening to Deep Fried bytes and the MS people make it sound like this really isn't that big of a deal. Is this true for web apps, or mainly Win32 ones? I know the asp.net front end is fundamentally different from SilverLight, but there is a bunch of C# code I would like to not have to rewrite if necessary. The replacement of the javascript code to silverlight I am assuming is trivial (i know bad assumption, but I have to start somewhere) since it deals with timed events, so I am not really concerned with that. I need to come up with a solution on how to mitigate this problem, and I am hoping this is a middle ground between: do nothing and watch us get pounded by our clients, and rewrite the whole application in something more secure than a web page with only front end validation. Has anyone tried to convert ASP.NET code to a SilverLight project?
[ "If the bulk of your application is on the back-end, you should still be able to keep the majority of the code intact and only replace the front-end. However, Silverlight requires an understanding of WPF, which is dramatically different than the HTML/JS that your app currently uses. I'd say if your UI is pretty thin, it should be pretty easy to port to Silverlight, but the more business logic is in the UI, the harder it will be.\n", "How heavily do you use the class libraries, and things that might be considered 'dangerous', like pinvoke, file system access and System.Diagnostics.Process? \n", "Porting code from ASP.NET to Silverlight is not an easy task. As Nate points outs it depends on how much of ASP.NET application is AJAX based, and how much is based around server controls. \nSilverlight is a state full client side technology, meaning everything is running on the client inside the browser. ASP.NET is a server technology, and is built around a request/response model. Since these two are completely different paradigms it's not a straight port.\nHowever, since ASP.NET is just HTML and HTTP POST of form data people have done experiments where they have added a Silverlight application directly on top of an ASP.NET page, and manually built the HTTP POST request by hand sending back the exact data the ASP.NET application work. It's almost like doing \"screen scraping\" for your own application. This could work, but wouldn't be optimal. You wouldn't get a performance increase as your ASP.NET application would have to go through a full page cycle on every request.\nA better alternative is to start out wrapping any functionality the user has in the APS.NET application as web services. You can add these services alongside your ASPX pages, and gradually port the application over. The UI you would build from the ground up based on these services.\nGood luck!\n" ]
[ 1, 0, 0 ]
[]
[]
[ "asp.net", "silverlight" ]
stackoverflow_0000064575_asp.net_silverlight.txt
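To make the last answer's "wrap functionality as web services" advice concrete: a hedged sketch of a service contract that Silverlight could consume through a generated async proxy. The contract and its members are invented examples (loosely matching the question's timed events), not the asker's API.

using System;
using System.ServiceModel;

[ServiceContract]
public interface ITimedEventService
{
    // Client-side timers can resync against the server clock.
    [OperationContract]
    DateTime GetServerTimeUtc();

    // Existing C# business logic stays on the server behind calls like this.
    [OperationContract]
    int GetSecondsRemaining(Guid sessionId);
}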
Q: What is "Client-only Framework subset" in Visual Studio 2008? What does "Client-only Framework subset" in Visual Studio 2008 do? A: You mean Client Profile? The .NET Framework Client Profile setup contains just those assemblies and files in the .NET Framework that are typically used for client application scenarios. For example: it includes Windows Forms, WPF, and WCF. It does not include ASP.NET and those libraries and components used primarily for server scenarios. We expect this setup package to be about 26MB in size, and it can be downloaded and installed much quicker than the full .NET Framework setup package.
What is "Client-only Framework subset" in Visual Studio 2008?
What does "Client-only Framework subset" in Visual Studio 2008 do?
[ "You mean Client Profile?\nThe .NET Framework Client Profile setup contains just those assemblies and files in the .NET Framework that are typically used for client application scenarios. For example: it includes Windows Forms, WPF, and WCF. It does not include ASP.NET and those libraries and components used primarily for server scenarios. We expect this setup package to be about 26MB in size, and it can be downloaded and installed much quicker than the full .NET Framework setup package.\n" ]
[ 9 ]
[]
[]
[ ".net", ".net_client_profile", "visual_studio_2008" ]
stackoverflow_0000067629_.net_.net_client_profile_visual_studio_2008.txt
Q: How might I display a web page in a window with a transparent background using C#?
How can I show a web page in a transparent window and have the white part of the web page also transparent?

A: If you're using a Browser control, there may be a property in it to change the background color to Transparent or to use Alpha channel layering. I'm not entirely sure how effective this would be, but it's worth a try.
Another thing to consider would be to create a small parser for the web page's HTML you're trying to view, and with that you could modify the style sheet or something to change the background color. I'm not sure you could make the page transparent doing this, however. That's all off the top of my head.

A: The BackColor property has an alpha property, which is the same as opacity. If it's pure html, there should be an opacity property or style.
How might I display a web page in a window with a transparent background using C#?
How can I show a web page in a transparent window and have the white part of the web page also transparent?
[ "If you're using a Browser control, there may be a property in it to change the background color to Transparent or to use Alpha channel layering. I'm not entirely sure how effective this would be, but it's worth a try.\nAnother thing to consider would be to create a small parser for the web page's HTML you're trying to view, and with that you could modify the style sheet or something to change the background color. I'm not sure you could make the page transparent doing this, however. That's all off the top of my head.\n", "The BackColor property has an alpha property, which is the same as opacity. If it's pure html, there should be an opacity property or style.\n" ]
[ 1, 1 ]
[]
[]
[ ".net", "c#", "transparency", "webbrowser_control", "winforms" ]
stackoverflow_0000067457_.net_c#_transparency_webbrowser_control_winforms.txt
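A hedged WinForms sketch of the colour-key approach hinted at in the answers: the WebBrowser control has no real alpha channel, so one common hack is to give the form a TransparencyKey and rewrite the page body's background to that key colour once the document loads. Results vary by OS and rendering mode; the URL and key colour are placeholders.

using System;
using System.Drawing;
using System.Windows.Forms;

public class TransparentBrowserForm : Form
{
    private readonly WebBrowser browser = new WebBrowser();

    public TransparentBrowserForm()
    {
        FormBorderStyle = FormBorderStyle.None;
        BackColor = Color.Magenta;        // colour to punch out
        TransparencyKey = Color.Magenta;

        browser.Dock = DockStyle.Fill;
        browser.ScrollBarsEnabled = false;
        browser.DocumentCompleted += delegate
        {
            // Repaint the page background with the key colour so those
            // pixels become transparent along with the form's.
            if (browser.Document != null && browser.Document.Body != null)
                browser.Document.Body.Style = "background-color: magenta;";
        };
        Controls.Add(browser);
        browser.Navigate("http://example.com/");
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new TransparentBrowserForm());
    }
}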
Q: Aspectj doesn't catch all events in spring framework?
My project is based on spring framework 2.5.4. And I try to add aspects for some controllers (I use aspectj 1.5.3).
I've enabled auto-proxy in application-servlet.xml, just pasted these lines to the end of the xml file:
<aop:aspectj-autoproxy />
<bean id="auditLogProcessor" class="com.example.bg.web.utils.AuditLogProcessor" />

Created aspect:
package com.example.bg.web.utils;

import org.apache.log4j.Logger;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class AuditLogProcessor
{
    private final static Logger log = Logger.getLogger(AuditLogProcessor.class);

    @After("execution(* com.example.bg.web.controllers.assets.AssetThumbnailRebuildController.rebuildThumbnail(..))")
    public void afterHandleRequest() {
        log.info("test111");
    }

    @After("execution(* com.example.bg.web.controllers.assets.AssetThumbnailRebuildController.rebuildThumbnail(..))")
    public void afterRebuildThumbnail() {
        log.info("test222");
    }
}

My controllers:
class AssetAddController implements Controller
class AssetThumbnailRebuildController extends MultiActionController

When I set breakpoints in aspect advisors and invoke controllers I catch only afterHandleRequest() but not afterRebildThumbnail()
What did I do wrong?
NOTE I'm asking this question on behalf of my friend who doesn't have access to SO beta, and I don't have a clue what it's all about.
EDIT There were indeed some misspellings, thanks Cheekysoft. But the problem still persists.

A: Your breakpoints aren't being hit because you are using Spring's AOP Proxies. See understanding-aop-proxies for a description of how AOP Proxies are special.
Basically, the MVC framework is going to call the handleRequest method on your controller's proxy (which for example the MultiActionController you're using as a base class implements), this method will then make an "internal" call to its rebuildThumbnail method, but this won't go through the proxy and thus won't pick up any aspects. (This has nothing to do with the methods being final.)
To achieve what you want, investigate using "real" AOP via load time weaving (which Spring supports very nicely).

A: AspectJ doesn't work well with classes in the Spring Web MVC framework. Read the bottom of the "Open for extension..." box on the right side of the page
Instead, take a look at the HandlerInterceptor interface.
The new Spring MVC Annotations may work as well since then the Controller classes are all POJOs, but I haven't tried it myself.

A: The basic setup looks ok.
The syntax can be simplified slightly by not defining an in-place pointcut and just specifying the method to which the after-advice should be applied. (The named pointcuts for methods are automatically created for you.)
e.g.
@After( "com.example.bg.web.controllers.assets.AssetAddController.handleRequest()" )
public void afterHandleRequest() {
    log.info( "test111" );
}

@After( "com.example.bg.web.controllers.assets.AssetThumbnailRebuildController.rebuildThumbnail()" )
public void afterRebuildThumbnail() {
    log.info( "test222" );
}

As long as the rebuildThumbnail method is not final, and the method name and class are correct. I don't see why this won't work.
see http://static.springframework.org/spring/docs/2.0.x/reference/aop.html

A: Is this as simple as spelling? or are there just typos in the question?
Sometimes you write rebuildThumbnail and sometimes you write rebildThumbnail
The methods you are trying to override with advice are not final methods in the MVC framework, so whilst bpapas answer is useful, my understanding is that this is not the problem in this case. However, do make sure that the rebuildThumbnail controller action is not final
@bpapas: please correct me if I'm wrong. The programmer's own controller action is what he is trying to override. Looking at the MultiActionController source (and its parents') the only finalized method potentially in the stack is MultiActionController.invokeNamedMethod, although I'm not 100% sure if this would be in the stack at that time or not. Would having a finalized method higher up the stack cause a problem adding AOP advice to a method further down?
Aspectj doesn't catch all events in spring framework?
My project is based on spring framework 2.5.4. And I try to add aspects for some controllers (I use aspectj 1.5.3). I've enabled auto-proxy in application-servlet.xml, just pasted these lines to the end of the xml file: <aop:aspectj-autoproxy /> <bean id="auditLogProcessor" class="com.example.bg.web.utils.AuditLogProcessor" /> Created aspect: package com.example.bg.web.utils; import org.apache.log4j.Logger; import org.aspectj.lang.annotation.After; import org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Pointcut; @Aspect public class AuditLogProcessor { private final static Logger log = Logger.getLogger(AuditLogProcessor.class); @After("execution(* com.example.bg.web.controllers.assets.AssetThumbnailRebuildController.rebuildThumbnail(..))") public void afterHandleRequest() { log.info("test111"); } @After("execution(* com.example.bg.web.controllers.assets.AssetThumbnailRebuildController.rebuildThumbnail(..))") public void afterRebuildThumbnail() { log.info("test222"); } } My controllers: class AssetAddController implements Controller class AssetThumbnailRebuildController extends MultiActionController When I set brake points in aspect advisors and invoke controllers I catch only afterHandleRequest() but not afterRebildThumbnail() What did I do wrong? NOTE I'm asking this question on behalf of my friend who doesn't have access to SO beta, and I don't have a clue what it's all about. EDIT There were indeed some misspellings, thanks Cheekysoft. But the problem still persists.
[ "Your breakpoints aren't being hit because you are using Spring's AOP Proxies. See understanding-aop-proxies for a description of how AOP Proxies are special. \nBasically, the MVC framework is going to call the handleRequest method on your controller's proxy (which for example the MultiActionController you're using as a base class implements), this method will then make an \"internal\" call to its rebuildThumbnail method, but this won't go through the proxy and thus won't pick up any aspects. (This has nothing to do with the methods being final.)\nTo achieve what you want, investigate using \"real\" AOP via load time weaving (which Spring supports very nicely).\n", "AspectJ doesn't work well with classes in the Spring Web MVC framework. Read the bottom of the \"Open for extension...\" box on the right side of the page \nInstead, take a look at the HandlerInterceptor interface. \nThe new Spring MVC Annotations may work as well since then the Controller classes are all POJOs, but I haven't tried it myself. \n", "The basic setup looks ok.\nThe syntax can be simplified slightly by not defining an in-place pointcut and just specifying the method to which the after-advice should be applied. (The named pointcuts for methods are automatically created for you.)\ne.g.\n@After( \"com.example.bg.web.controllers.assets.AssetAddController.handleRequest()\" )\npublic void afterHandleRequest() {\n log.info( \"test111\" );\n}\n\n@After( \"com.example.bg.web.controllers.assets.AssetThumbnailRebuildController.rebuildThumbnail()\" ) \npublic void afterRebuildThumbnail() {\n log.info( \"test222\" );\n}\n\nAs long as the rebuildThumbnail method is not final, and the method name and class are correct. I don't see why this won't work.\nsee http://static.springframework.org/spring/docs/2.0.x/reference/aop.html\n", "Is this as simple as spelling? or are there just typos in the question?\nSometimes you write rebuildThumbnail and sometimes you write rebildThumbnail\nThe methods you are trying to override with advice are not final methods in the MVC framework, so whilst bpapas answer is useful, my understanding is that this is not the problem in this case. However, do make sure that the rebuildThumbnail controller action is not final\n@bpapas: please correct me if I'm wrong. The programmer's own controller action is what he is trying to override. Looking at the MultiActionController source (and its parents') the only finalized method potentially in the stack is MultiActionController.invokeNamedMethod, although I'm not 100% sure if this would be in the stack at that time or not. Would having a finalized method higher up the stack cause a problem adding AOP advice to a method further down?\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "java", "spring", "spring_aop" ]
stackoverflow_0000039639_java_spring_spring_aop.txt
Q: Delete Records from Access database, error while deleting I have the following situation: I built an Access form with a subform (whose records are linked to the records of the main form via a certain key). When I try to delete any record in the subform, I get the following message: “Access has suspended the action because you and another user tried to change the data” (approximate translation from German). Does anyone know how to delete those records from the subform (and, respectively, from the table behind the form)? A: If you are currently 'editing' the current form then it will not allow the action. Editing a record can sometimes be triggered by simply clicking inside a field, or other simple actions you wouldn't normally consider 'editing'. This is usually avoided in Access by using the RunCommand method to undo any edits before deleting the record: DoCmd.RunCommand acCmdUndo A: samjudson suggested: DoCmd.RunCommand acCmdUndo You can also use Me.Undo, to undo the last edit to the form in which the code runs. Or, Me!MySubForm.Form.Undo to undo the last unsaved edit in the subform whose subform control is named "MySubForm". You can also use Me!MyControl.Undo to cancel the last edit to a particular control. "DoCmd.RunCommand acCmdUndo" will apply the Undo operation to the currently selected object, but you won't know for sure whether it will apply at the control or form level. Using the commands I suggested completely disambiguates what gets undone. Keep in mind, though, that Undo will not undo edits to a control after the control's AfterUpdate event has fired, or to a form after its AfterUpdate event has fired (i.e., the data has been saved to the underlying data table). A: Also check the "row locking mechanism" that you have. I haven't used Access in a while but I remember that you could set that in the table properties. You can access those properties by clicking in the famous "dot" in the upper left corner of the table to bring up its properties. Well if you're using Access, you know what I'm talking about.
Delete Records from Access database, error while deleting
I have the following situation: I built an Access form with a subform (whose records are linked to the records of the main form via a certain key). When I try to delete any record in the subform, I get the following message: “Access has suspended the action because you and another user tried to change the data” (approximate translation from German). Does anyone know how to delete those records from the subform (and, respectively, from the table behind the form)?
[ "If you are currently 'editing' the current form then it will not allow the action. Editing a record can sometimes be triggered by simply clicking inside a field, or other simple actions you wouldn't normally consider 'editing'.\nThis is usually avoided in Access by using the RunCommand method to undo any edits before deleting the record:\nDoCmd.RunCommand acCmdUndo\n\n", "samjudson suggested:\n\nDoCmd.RunCommand acCmdUndo\n\nYou can also use Me.Undo, to undo the last edit to the form in which the code runs.\nOr, Me!MySubForm.Form.Undo to undo the last unsaved edit in the subform whose subform control is named \"MySubForm\".\nYou can also use Me!MyControl.Undo to cancel the last edit to a particular control.\n\"DoCmd.RunCommand acCmdUndo\" will apply the Undo operation to the currently selected object, but you won't know for sure whether it will apply at the control or form level. Using the commands I suggested completely disambiguates what gets undone.\nKeep in mind, though, that Undo will not undo edits to a control after the control's AfterUpdate event has fired, or to a form after its AfterUpdate event has fired (i.e., the data has been saved to the underlying data table).\n", "Also check the \"row locking mechanism\" that you have. I haven't used Access in a while but I remember that you could use set that in the table properties. You can access those properties clicking in the famous \"dot\" in the upper left corner of the table to bring up its properties. Well if you're using Access, you know what I'm talking about.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "database", "ms_access" ]
stackoverflow_0000034161_database_ms_access.txt
Q: Anyone know of a list of delegates already built into the framework? I find myself writing delegates occasionally for really simple functions (take no arguments and return void for example) and am wondering if anyone knows someplace that has compiled a list of all the predefined delegates already available in the .NET framework so I can reuse them? To be clear I am looking for something like this: void System.AsyncCallback(System.IAsyncResult) int System.Comparison(T x, T y) void System.IO.ErrorEventHandler(object, System.Io.ErrorEventArgs) and so on If not, sounds like a good idea for a blog article. A: Just look in the msdn database for (T) delegate. Here you got a direct link: List of delegates That should get you started. A: I have previously blogged along these lines here. Basically, I describe how you can find an existing delegate to meet your needs using Reflector. A: One thing to keep in mind is that you write code to be readable to future coders, including your future self. Even if you can find a built-in delegate with the correct signature in the framework, it's not always correct to use that delegate if it obscures the purpose of the code. Six months down the road, the use of a delegate of type BondMaturationAction is going to be much clearer than that of one with a type Action, even if the signatures are the same. A: Just use the Action, Action<T>, Action<T1,T2,..> delegates for methods not returning anything (void), or the Func<TResult>, Func<T, TResult>, Func<T1, ..., TResult> delegates for methods returning TResult. Those delegates are new in .net 3.5. A: In .NET 2.0 and later, use EventHandler if you have no arguments at all, and EventHandler<T> if you want to provide some custom data (you will need to derive a class from EventArgs with your additional data in it). If you have no EventArgs to use, pass EventArgs.Empty. Because EventArgs is a reference type, all instances of EventHandler<T> use the same JITted code.
Anyone know of a list of delegates already built into the framework?
I find myself writing delegates occasionally for really simple functions (take no arguments and return void for example) and am wondering if anyone knows someplace that has compiled a list of all the predefined delegates already available in the .NET framework so I can reuse them? To be clear I am looking for something like this: void System.AsyncCallback(System.IAsyncResult) int System.Comparison(T x, T y) void System.IO.ErrorEventHandler(object, System.Io.ErrorEventArgs) and so on If not, sounds like a good idea for a blog article.
[ "Just look in the msdn database for (T) delegate.\nHere you got a direct link: List of delegates\nThat should get you started.\n", "I have previously blogged along these lines here. Basically, I describe how you can find an existing delegate to meet your needs using Reflector.\n", "One thing to keep in mind is that you write code to be readable to future coders, including your future self. Even if you can find a built-in delegate with the correct signature in the framework, it's not always correct to use that delegate if it obscures the purpose of the code.\nSix months down the road, the use of a delegate of type BondMaturationAction is going to be much clearer than that of one with a type Action, even if the signatures are the same.\n", "Just use the Action, Action<T>, Action<T1,T2,..> delegates for methods not returning anything (void), or the Func<TResult>, Func<T, TResult>, Func<T1, ..., TResult> delegates for methods returning TResult.\nThose delegates are new in .net 3.5.\n", "In .NET 2.0 and later, use EventHandler if you have no arguments at all, and EventHandler<T> if you want to provide some custom data (you will need to derive a class from EventArgs with your additional data in it). If you have no EventArgs to use, pass EventArgs.Empty.\nBecause EventArgs is a reference type, all instances of EventHandler<T> use the same JITted code.\n" ]
[ 7, 3, 2, 1, 0 ]
[]
[]
[ ".net" ]
stackoverflow_0000065990_.net.txt
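For quick reference, the built-in delegates named in the answers above can be exercised in a short, self-contained C# sketch; everything here is standard .NET 3.5, and the method and variable names are made up for the example:

using System;
using System.Collections.Generic;

class BuiltInDelegateDemo
{
    static void Main()
    {
        Action sayHello = () => Console.WriteLine("Hello");   // no arguments, returns void
        sayHello();

        Func<int, int> square = x => x * x;                   // one argument, returns a result
        Console.WriteLine(square(5));                         // prints 25

        List<int> numbers = new List<int> { 3, 1, 2 };
        numbers.Sort((x, y) => x.CompareTo(y));               // Comparison<T>, the delegate List<T>.Sort expects

        EventHandler handler = (sender, e) => Console.WriteLine("event raised");
        handler(null, EventArgs.Empty);                       // the standard (object, EventArgs) shape
    }
}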
Q: IE7 CSS Scrolling Div Bug I recently came across an IE7 only bug that I thought I'd share so when I come to this site 6 months from now to figure out the same thing, I'll have it on hand. I believe the easiest way to recreate this bug would be the following html in a page with a declared doctype (it works correctly in "quirks mode" / no-doctype): <div style="overflow: auto; height: 150px;"> <div style="position: relative;">[...]</div> </div> In IE7, the outer div is a fixed size and the inner div is relatively positioned and contains more content (assuming the inner div causes an overflow). In all other browsers, this seems to work as expected. Screenshot: A: The easiest fix would be to add position: relative; to the outer div. This will make IE7 work as intended. (See: http://rowanw.com/bugs/overflow_relative.htm). EDIT: Cache version of the broken link on waybackmachine.org
IE7 CSS Scrolling Div Bug
I recently came across an IE7 only bug that I thought I'd share so when I come to this site 6 months from now to figure out the same thing, I'll have it on hand. I believe the easiest way to recreate this bug would be the following html in a page with a declared doctype (it works correctly in "quirks mode" / no-doctype): <div style="overflow: auto; height: 150px;"> <div style="position: relative;">[...]</div> </div> In IE7, the outer div is a fixed size and the inner div is relatively positioned and contains more content (assuming the inner div causes an overflow). In all other browsers, this seems to work as expected. Screenshot:
[ "The easiest fix would be to add position: relative; to the outer div. This will make IE7 work as intended. \n(See: http://rowanw.com/bugs/overflow_relative.htm).\nEDIT: Cache version of the broken link on waybackmachine.org\n" ]
[ 98 ]
[]
[]
[ "css", "html", "internet_explorer_7" ]
stackoverflow_0000067665_css_html_internet_explorer_7.txt
Q: Why do fixed elements slow down scrolling in Firefox? Why do elements with the CSS position: fixed applied to them cause Firefox to eat 100% CPU when scrolling the page they are in? And are there any workarounds? I've noticed this behavior on a few sites, for example the notification bar at the top of the page on StackOverflow. I'm using Linux in case that matters. A: This is bug #201307. A: It's a bug reported in bugzilla Apparently a work-around (with mixed reports of success..) is to disable smooth-scrolling Just disable smooth scrolling in Edit > Preferences > Advanced. A: As already stated, this is bug #201307. The workaround is to disable smooth scrolling: Edit -> Prefrences -> Advanced -> General tab -> uncheck "Use smooth scrolling" A: This website has a fixed element "First time at Stack Overflow? Check out the FAQ!", and it's slow as hell in firefox. Works better with Opera and Chrome though. FF3, Windows XP, ATI. A: it eats CPU because the browser has to repaint the entire viewport every scroll change rather than just the newly visible area A: Are you sure that there's a direct link here? Have you created a static HTML page with fixed elements to verify your theory? Given how widely these CSS properties are used, I'd think someone else would have noticed it by now, whatever browser/OS you're running.
Why do fixed elements slow down scrolling in Firefox?
Why do elements with the CSS position: fixed applied to them cause Firefox to eat 100% CPU when scrolling the page they are in? And are there any workarounds? I've noticed this behavior on a few sites, for example the notification bar at the top of the page on StackOverflow. I'm using Linux in case that matters.
[ "This is bug #201307.\n", "It's a bug reported in bugzilla\nApparently a work-around (with mixed reports of success..) is to disable smooth-scrolling\n\nJust disable smooth scrolling in Edit > Preferences > Advanced.\n\n", "As already stated, this is bug #201307. The workaround is to disable smooth scrolling:\nEdit -> Prefrences -> Advanced -> General tab -> uncheck \"Use smooth scrolling\"\n", "This website has a fixed element \"First time at Stack Overflow? Check out the FAQ!\", and it's slow as hell in firefox. Works better with Opera and Chrome though.\nFF3, Windows XP, ATI.\n", "it eats CPU because the browser has to repaint the entire viewport every scroll change rather than just the newly visible area\n", "Are you sure that there's a direct link here? Have you created a static HTML page with fixed elements to verify your theory? Given how widely these CSS properties are used, I'd think someone else would have noticed it by now, whatever browser/OS you're running.\n" ]
[ 6, 5, 2, 1, 1, 0 ]
[]
[]
[ "css", "css_position", "firefox" ]
stackoverflow_0000067588_css_css_position_firefox.txt
Q: Tool recommendation for converting VB to C# We have a project with over 500,000 lines of VB.NET that we need to convert to C#. Any recommendations, based on experience, for tools to use? We are using Visual Studio 2008 and we're targeting 3.5 . A: I would concur with the comment. You have 500,000 lines of tried and true VB.NET code. Why on earth would you waste any time changing that? No one says that you can't write all new components in C#. I would consider not worrying about a tool and instead ask yourself, truly, why you are doing this? A: Reflector will decompile the IL and produce C# for you, it will be rough, but a decent start. A: Did this eval a while back. You will find a lot of "free" solutions that are horrible at edge cases. This commercial product http://www.tangiblesoftwaresolutions.com is by no means perfect; but, was the best we could find at the time doing real conversion tests. Note: I am speaking only as a customer. If someone has found a solution that in real-world use produces better conversions than this, please let me know. A: There used to be an add-in to Reflector which creates a complete Visual Studio solution. However, I don't know if it's still available or working, now that Red Gate has taken over Reflector. A: SharpDevelop has a converter built-in IIRC. A: I've used this site for a while now for some of my smaller conversions. It has been quite reliable. According to the site, their converter is based off an open source IDE that has the converter built in, so you might try the "source site" as well. A: The converter from Telerik works well. http://converter.telerik.com/ http://converter.telerik.com/batch.aspx
Tool recommendation for converting VB to C#
We have a project with over 500,000 lines of VB.NET that we need to convert to C#. Any recommendations, based on experience, for tools to use? We are using Visual Studio 2008 and we're targeting 3.5 .
[ "I would concur with the comment. You have 500,000 lines of tried and true VB.NET code. Why on earth would you waste any time changing that? No one says that you can't write all new components in C#.\nI would consider not worrying about a tool and instead ask yourself, truly, why you are doing this?\n", "Reflector will decompile the IL and produce C# for you, it will be rough, but a decent start.\n", "Did this eval a while back. You will find a lot of \"free\" solutions that are horrible at edge cases. This commercial product http://www.tangiblesoftwaresolutions.com is by no means perfect; but, was the best we could find at the time doing real conversion tests. Note: I am speaking only as a customer. If someone has found a solution that in real-world use produces better conversions than this, please let me know.\n", "There used to be an add-in to Reflector which creates a complete Visual Studio solution. However, I don't know if it's still available or working, now that Red Gate has taken over Reflector.\n", "SharpDevelop has a converter built-in IIRC.\n", "I've used this site for a while now for some of my smaller conversions. It has been quite reliable.\nAccording to the site, their converter is based off an open source IDE that has the converter built in, so you might try the \"source site\" as well.\n", "The converter from Telerik works well.\nhttp://converter.telerik.com/\nhttp://converter.telerik.com/batch.aspx\n" ]
[ 10, 3, 3, 2, 2, 1, 1 ]
[]
[]
[ "c#", "vb.net", "visual_studio_2008" ]
stackoverflow_0000067200_c#_vb.net_visual_studio_2008.txt
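Whichever converter is used, some hand-finishing is usually left over. One low-risk trick during that cleanup is to keep calling the VB runtime library from the converted C# instead of rewriting every VB-specific call at once; the sketch below assumes only a project reference to Microsoft.VisualBasic.dll:

using System;
using Microsoft.VisualBasic;

class ConversionShim
{
    static void Main()
    {
        // Converted code can keep using the VB runtime helpers as-is:
        string s = Strings.Mid("Hello, world", 8, 5);   // VB's Mid() -> "world" (1-based start)
        bool ok = Information.IsNumeric("42");          // VB's IsNumeric()
        Console.WriteLine("{0} {1}", s, ok);
    }
}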
Q: How can I install libgluezilla on Ubuntu 8.04? I want to use the Web Browser control within an mono application, but when I do get the error "libgluezilla not found. To have webbrowser support, you need libgluezilla installed." Installing the Intrepid Deb causes any application that references the web browser control to crash on startup with : 'Thread (nil) may have been prematurely finalized'. A: apt-cache search libgluezilla libmono-mozilla0.1-cil - Mono Mozilla library From the package description: Description: Mono Mozilla library Mono is a platform for running and developing applications based on the ECMA/ISO Standards. Mono is an open source effort led by Novell. Mono provides a complete CLR (Common Language Runtime) including compiler and runtime, which can produce and execute CIL (Common Intermediate Language) bytecode (aka assemblies), and a class library. . This package contains the implementation of the WebControl class based on the Mozilla engine using libgluezilla. Homepage: http://www.mono-project.com/ You'll probably need to uninstall anything that came in from intrepid without being properly backported. A: here's a link to it on the ubuntu site: http://packages.ubuntu.com/intrepid/libgluezilla there is a download section at the bottom for a deb package A: After installing the DEB that John pointed to, my app crashes... Is this because the deb is for the wrong Ubuntu (8.08 rather than 8.04)? It appears to be the correct version of libgluezilla for the version of Mono (everything is. 1.9.1)... Here is what I get when I try to run the application with $MONO_LOG_LEVEL=debug mono TestbedCSharp.exe Mono-INFO: Assembly Loader probing location: '/usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll'. Mono-INFO: Image addref Mono.Mozilla 0x8514cb0 -> /usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll 0x8514590: 2 Mono-INFO: Assembly Ref addref Mono.Mozilla 0x8514cb0 -> mscorlib 0x823ba30: 10 Mono-INFO: Assembly Mono.Mozilla 0x8514cb0 added to domain TestbedCSharp.exe, ref_count=1 Mono-INFO: AOT failed to load AOT module /usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll.so: /usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll.so: cannot open shared object file: No such file or directory Mono-INFO: Assembly Loader loaded assembly from location: '/usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll'. Mono-INFO: Config attempting to parse: '/usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll.config'. Mono-INFO: Config attempting to parse: '/etc/mono/assemblies/Mono.Mozilla/Mono.Mozilla.config'. Mono-INFO: Config attempting to parse: '/home/kris/.mono/assemblies/Mono.Mozilla/Mono.Mozilla.config'. Mono-INFO: Assembly Ref addref System.Windows.Forms 0x82880d8 -> Mono.Mozilla 0x8514cb0: 2 Mono-INFO: Assembly Ref addref Mono.Mozilla 0x8514cb0 -> System 0x8290908: 5 Mono-INFO: DllImport attempting to load: 'gluezilla'. Mono-INFO: DllImport loading location: 'libgluezilla.so'. Mono-INFO: Searching for 'gluezilla_init'. Mono-INFO: Probing 'gluezilla_init'. Mono-INFO: Found as 'gluezilla_init'. ** (TestbedCSharp.exe:22700): WARNING **: Thread (nil) may have been prematurely finalized
How can I install libgluezilla on Ubuntu 8.04?
I want to use the Web Browser control within an mono application, but when I do get the error "libgluezilla not found. To have webbrowser support, you need libgluezilla installed." Installing the Intrepid Deb causes any application that references the web browser control to crash on startup with : 'Thread (nil) may have been prematurely finalized'.
[ "apt-cache search libgluezilla\nlibmono-mozilla0.1-cil - Mono Mozilla library\n\nFrom the package description: \nDescription: Mono Mozilla library\n Mono is a platform for running and developing applications based on the\n ECMA/ISO Standards. Mono is an open source effort led by Novell.\n Mono provides a complete CLR (Common Language Runtime) including compiler and\n runtime, which can produce and execute CIL (Common Intermediate Language)\n bytecode (aka assemblies), and a class library.\n .\n This package contains the implementation of the WebControl class based on the\n Mozilla engine using libgluezilla.\nHomepage: http://www.mono-project.com/\n\nYou'll probably need to uninstall anything that came in from intrepid without being properly backported.\n", "here's a link to it on the ubuntu site:\nhttp://packages.ubuntu.com/intrepid/libgluezilla\nthere is a download section at the bottom for a deb package\n", "After installing the DEB that John pointed to, my app crashes... Is this because the deb is for the wrong Ubuntu (8.08 rather than 8.04)? It appears to be the correct version of libgluezilla for the version of Mono (everything is. 1.9.1)...\nHere is what I get when I try to run the application with \n\n$MONO_LOG_LEVEL=debug mono TestbedCSharp.exe\n\n\nMono-INFO: Assembly Loader probing location: '/usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll'.\nMono-INFO: Image addref Mono.Mozilla 0x8514cb0 -> /usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll 0x8514590: 2\n\nMono-INFO: Assembly Ref addref Mono.Mozilla 0x8514cb0 -> mscorlib 0x823ba30: 10\n\nMono-INFO: Assembly Mono.Mozilla 0x8514cb0 added to domain TestbedCSharp.exe, ref_count=1\n\nMono-INFO: AOT failed to load AOT module /usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll.so: /usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll.so: cannot open shared object file: No such file or directory\n\nMono-INFO: Assembly Loader loaded assembly from location: '/usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll'.\nMono-INFO: Config attempting to parse: '/usr/lib/mono/gac/Mono.Mozilla/0.2.0.0__0738eb9f132ed756/Mono.Mozilla.dll.config'.\nMono-INFO: Config attempting to parse: '/etc/mono/assemblies/Mono.Mozilla/Mono.Mozilla.config'.\nMono-INFO: Config attempting to parse: '/home/kris/.mono/assemblies/Mono.Mozilla/Mono.Mozilla.config'.\nMono-INFO: Assembly Ref addref System.Windows.Forms 0x82880d8 -> Mono.Mozilla 0x8514cb0: 2\n\nMono-INFO: Assembly Ref addref Mono.Mozilla 0x8514cb0 -> System 0x8290908: 5\n\nMono-INFO: DllImport attempting to load: 'gluezilla'.\nMono-INFO: DllImport loading location: 'libgluezilla.so'.\nMono-INFO: Searching for 'gluezilla_init'.\nMono-INFO: Probing 'gluezilla_init'.\nMono-INFO: Found as 'gluezilla_init'.\n\n** (TestbedCSharp.exe:22700): WARNING **: Thread (nil) may have been prematurely finalized\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "mono", "ubuntu" ]
stackoverflow_0000042416_mono_ubuntu.txt
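For context, the kind of code that triggers the libgluezilla dependency is any use of the Windows.Forms WebBrowser control under Mono. A minimal C# reproduction (the URL is arbitrary) -- if libgluezilla is missing or mismatched, this is roughly where the errors above appear:

using System;
using System.Windows.Forms;

class BrowserTest
{
    [STAThread]
    static void Main()
    {
        Form form = new Form();
        form.Text = "WebBrowser test";

        WebBrowser browser = new WebBrowser();   // this control is backed by libgluezilla on Mono
        browser.Dock = DockStyle.Fill;
        form.Controls.Add(browser);
        browser.Navigate("http://www.mono-project.com/");

        Application.Run(form);
    }
}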
Q: How to save code snippets (vb/c#/.net/sql) to sql server I want to create a code/knowledge base where I can save my vb.net/c#.net/sqlserver code snippets for use later. I've tried setting the ValidateRequest property to false in my page directive, and encoding the value with HttpUtility.HtmlEncode (c#.net), but I still get errors. thoughts? A: The HttpUtility.HtmlEncode will happen too late, assuming you are getting the exception on postback of code from the client. You can run some javascript on the client to pre-encode the server Postback. See the following link for a quick example: Comparing escape(), encodeURI(), and encodeURIComponent()
How to save code snippets (vb/c#/.net/sql) to sql server
I want to create a code/knowledge base where I can save my vb.net/c#.net/sqlserver code snippets for use later. I've tried setting the ValidateRequest property to false in my page directive, and encoding the value with HttpUtility.HtmlEncode (c#.net), but I still get errors. thoughts?
[ "The HttpUtility.HtmlEncode will happen too late, assuming you are getting the exception on postback of code from the client. You can run some javascript on the client to pre-encode the server Postback.\nSee the following link for a quick example: Comparing escape(), encodeURI(), and encodeURIComponent()\n" ]
[ 1 ]
[]
[]
[ ".net", "code_snippets" ]
stackoverflow_0000067669_.net_code_snippets.txt
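Putting the answer's pieces together: the client escapes the snippet with encodeURIComponent before posting, and the server decodes it and stores it with a parameterized command. A rough server-side sketch -- the connection string and the Snippets(Body) table are assumptions for the example, not part of the original question:

using System;
using System.Data.SqlClient;
using System.Web;

public static class SnippetStore
{
    public static void Save(string connectionString, string postedValue)
    {
        // Undo the client-side encodeURIComponent:
        string snippet = HttpUtility.UrlDecode(postedValue);

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("INSERT INTO Snippets (Body) VALUES (@body)", conn))
        {
            cmd.Parameters.AddWithValue("@body", snippet);  // parameterized, so no SQL escaping worries
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}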
Q: Performance issues regarding Access 2003 and the OLE Object data type In MS Access 2003 (I know, I know), I'm using the OLE Object data type to persist the state of some objects that are marked as serializable (just using an IO.BinaryFormatter to serialize to a MemoryStream, and then saving that to the db as a Byte array). Does this work pretty much like a varbinary, or a blob? Are there any gotchas looming in the shadows that anyone knows about? Any performance advice or war stories? I'd profit from any advice. A: In access I never figured out how to properly use the OLE object data type without real performance problems (and structural too -- lots of compact and repair jobs). The solution path I've always taken (mind you I haven't used Access in anger now for years) is to just store the blobs onto disk somewhere and store the file location in the data table. A: I can't answer your specific question, but you might want to look at the GetChunk and AppendChunk methods in Access help, since those are the methods used for writing and manipulating data in binary fields.
Performance issues regarding Access 2003 and the OLE Object data type
In MS Access 2003 (I know, I know), I'm using the OLE Object data type to persist the state of some objects that are marked as serializable (just using an IO.BinaryFormatter to serialize to a MemoryStream, and then saving that to the db as a Byte array). Does this work pretty much like a varbinary, or a blob? Are there any gotchas looming in the shadows that anyone knows about? Any performance advice or war stories? I'd profit from any advice.
[ "In access I never figured out how to properly use the OLE object data type without real performance problems (and structural too -- lots of compact and repair jobs). The solution path I've always taken (mind you I haven't used Access in anger now for years) is to just store the blogs onto disk somewhere and store the file location in the data table.\n", "I can't answer your specific question, but you might want to look at the GetChunk and AppendChunk methods in Access help, since those are the methods used for writing and manipulating data in binary fields.\n" ]
[ 1, 0 ]
[]
[]
[ "ms_access", "oledb", "serialization" ]
stackoverflow_0000031812_ms_access_oledb_serialization.txt
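A sketch of the pattern the question describes -- BinaryFormatter into a MemoryStream, then the bytes into the OLE Object column through a parameter. The connection string and the ObjectState(Payload) table are hypothetical:

using System;
using System.Data.OleDb;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class BlobStore
{
    public static void Save(string connectionString, object serializableState)
    {
        byte[] blob;
        using (MemoryStream ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, serializableState);
            blob = ms.ToArray();
        }

        using (OleDbConnection conn = new OleDbConnection(connectionString))
        using (OleDbCommand cmd = new OleDbCommand("INSERT INTO ObjectState (Payload) VALUES (?)", conn))
        {
            // OLE DB parameters are positional; LongVarBinary maps to the OLE Object column.
            cmd.Parameters.Add("@p1", OleDbType.LongVarBinary).Value = blob;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}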
Q: Access 2000 connecting to SQL Server 2005 The company I work for has an old Access 2000 application that was using a SQL Server 2000 back-end. We were tasked with moving the back-end to a SQL Server 2005 database on a new server. Unfortunately, the application was not functioning correctly while trying to do any inserts or updates. My research has found many forum posts that Access 2000 -> SQL 2005 is not supported by Microsoft, but I cannot find any Microsoft documentation to verify that. Can anyone either link me to some official documentation, or has anyone used this setup and can confirm that this should be working and our problems lie somewhere else? Not sure if it matters, but the app is an ADP compiled into an ADE. A: I've had a similar problem before when using ODBC linked tables to connect to an Sql Server. The solution was to relink the tables and specify the primary key to the table. If Access doesn't know the primary key it cannot perform inserts or updates. I haven't any experience with ADPs but it could be a similar thing, theres a knowledge base article about it here http://support.microsoft.com/?scid=kb%3Ben-us%3B235267&x=15&y=13 A: I'd say check the VBA in the Macros to see how it is doing it. It is probably using some form of VB connection to the Database in the back. I love the fact a Database is contacting a Database for it's data... :) A: All I've read about Access 2000 -> SQL Server 2005 is that the upsizing wizard isn't supported. If only the inserts and updates aren't functioning, it sounds like a permissions issue. Make sure the sql server login you are using in your connection string has read/write permission on your database. Please avoid using the "sa" account for this purpose! A: I'm not sure about that particular combination being supported, but have you tried setting the compatibilty mode for the database to sql server 2000. Maybe that will resolve your issues. Edit: To do this run the following SQL: EXEC sp_dbcmptlevel Name_of_your_database, 80; More details here: http://blog.sqlauthority.com/2007/05/29/sql-server-2005-change-database-compatible-level-backward-compatibility/ A: If only the inserts and updates aren't functioning, it sounds like a permissions issue. Make sure the sql server login you are using in your connection string has read/write permission on your database. Please avoid using the "sa" account for this purpose! We wanted to use a generic apps account but that login "could not find" any of the stored procedures even though they existed and the login has explicit permissions to run them (and was also tested successfully, as that user, in SQL Management Studio). It wasn't until we granted that login "sa" privileges that we could actually access the database at all through the application. but have you tried setting the compatibilty mode for the database to sql server 2000. I'm not really sure how this is done. Could you explain? Also of note, if we upgrade the app to Access 2003, everything works fine. Unfortunately, our IT dept does not want to upgrade everyone from Office 2000 to 2003, so this is not an option. Thanks for your help. A: but have you tried setting the compatibilty mode for the database to sql server 2000. I just checked the 2005 database, I selected the database, and clicked Properties->Options, and it says the db is already in 2000 compatibility mode. 
A: Access ADPs are very closely tied to SQL Server versions, and MS has done a really poor job of fixing and breaking ADPs in the 3 major versions that have been released (2000, 2002 and 2003). If you are trying to use the compiled ADE, I'd suggest that first you find the original ADP and see if you can get it to work. You may need to do some work there before creating your ADE. Caveat: I don't do ADPs, and am glad I made the decision not to, as Microsoft is now deprecating them in favor of MDB=>ODBC=>SQL Server.
Access 2000 connecting to SQL Server 2005
The company I work for has an old Access 2000 application that was using a SQL Server 2000 back-end. We were tasked with moving the back-end to a SQL Server 2005 database on a new server. Unfortunately, the application was not functioning correctly while trying to do any inserts or updates. My research has found many forum posts that Access 2000 -> SQL 2005 is not supported by Microsoft, but I cannot find any Microsoft documentation to verify that. Can anyone either link me to some official documentation, or has anyone used this setup and can confirm that this should be working and our problems lie somewhere else? Not sure if it matters, but the app is an ADP compiled into an ADE.
[ "I've had a similar problem before when using ODBC linked tables to connect to an Sql Server. The solution was to relink the tables and specify the primary key to the table. If Access doesn't know the primary key it cannot perform inserts or updates.\nI haven't any experience with ADPs but it could be a similar thing, theres a knowledge base article about it here http://support.microsoft.com/?scid=kb%3Ben-us%3B235267&x=15&y=13\n", "I'd say check the VBA in the Macros to see how it is doing it. It is probably using some form of VB connection to the Database in the back. I love the fact a Database is contacting a Database for it's data... :)\n", "All I've read about Access 2000 -> SQL Server 2005 is that the upsizing wizard isn't supported. \nIf only the inserts and updates aren't functioning, it sounds like a permissions issue. Make sure the sql server login you are using in your connection string has read/write permission on your database. \nPlease avoid using the \"sa\" account for this purpose!\n", "I'm not sure about that particular combination being supported, but have you tried setting the compatibilty mode for the database to sql server 2000. Maybe that will resolve your issues.\nEdit: To do this run the following SQL:\nEXEC sp_dbcmptlevel Name_of_your_database, 80;\n\nMore details here: http://blog.sqlauthority.com/2007/05/29/sql-server-2005-change-database-compatible-level-backward-compatibility/\n", "\nIf only the inserts and updates aren't\n functioning, it sounds like a\n permissions issue. Make sure the sql\n server login you are using in your\n connection string has read/write\n permission on your database.\nPlease avoid using the \"sa\" account\n for this purpose!\n\nWe wanted to use a generic apps account but that login \"could not find\" any of the stored procedures even though they existed and the login has explicit permissions to run them (and was also tested successfully, as that user, in SQL Management Studio). It wasn't until we granted that login \"sa\" privileges that we could actually access the database at all through the application. \n\nbut have you tried setting the\n compatibilty mode for the database to\n sql server 2000.\n\nI'm not really sure how this is done. Could you explain?\nAlso of note, if we upgrade the app to Access 2003, everything works fine. Unfortunately, our IT dept does not want to upgrade everyone from Office 2000 to 2003, so this is not an option.\nThanks for your help.\n", "\nbut have you tried setting the\n compatibilty mode for the database to\n sql server 2000.\n\nI just checked the 2005 database, I selected the database, and clicked Properties->Options, and it says the db is already in 2000 compatibility mode.\n", "Access ADPs are very closely tied to SQL Server versions, and MS has done a really poor job of fixing and breaking ADPs in the 3 major versions that have been released (2000, 2002 and 2003).\nIf you are trying to use the compiled ADE, I'd suggest that first you find the original ADP and see if you can get it to work. You may need to do some work there before creating your ADE.\nCaveat: I don't do ADPs, and am glad I made the decision not to, as Microsoft is now deprecating them in favor of MDB=>ODBC=>SQL Server.\n" ]
[ 2, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "ms_access", "ms_access_2000", "sql_server", "sql_server_2005" ]
stackoverflow_0000015087_ms_access_ms_access_2000_sql_server_sql_server_2005.txt
Q: Dynamically Create a generic type for template I'm programming WCF using the ChannelFactory which expects a type in order to call the CreateChannel method. For example: IProxy proxy = ChannelFactory<IProxy>.CreateChannel(...); In my case I'm doing routing so I don't know what type my channel factory will be using. I can parse a message header to determine the type but I hit a brick wall there because even if I have an instance of Type I can't pass that where ChannelFactory expects a generic type. Another way of restating this problem in very simple terms would be that I'm attempting to do something like this: string listtype = Console.ReadLine(); // say "System.Int32" Type t = Type.GetType( listtype); List<t> myIntegers = new List<>(); // does not compile, expects a "type" List<typeof(t)> myIntegers = new List<typeof(t)>(); // interesting - type must resolve at compile time? Is there an approach to this I can leverage within C#? A: What you are looking for is MakeGenericType string elementTypeName = Console.ReadLine(); Type elementType = Type.GetType(elementTypeName); Type[] types = new Type[] { elementType }; Type listType = typeof(List<>); Type genericType = listType.MakeGenericType(types); object instance = Activator.CreateInstance(genericType); So what you are doing is getting the type-definition of the generic "template" class, then building a specialization of the type using your runtime-driven types. A: You should look at this post from Ayende: WCF, Mocking and IoC: Oh MY!. Somewhere near the bottom is a method called GetCreationDelegate which should help. It basically does this: string typeName = ...; Type proxyType = Type.GetType(typeName); Type type = typeof (ChannelFactory<>).MakeGenericType(proxyType); object target = Activator.CreateInstance(type); MethodInfo methodInfo = type.GetMethod("CreateChannel", new Type[] {}); return methodInfo.Invoke(target, new object[0]); A: Here's a question: Do you really need to create a channel with the exact contract type in your specific case? Since you're doing routing, there's a very good chance you could simply deal with the generic channel shapes. For example, if you're routing a one-way only message, then you could create a channel to send the message out like this: ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, endpoint); IOutputChannel channel = factory.CreateChannel(); ... channel.Send(myRawMessage); If you needed to send to a two-way service, just use IRequestChannel instead. If you're doing routing, it is, in general, a lot easier to just deal with generic channel shapes (with a generic catch-all service contract to the outside) and just make sure the message you're sending has all the right headers and properties.
Dynamically Create a generic type for template
I'm programming WCF using the ChannelFactory which expects a type in order to call the CreateChannel method. For example: IProxy proxy = ChannelFactory<IProxy>.CreateChannel(...); In my case I'm doing routing so I don't know what type my channel factory will be using. I can parse a message header to determine the type but I hit a brick wall there because even if I have an instance of Type I can't pass that where ChannelFactory expects a generic type. Another way of restating this problem in very simple terms would be that I'm attempting to do something like this: string listtype = Console.ReadLine(); // say "System.Int32" Type t = Type.GetType( listtype); List<t> myIntegers = new List<>(); // does not compile, expects a "type" List<typeof(t)> myIntegers = new List<typeof(t)>(); // interesting - type must resolve at compile time? Is there an approach to this I can leverage within C#?
[ "What you are looking for is MakeGenericType\nstring elementTypeName = Console.ReadLine();\nType elementType = Type.GetType(elementTypeName);\nType[] types = new Type[] { elementType };\n\nType listType = typeof(List<>);\nType genericType = listType.MakeGenericType(types);\nIProxy proxy = (IProxy)Activator.CreateInstance(genericType);\n\nSo what you are doing is getting the type-definition of the generic \"template\" class, then building a specialization of the type using your runtime-driving types.\n", "You should look at this post from Ayende: WCF, Mocking and IoC: Oh MY!. Somewhere near the bottom is a method called GetCreationDelegate which should help. It basically does this:\nstring typeName = ...;\nType proxyType = Type.GetType(typeName);\n\nType type = typeof (ChannelFactory<>).MakeGenericType(proxyType);\n\nobject target = Activator.CreateInstance(type);\n\nMethodInfo methodInfo = type.GetMethod(\"CreateChannel\", new Type[] {});\n\nreturn methodInfo.Invoke(target, new object[0]);\n\n", "Here's a question: Do you really need to create a channel with the exact contract type in your specific case?\nSince you're doing routing, there's a very good chance you could simply deal with the generic channel shapes. For example, if you're routing a one-way only message, then you could create a channel to send the message out like this:\nChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, endpoint);\nIOutputChannel channel = factory.CreateChannel();\n...\nchannel.SendMessage(myRawMessage);\n\nIf you needed to send to a two-way service, just use IRequestChannel instead.\nIf you're doing routing, it is, in general, a lot easier to just deal with generic channel shapes (with a generic catch-all service contract to the outside) and just make sure the message you're sending has all the right headers and properties.\n" ]
[ 35, 8, 4 ]
[]
[]
[ "c#", "generics" ]
stackoverflow_0000067370_c#_generics.txt
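A footnote to the MakeGenericType answer: the constructed instance comes back typed as object, so to use it without further reflection you can go through a non-generic base interface such as IList. A self-contained sketch for the List case from the question:

using System;
using System.Collections;

class MakeGenericDemo
{
    static void Main()
    {
        Type elementType = Type.GetType("System.Int32");   // e.g. parsed from input at runtime
        Type listType = typeof(System.Collections.Generic.List<>).MakeGenericType(elementType);

        // List<T> implements the non-generic IList, which is usable
        // without knowing the element type at compile time.
        IList list = (IList)Activator.CreateInstance(listType);
        list.Add(42);
        Console.WriteLine("{0} item(s), first = {1}", list.Count, list[0]);
    }
}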
Q: Can a LINQ to SQL IsDiscriminator column NOT inherit? I'm designing my database and LINQ To SQL ASP.NET web application. Imagine I have two types of pages: normal and root. Some pages are roots. Some pages are not. I have a Page database table and a RootPage database table: Page ---- PK PageId HtmlTitle PageHeading MetaDescription IsRoot RootPage -------- FK PK PageId FavIcon StyleSheet MasterPage I think that if within my DBML file I set the IsDiscriminator property of the IsRoot column, then my RootPage class will inherit the Page class. I want to be able to work like this in my code: MyDataContext db = new MyDataContext(); var roots = from p in db.Pages where p is RootPage select (RootPage)p; Or like this: RootPage r = new RootPage(); r.HtmlTitle = "Foo"; r.FavIcon = "bar.ico"; ... db.Pages.Add(r); db.SubmitChanges(); Can a LINQ to SQL IsDiscriminator column be nullable or false? Will this work? A: The problem here is that you are trying to split your class between two tables, RootPage and Page. Unfortunately LINQ to SQL only supports single table inheritence so this would not work. You would need to merge the two table definitions together and make the RootPage-specific fields nullable. e.g. Page ---- PK PageId HtmlTitle PageHeading MetaDescription IsRoot FavIcon (Nullable) StyleSheet (Nullable) MasterPage (Nullable) You would then set IsRoot to be the discriminator and mark the Page class as the default type and RootPage as being the class for the discriminator value of 'True'. An alternative if you didn't mind things being read only would be to make a view that joined the two tables together and base the classes off that. A third option might be to consider composition such as renaming the RootPage table to Root and creating an association between RootPage and Root. This would mean that instead of your RootPage class having all those properties it would instead only expose the Root property where they actually reside.
Can a LINQ to SQL IsDiscriminator column NOT inherit?
I'm designing my database and LINQ To SQL ASP.NET web application. Imagine I have two types of pages: normal and root. Some pages are roots. Some pages are not. I have a Page database table and a RootPage database table: Page ---- PK PageId HtmlTitle PageHeading MetaDescription IsRoot RootPage -------- FK PK PageId FavIcon StyleSheet MasterPage I think that if within my DBML file I set the IsDiscriminator property of the IsRoot column, then my RootPage class will inherit the Page class. I want to be able to work like this in my code: MyDataContext db = new MyDataContext(); var roots = from p in db.Pages where p is RootPage select (RootPage)p; Or like this: RootPage r = new RootPage(); r.HtmlTitle = "Foo"; r.FavIcon = "bar.ico"; ... db.Pages.Add(r); db.SubmitChanges(); Can a LINQ to SQL IsDiscriminator column be nullable or false? Will this work?
[ "The problem here is that you are trying to split your class between two tables, RootPage and Page.\nUnfortunately LINQ to SQL only supports single table inheritence so this would not work.\nYou would need to merge the two table definitions together and make the RootPage-specific fields nullable. e.g.\n Page\n ----\nPK PageId\n HtmlTitle\n PageHeading\n MetaDescription\n IsRoot\n FavIcon (Nullable)\n StyleSheet (Nullable)\n MasterPage (Nullable)\n\nYou would then set IsRoot to be the discriminator and mark the Page class as the default type and RootPage as being the class for the discriminator value of 'True'.\nAn alternative if you didn't mind things being read only would be to make a view that joined the two tables together and base the classes off that.\nA third option might be to consider composition such as renaming the RootPage table to Root and creating an association between RootPage and Root. This would mean that instead of your RootPage class having all those properties it would instead only expose the Root property where they actually reside.\n" ]
[ 2 ]
[]
[]
[ ".net", "asp.net", "inheritance", "linq", "linq_to_sql" ]
stackoverflow_0000067659_.net_asp.net_inheritance_linq_linq_to_sql.txt
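Assuming the single-table mapping from the answer (IsRoot as the discriminator on a merged Page table), the queries the question asks for look roughly like the sketch below; OfType<T>() is the idiomatic subtype filter, and InsertOnSubmit is the released name for what the question calls Add. MyDataContext, Page, and RootPage are the classes the DBML designer would generate:

using System.Linq;

class PageQueries
{
    static void Demo(MyDataContext db)
    {
        // Translates to a WHERE clause on the IsRoot discriminator column:
        var roots = db.Pages.OfType<RootPage>();

        RootPage r = new RootPage();
        r.HtmlTitle = "Foo";
        r.FavIcon = "bar.ico";
        db.Pages.InsertOnSubmit(r);   // the discriminator value is set automatically
        db.SubmitChanges();
    }
}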
Q: Does SQL Server 2005 scale to a large number of databases? If I add 3-400 databases to a single SQL Server instance will I encounter scaling issues introduced by the large number of databases? A: This is one of those questions best answered by: Why are you trying to do this in the first place? What is the concurrency against those databases? Are you generating databases when you could have normalized tables to do the same functionality? That said, yes MSSQL 2005 will handle that level of database per installation. It will more or less be what you are doing with the databases which will seriously impede your performance (incoming connections, CPU usage, etc.) A: According to Joel Spolsky in the SO podcast # 11 you will in any version up to 2005, however this is supposedly fixed in SQL Server 2005. You can see the transcript from the podcast here. A: I have never tried this in 2005. But a company I used to work for tried this on 7.0 and it failed miserably. With 2000 things got a lot better but querying across databases was still painfully slow and took too many system resources. I can only imagine things improved again in 2005. Are you querying across the databases or just hosting them on the same server? If you are querying across the databases, I think you need to take another look at your data architecture and find other ways to separate the data. If it's just a hosting issue, you can always try it out and move off databases to other servers as capacity is reached. Sorry, I don't have a definite answer here.
Does SQL Server 2005 scale to a large number of databases?
If I add 3-400 databases to a single SQL Server instance will I encounter scaling issues introduced by the large number of databases?
[ "This is one of those questions best answered by: Why are you trying to do this in the first place? What is the concurrency against those databases? Are you generating databases when you could have normalized tables to do the same functionality?\nThat said, yes MSSQL 2005 will handle that level of database per installation. It will more or less be what you are doing with the databases which will seriously impede your performance (incoming connections, CPU usage, etc.)\n", "According to Joel Spolsky in the SO podcast # 11 you will in any version up to 2005, however this is supposedly fixed in SQL Server 2005.\nYou can see the transcript from the podcast here.\n", "I have never tried this in 2005. But a company I used to work for tried this on 7.0 and it failed miserably. With 2000 things got a lot better but querying across databases was still painfully slow and took too many system resources. I can only imagine things improved again in 2005.\nAre you querying across the databases or just hosting them on the same server? If you are querying across the databases, I think you need to take another look at your data architecture and find other ways to separate the data. If it's just a hosting issue, you can always try it out and move off databases to other servers as capacity is reached.\nSorry, I don't have a definite answer here.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "sql_server", "sql_server_2005" ]
stackoverflow_0000067788_sql_server_sql_server_2005.txt
Q: Flex and .NET - What's a good way to get data into Flex, WebORB? Web Services? Ok, I asked a question earlier about Flex and ADO.NET Data Services but didn't get much response so I thought I'd rephrase. Does anyone have any experience building Adobe Flex applications with a .NET back-end? If so, what architecture did you use and what third-party tools if any did you employ. I've read a little about doing Flex remoting with WebORB but it seems more complicated than it should be, are web services an adequate alternative? A: I believe web services is actually more complicated and more restrictive. You cannot create stateful web services, data exchange is fairly slow due to verboseness of XML. Developing with WebORB is not that hard. It basically boils down to developing an assembly and deploying it into the /bin folder of a weborb-enabled ASP.NET application. Once you do that you can invoke your .NET classes using Flex's RemoteObject API. For instance: var ro:RemoteObject = new RemoteObject( "GenericDestination" ); ro.source = "com.bar.FooService" ro.foo.addEventListener( ResultEvent.RESULT, gotFooResult ); ro.foo(); public function gotFooResult( evt:ResultEvent ):void { // evt.result contains the return value; } It is important to compile your Flex builder project with the -service compiler argument. You can add in the Flex Builder's "Flex compiler" project properties: -services c:/Inetpub/wwwroot/weborb30/web-inf/flex/services-config.xml If you point to that path, then make sure to deploy your DLL into: c:/Inetpub/wwwroot/weborb30/bin A: I've mainly used plain ASP.NET pages that return XML for situations that are mainly one-way (data from ASP.NET --> Flex/Flash) communication. The Flex side just uses a URLLoader to hit the ASP.NET page and loads the result as XML. If the communication needs to be a little more two-sided (sending more than a couple parameters to ASP.NET lets say), I have used standard ASP.NET webservices. I've never used WebOrb or Flex remoting because I've never really needed that type of interaction between the server and the SWF. Hope that helps.
Flex and .NET - What's a good way to get data into Flex, WebORB? Web Services?
Ok, I asked a question earlier about Flex and ADO.NET Data Services but didn't get much response so I thought I'd rephrase. Does anyone have any experience building Adobe Flex applications with a .NET back-end? If so, what architecture did you use and what third-party tools if any did you employ. I've read a little about doing Flex remoting with WebORB but it seems more complicated than it should be, are web services an adequate alternative?
[ "I believe web services is actually more complicated and more restrictive. You cannot create stateful web services, data exchange is fairly slow due to verboseness of XML. Developing with WebORB is not that hard. It basically boils down to developing an assembly and deploying it into the /bin folder of a weborb-enabled ASP.NET application. Once you do that you can invoke your .NET classes using Flex's RemoteObject API. For instance:\nvar ro:RemoteObject = new RemoteObject( \"GenericDestination\" );\nro.source = \"com.bar.FooService\"\nro.foo.addEventListener( ResultEvent.RESULT, gotFooResult );\nro.foo();\n\npublic function gotFooResult( evt:ResultEvent ):void\n{\n // evt.result contains the return value;\n}\n\nIt is important to compile your Flex builder project with the -service compiler argument. You can add in the Flex Builder's \"Flex compiler\" project properties:\n-services c:/Inetpub/wwwroot/weborb30/web-inf/flex/services-config.xml\n\nIf you point to that path, then make sure to deploy your DLL into:\nc:/Inetpub/wwwroot/weborb30/bin\n\n", "I've mainly used plain ASP.NET pages that return XML for situations that are mainly one-way (data from ASP.NET --> Flex/Flash) communication. The Flex side just uses a URLLoader to hit the ASP.NET page and loads the result as XML.\nIf the communication needs to be a little more two-sided (sending more than a couple parameters to ASP.NET lets say), I have used standard ASP.NET webservices. \nI've never used WebOrb or Flex remoting because I've never really needed that type of interaction between the server and the SWF.\nHope that helps.\n" ]
[ 3, 1 ]
[]
[]
[ ".net", "apache_flex" ]
stackoverflow_0000045078_.net_apache_flex.txt
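A sketch of the "plain page returning XML" approach from the second answer, as a C# HTTP handler a Flex URLLoader can hit (the handler name and XML payload are made up):

using System.Web;

// Save as GetData.ashx (with a WebHandler directive) or register in web.config.
public class GetDataHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";
        context.Response.Write("<items><item id=\"1\" name=\"foo\" /></items>");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}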
Q: Can you link 68K code compiled with CodeWarrior for Palm OS with code compiled with PRC-Tools (GCC)? I've got a Palm OS/Garnet 68K application that uses a third-party static library built with CodeWarrior. Can I rebuild the application using PRC-Tools, the port of GCC for the Palm OS platform, and still link with the third-party library? A: (Expanding on Ben's original answer... not sure of the exact etiquette for that but I can't edit yet so I'll re-post) No, CodeWarrior uses a different object file format than PRC-Tools. Also, the compiler support libraries are different, so even if the code could be statically linked together, it may use symbols in a different way. However, if you can wrap the third-party static library into a Palm OS shared library using CodeWarrior, then you should be able to call it from PRC-Tools applications. The Palm OS shared library interface works across tools, but shared libraries have limited system support so you'll need to be sure the original code doesn't use global variables for this to work. For more information on shared libraries, see Shared libraries on the Palm Pilot. A: No, CodeWarrior uses a different object file format than PRC-Tools. Also, the compiler support libraries are different, so even if the code could be linked together, it may use symbols in a different way. However, if you can wrap the third-party library into a shared library using CodeWarrior, then you should be able to call it from PRC-Tools applications. The shared library interface works across tools, but shared libraries have limited system support, so you'll need to be sure the original code doesn't use global variables for this to work.
Can you link 68K code compiled with CodeWarrior for Palm OS with code compiled with PRC-Tools (GCC)?
I've got a Palm OS/Garnet 68K application that uses a third-party static library built with CodeWarrior. Can I rebuild the application using PRC-Tools, the port of GCC for the Palm OS platform, and still link with the third-party library?
[ "(Expanding on Ben's original answer... not sure of the exact etiquette for that but I can't edit yet so I'll re-post)\nNo, CodeWarrior uses a different object file format than PRC-Tools. Also, the compiler support libraries are different, so even if the code could be statically linked together, it may use symbols in a different way.\nHowever, if you can wrap the third-party static library into a Palm OS shared library using CodeWarrior, then you should be able to call it from PRC-Tools applications. The Palm OS shared library interface works across tools, but shared libraries have limited system support so you'll need to be sure the original code doesn't use global variables for this to work.\nFor more information on shared libraries, see Shared libraries on the Palm Pilot.\n", "No, CodeWarrior uses a different object file format than PRC-Tools. Also, the compiler support libraries are different, so even if the code could be linked together, it may use symbols in a different way.\nHowever, if you can wrap the third-party library into a shared library using CodeWarrior, then you should be able to call it from PRC-Tools applications. The shared library interface works across tools, but shared libraries have limited system support, so you'll need to be sure the original code doesn't use global variables for this to work.\n" ]
[ 4, 2 ]
[]
[]
[ "codewarrior", "garnet_os", "palm_os", "prc_tools" ]
stackoverflow_0000017127_codewarrior_garnet_os_palm_os_prc_tools.txt
Q: how to get the number of apache children free within php In php, how can I get the number of apache children that are currently available (status = SERVER_READY in the apache scoreboard)? I'm really hoping there is a simple way to do this in php that I am missing. A: You could execute a shell command of ps aux | grep httpd or ps aux | grep apache and count the number of lines in the output. exec('ps aux | grep apache', $output); $processes = count($output); I'm not sure which status in the status column indicates that it's ready to accept a connection, but you can filter against that to get a count of ready processes. A: If you have access to the Apache server status page, try using the ?auto flag: http://yourserver/server-status?auto The output is a machine-readable version of the status page. I believe you are looking for "IdleWorkers". Here's some simple PHP5 code to get you started. In real life you'd probably want to use cURL or a socket connection to initiate a timeout in case the server is offline. <?php $status = file('http://yourserver/server-status?auto'); foreach ($status as $line) { if (substr($line, 0, 11) == 'IdleWorkers') { $idle_workers = trim(substr($line, 12)); print $idle_workers; break; } } ?>
how to get the number of apache children free within php
In php, how can I get the number of apache children that are currently available (status = SERVER_READY in the apache scoreboard)? I'm really hoping there is a simple way to do this in php that I am missing.
[ "You could execute a shell command of ps aux | grep httpd or ps aux | grep apache and count the number of lines in the output.\nexec('ps aux | grep apache', $output);\n$processes = count($output);\n\nI'm not sure which status in the status column indicates that it's ready to accept a connection, but you can filter against that to get a count of ready processes.\n", "If you have access to the Apache server status page, try using the ?auto flag:\nhttp://yourserver/server-status?auto\nThe output is a machine-readable version of the status page. I believe you are looking for \"IdleWorkers\". Here's some simple PHP5 code to get you started. In real life you'd probably want to use cURL or a socket connection to initiate a timeout in case the server is offline.\n<?php\n\n$status = file('http://yourserver/server-status?auto');\nforeach ($status as $line) {\n if (substr($line, 0, 10) == 'IdleWorkers') {\n $idle_workers = trim(substr($line, 12));\n print $idle_workers;\n break;\n }\n}\n\n?>\n\n" ]
[ 2, 1 ]
[]
[]
[ "apache", "php" ]
stackoverflow_0000067819_apache_php.txt
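For comparison with the PHP snippet above, the same ?auto scrape is a few lines in C# as well — a sketch only, with the server URL as a placeholder and mod_status assumed to be enabled:

using System;
using System.Net;

class ApacheStatus
{
    static void Main()
    {
        // The ?auto flag makes mod_status return the machine-readable form.
        string status = new WebClient().DownloadString(
            "http://yourserver/server-status?auto");

        foreach (string line in status.Split('\n'))
        {
            // One line of the output looks like "IdleWorkers: 5".
            if (line.StartsWith("IdleWorkers:"))
            {
                int idle = int.Parse(line.Substring("IdleWorkers:".Length).Trim());
                Console.WriteLine(idle);
                break;
            }
        }
    }
}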
Q: OpenGL Rotation I'm trying to do a simple rotation in OpenGL but must be missing the point. I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally. At the moment I have code like this: glPushMatrix(); glRotatef(90.0, 0.0, 1.0, 0.0); glBegin(GL_TRIANGLES); glVertex3f( 1.0, 1.0, 0.0 ); glVertex3f( 3.0, 2.0, 0.0 ); glVertex3f( 3.0, 1.0, 0.0 ); glEnd(); glPopMatrix(); But the result is not a triangle rotated 90 degrees. Edit Hmm thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that uses glOrtho. I'm too new to OpenGL to have any idea of what this meant but disabling this and rotating in the Z-axis produced the desired result. A: Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call: glMatrixMode(GL_MODELVIEW); Otherwise, you may be modifying either the projection or a texture matrix instead. A: Do you get a 1 unit straight line? It seems that 90deg rot. around Y is going to have you looking at the side of a triangle with no depth. You should try rotating around the Z axis instead and see if you get something that makes more sense. OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" on your 2D monitor. ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen. Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode. A: The "accepted answer" is not fully correct - rotating around the Z will not help you see this triangle unless you've done some strange things prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues. Two major problems with the code as written: Have you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookAt() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work. You are drawing the triangle in a left handed, or clockwise method, whereas the default for OpenGL is a right handed, or counterclockwise coordinate system. This means that, if you are culling backfaces (which you are probably not, but will likely move onto as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices (1,1) to (3,2) to (3,1). When you do this, your thumb is facing away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces in a right handed method, since that is the common way it is done in OpenGL. 
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow. A: When I had a first look at OpenGL, the NeHe tutorials (see the left menu) were invaluable. A: I'd like to recommend a book: 3D Computer Graphics : A Mathematical Introduction with OpenGL by Samuel R. Buss It provides very clear explanations, and the mathematics are widely applicable to non-graphics domains. You'll also find a thorough description of orthographic projections vs. perspective transformations. A: Regarding the Projection matrix, you can find a good source to start with here: http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx It explains a bit about how to construct one type of projection matrix. Orthographic projection is the very basic/primitive form of such a matrix and basically what it does is take 2 of the 3 axis coordinates and project them to the screen (you can still flip axes and scale them but there is no warp or perspective effect). Transformation of matrices is most likely one of the most important things when rendering in 3D and basically involves 3 matrix stages: Transform1 = Object coordinates system to World (for example - object rotation and scale) Transform2 = World coordinates system to Camera (placing the object in the right place) Transform3 = Camera coordinates system to Screen space (projecting to screen) Usually the 3 matrix multiplication result is referred to as the WorldViewProjection matrix (if you ever bump into this term), since it transforms the coordinates from Model space through World, then to Camera and finally to the screen representation. Have fun
OpenGL Rotation
I'm trying to do a simple rotation in OpenGL but must be missing the point. I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally. At the moment I have code like this: glPushMatrix(); glRotatef(90.0, 0.0, 1.0, 0.0); glBegin(GL_TRIANGLES); glVertex3f( 1.0, 1.0, 0.0 ); glVertex3f( 3.0, 2.0, 0.0 ); glVertex3f( 3.0, 1.0, 0.0 ); glEnd(); glPopMatrix(); But the result is not a triangle rotated 90 degrees. Edit Hmm thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that uses glOrtho. I'm too new to OpenGL to have any idea of what this meant but disabling this and rotating in the Z-axis produced the desired result.
[ "Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:\nglMatrixMode(GL_MODELVIEW);\n\nOtherwise, you may be modifying either the projection or a texture matrix instead.\n", "Do you get a 1 unit straight line? It seems that 90deg rot. around Y is going to have you looking at the side of a triangle with no depth.\nYou should try rotating around the Z axis instead and see if you get something that makes more sense. \nOpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied with transforms the data from view space for \"projection\" on your 2D monitor. \nModelView is used to position multiple objects to their locations in the \"world\", Projection is used to position the objects onto the screen.\nYour code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.\n", "The \"accepted answer\" is not fully correct - rotating around the Z will not help you see this triangle unless you've done some strange things prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues.\nTwo major problems with the code as written:\n\nHave you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookat() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work.\nYou are drawing the triangle in a left handed, or clockwise method, whereas the default for OpenGL is a right handed, or counterclockwise coordinate system. This means that, if you are culling backfaces (which you are probably not, but will likely move onto as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices (1,1) to (3,2) to (3,1). When you do this, your thumb is facing away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces in a right handed method, since that is the common way it is done in OpenGL.\n\nThe best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow. \n", "When I had a first look at OpenGL, the NeHe tutorials (see the left menu) were invaluable.\n", "I'd like to recommend a book:\n3D Computer Graphics : A Mathematical Introduction with OpenGL by Samuel R. Buss\nIt provides very clear explanations, and the mathematics are widely applicable to non-graphics domains. You'll also find a thorough description of orthographic projections vs. 
perspective transformations.\n", "Regarding Projection matrix, you can find a good source to start with here:\nhttp://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx\nIt explains a bit about how to construct one type of projection matrix. Orthographic projection is the very basic/primitive form of such a matrix and basically what is does is taking 2 of the 3 axes coordinates and project them to the screen (you can still flip axes and scale them but there is no warp or perspective effect).\ntransformation of matrices is most likely one of the most important things when rendering in 3D and basically involves 3 matrix stages:\n\nTransform1 = Object coordinates system to World (for example - object rotation and scale)\nTransform2 = World coordinates system to Camera (placing the object in the right place)\nTransform3 = Camera coordinates system to Screen space (projecting to screen)\n\nUsually the 3 matrix multiplication result is referred to as the WorldViewProjection matrix (if you ever bump into this term), since it transforms the coordinates from Model space through World, then to Camera and finally to the screen representation.\nHave fun\n" ]
[ 7, 6, 5, 1, 1, 1 ]
[]
[]
[ "c++", "glut", "opengl" ]
stackoverflow_0000023918_c++_glut_opengl.txt
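The edge-on effect described in the accepted answer is easy to verify by hand. Rotating the question's vertices 90 degrees about the y axis:

R_y(90^\circ) = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad (1,1,0) \mapsto (0,1,-1), \; (3,2,0) \mapsto (0,2,-3), \; (3,1,0) \mapsto (0,1,-3).

Every rotated vertex has x = 0, so the triangle collapses into the x = 0 plane, edge-on to the default camera looking down the z axis — hence no visible triangle. Rotating about z instead,

R_z(90^\circ) = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad (1,1,0) \mapsto (-1,1,0),

keeps the triangle in the z = 0 plane and therefore potentially visible, subject to the camera-placement and winding-order caveats in the third answer.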
Q: How do you extend Linq to SQL? Last year, Scott Guthrie stated “You can actually override the raw SQL that LINQ to SQL uses if you want absolute control over the SQL executed”, but I can’t find documentation describing an extensibility method. I would like to modify the following LINQ to SQL query: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers let orderCount = row.Orders.Count () select new { row.ContactName, orderCount }; } Which results in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount] FROM [dbo].[Customers] AS [t0] To: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers.With ( TableHint.NoLock, TableHint.Index (0)) let orderCount = row.Orders.With ( TableHint.HoldLock).Count () select new { row.ContactName, orderCount }; } Which would result in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WITH (HOLDLOCK) WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount] FROM [dbo].[Customers] AS [t0] WITH (NOLOCK, INDEX(0)) Using: public static Table<TEntity> With<TEntity> ( this Table<TEntity> table, params TableHint[] args) where TEntity : class { //TODO: implement return table; } public static EntitySet<TEntity> With<TEntity> ( this EntitySet<TEntity> entitySet, params TableHint[] args) where TEntity : class { //TODO: implement return entitySet; } And public class TableHint { //TODO: implement public static TableHint NoLock; public static TableHint HoldLock; public static TableHint Index (int id) { return null; } public static TableHint Index (string name) { return null; } } Using some type of LINQ to SQL extensibility, other than this one. Any ideas? A: The ability to change the underlying provider and thus modify the SQL did not make the final cut in LINQ to SQL.
How do you extend Linq to SQL?
Last year, Scott Guthrie stated “You can actually override the raw SQL that LINQ to SQL uses if you want absolute control over the SQL executed”, but I can’t find documentation describing an extensibility method. I would like to modify the following LINQ to SQL query: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers let orderCount = row.Orders.Count () select new { row.ContactName, orderCount }; } Which results in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount] FROM [dbo].[Customers] AS [t0] To: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers.With ( TableHint.NoLock, TableHint.Index (0)) let orderCount = row.Orders.With ( TableHint.HoldLock).Count () select new { row.ContactName, orderCount }; } Which would result in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WITH (HOLDLOCK) WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount] FROM [dbo].[Customers] AS [t0] WITH (NOLOCK, INDEX(0)) Using: public static Table<TEntity> With<TEntity> ( this Table<TEntity> table, params TableHint[] args) where TEntity : class { //TODO: implement return table; } public static EntitySet<TEntity> With<TEntity> ( this EntitySet<TEntity> entitySet, params TableHint[] args) where TEntity : class { //TODO: implement return entitySet; } And public class TableHint { //TODO: implement public static TableHint NoLock; public static TableHint HoldLock; public static TableHint Index (int id) { return null; } public static TableHint Index (string name) { return null; } } Using some type of LINQ to SQL extensibility, other than this one. Any ideas?
[ "The ability to change the underlying provider and thus modify the SQL did not make the final cut in LINQ to SQL.\n" ]
[ 11 ]
[ "DataContext x = new DataContext\nSomething like this perhaps?\nvar a = x.Where().with()...etc \nIt lets you have a much finer control over the SQL.\n" ]
[ -1 ]
[ "linq", "linq_to_sql" ]
stackoverflow_0000062963_linq_linq_to_sql.txt
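Although that extensibility point never shipped, a workaround often used for the table-hint case is to let LINQ to SQL build the command, patch the generated SQL text, and materialize the results manually. A sketch only — the string replacement is brittle (it depends on the generated [t0] alias), and NorthwindContext/Customer come from the question:

using System.Data.Common;
using System.Linq;

using (NorthwindContext northwind = new NorthwindContext())
{
    IQueryable<Customer> q = northwind.Customers;

    // GetCommand and Translate are real DataContext members.
    DbCommand cmd = northwind.GetCommand(q);
    cmd.CommandText = cmd.CommandText.Replace(
        "FROM [dbo].[Customers] AS [t0]",
        "FROM [dbo].[Customers] AS [t0] WITH (NOLOCK)");

    northwind.Connection.Open();
    using (DbDataReader reader = cmd.ExecuteReader())
    {
        var customers = northwind.Translate<Customer>(reader).ToList();
    }
}

For the NOLOCK case specifically, wrapping the query in a System.Transactions.TransactionScope with IsolationLevel.ReadUncommitted is usually a less fragile alternative that needs no SQL patching.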
Q: Folders or Projects in a Visual Studio Solution? When splitting a solution into logical layers, when is it best to use a separate project over just grouping by a folder? A: By default, always just create new folder within the same project You will get single assembly (without additional ILMerge gymnastic) Easier to obfuscate (because you will have less public types and methods, ideally none at all) Separating your source code into multiple projects makes only sense if you... Have some portions of the source code that are part of the project but not deployable by default or at all (unit tests, extra plugins etc.) More developers involved and you want to treat their work as consumable black box. (not very recommended) If you can clearly separate your project into isolated layers/modules and you want to make sure that they can't cross-consume internal members. (also not recommended because you will need to decide which aspect is the most important) If you think that some portions of your source code could be reusable, still don't create it as a new project. Just wait until you will really want to reuse it in another solution and isolate it out of original project as needed. Programming is not a lego, reusing is usually very difficult and often won't happen as planned. A: denny wrote: I personally feel that if reusable code is split into projects it is simpler to use other places than if it is just in folders. I really agree with this - if you can reuse it, it should be in a separate project. With that said, it's also very difficult to reuse effectively :) Here at SO, we've tried to be very simple with three projects: MVC Web project (which does a nice job of separating your layers into folders by default) Database project for source control of our DB Unit tests against MVC models/controllers I can't speak for everyone, but I'm happy with how simple we've kept it - really speeds the builds along! A: Separating features into projects is often a YAGNI architecture optimization. How often have you reused those separate projects, really? If it's not a frequent occurrence, you're complicating your development, build, deployment, and maintenance for theoretical reuse. I much prefer separating into folders (using appropriate namespaces) and refactoring to separate projects when you've got a real-life reuse use case. A: I usually do a project for the GUI a project for the business logic a project for data access and a project for unit tests. But sometimes it is prudent to have separation based upon services (if you are using a service oriented architecture) Such as Authentication, Sales, etc. I guess the rule of thumb that I work off of is that if you can see it as a component that has a clear separation of concerns then a different project could be prudent. But I would think that folders versus projects could just be a preference or philosophy. I personally feel that if reusable code is split into projects it is simpler to use other places than if it is just in folders. A: I really think it is better to split the project as well, but it all depends on the size of the project and the number of people working on it. For larger projects, I have a projects for data access (models) services front end tests I got the model from Rob Connery and his storefront application... seems to work really well. mvc-storefront A: Separating your source code into multiple projects makes only sense if you... ... More developers involved and you want to treat their work as consumable black box. 
(not very recommended) ... Why isn't this recommended? I've found it a very useful way to manage an application with several devs working on different portions. Makes checkins much easier, mainly by virtually eliminating merges. Very rarely will two devs have to work on the same project at the same time. A: If you do go for creating several projects, make sure everyone who adds code to the solution is fully aware of the intention behind them and do everything you can to get them to understand the dependencies between the projects. If you have ever tried to sort out the mess when someone has gone and added references that shouldn't have been there and got away with it for weeks, you will understand this point.
Folders or Projects in a Visual Studio Solution?
When splitting a solution into logical layers, when is it best to use a separate project over just grouping by a folder?
[ "By default, always just create new folder within the same project\n\nYou will get single assembly (without additional ILMerge gymnastic)\nEasier to obfuscate (because you will have less public types and methods, ideally none at all)\n\nSeparating your source code into multiple projects makes only sense if you...\n\nHave some portions of the source code that are part of the project but not deployable by default or at all (unit tests, extra plugins etc.)\nMore developers involved and you want to treat their work as consumable black box. (not very recommended)\nIf you can clearly separate your project into isolated layers/modules and you want to make sure that they can't cross-consume internal members. (also not recommended because you will need to decide which aspect is the most important)\n\nIf you think that some portions of your source code could be reusable, still don't create it as a new project. Just wait until you will really want to reuse it in another solution and isolate it out of original project as needed. Programming is not a lego, reusing is usually very difficult and often won't happen as planned.\n", "denny wrote:\n\nI personally feel that if reusable code is split into projects it is simpler to use other places than if it is just in folders.\n\nI really agree with this - if you can reuse it, it should be in a separate project. With that said, it's also very difficult to reuse effectively :)\nHere at SO, we've tried to be very simple with three projects:\n\nMVC Web project (which does a nice job of separating your layers into folders by default)\nDatabase project for source control of our DB\nUnit tests against MVC models/controllers\n\nI can't speak for everyone, but I'm happy with how simple we've kept it - really speeds the builds along!\n", "Separating features into projects is often a YAGNI architecture optimization. How often have you reused those separate projects, really? If it's not a frequent occurrence, you're complicating your development, build, deployment, and maintenance for theoretical reuse.\nI much prefer separating into folders (using appropriate namespaces) and refactoring to separate projects when you've got a real-life reuse use case.\n", "I usually do a project for the GUI a project for the business logic a project for data access and a project for unit tests.\nBut sometimes it is prudent to have separation based upon services (if you are using a service oriented architecture) Such as Authentication, Sales, etc. \nI guess the rule of thumb that I work off of is that if you can see it as a component that has a clear separation of concerns then a different project could be prudent. But I would think that folders versus projects could just be a preference or philosophy.\nI personally feel that if reusable code is split into projects it is simpler to use other places than if it is just in folders.\n", "I really think it is better to split the project as well, but it all depends on the size of the project and the number of people working on it. \nFor larger projects, I have a projects for \n\ndata access (models) \nservices \nfront end \ntests \n\nI got the model from Rob Connery and his storefront application... seems to work really well. \nmvc-storefront\n", "\nSeparating your source code into\n multiple projects makes only sense if\n you... \n ... More developers involved\n and you want to treat their work as\n consumable black box. (not very\n recommended) ...\n\nWhy isn't this recommended? 
I've found it a very useful way to manage an application with several devs working on different portions. Makes checkins much easier, mainly by virtually eliminating merges. Very rarely will two devs have to work on the same project at the same time.\n", "If you do go for creating several projects, make sure everyone who adds code to the solution is fully aware of the intention behind them and do everything you can to get them to understand the dependencies between the projects. If you have ever tried to sort out the mess when someone has gone and added references that shouldn't have been there and got away with it for weeks, you will understand this point.\n" ]
[ 17, 7, 7, 4, 1, 0, 0 ]
[]
[]
[ "projects_and_solutions", "visual_studio" ]
stackoverflow_0000001623_projects_and_solutions_visual_studio.txt
Q: Copying data from one DataTable to another What is the fastest way of transferring a few thousand rows of data from one DataTable to another? Would be great to see some sample code snippets. Edit: I need to explain a bit more. There is a filtering condition for copying the rows. So, a plain Copy() will not work. A: You can't copy the whole table; you need to copy the rows one at a time. From http://support.microsoft.com/kb/308909 (sample code if you follow the link) "How to Copy DataRows Between DataTables Before you use the ImportRow method, you must ensure that the target table has the identical structure as the source table. This sample uses the Clone method of DataTable class to copy the structure of the DataTable, including all DataTable schemas, relations, and constraints. This sample uses the Products table that is included with the Microsoft SQL Server Northwind database. The first five rows are copied from the Products table to another table that is created in memory." A: What is wrong with DataTable.Copy? A: Copying rows to a table throws some flags at me. I've seen people try this before, and in every single case what they really wanted was a System.Data.DataView. You really should check to see if the RowFilter property will do what you need it to do.
Copying data from one DataTable to another
What is the fastest way of transferring a few thousand rows of data from one DataTable to another? Would be great to see some sample code snippets. Edit: I need to explain a bit more. There is a filtering condition for copying the rows. So, a plain Copy() will not work.
[ "You can't copy the whole table, you need to copy one rows. From http://support.microsoft.com/kb/308909 (sample code if you follow the link)\n\"How to Copy DataRows Between DataTables\nBefore you use the ImportRow method, you must ensure that the target table has the identical structure as the source table. This sample uses the Clone method of DataTable class to copy the structure of the DataTable, including all DataTable schemas, relations, and constraints.\nThis sample uses the Products table that is included with the Microsoft SQL Server Northwind database. The first five rows are copied from the Products table to another table that is created in memory.\"\n", "What is wrong with DataTable.Copy?\n", "Copying rows to a table throws some flags at me. I've seen people try this before, and in every single case what they really wanted was a System.Data.DataView. You really should check to see if the RowFilter property will do what you need it to do.\n" ]
[ 7, 3, 2 ]
[]
[]
[ "ado.net", "c#" ]
stackoverflow_0000067929_ado.net_c#.txt
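Putting the KB article's ImportRow approach together with the filtering requirement from the edit — a sketch, with the filter expression as a placeholder:

using System.Data;

static DataTable CopyFiltered(DataTable source, string filter)
{
    DataTable target = source.Clone();      // copies schema only, no rows

    // Select() uses the same expression syntax as DataColumn.Expression,
    // e.g. "Quantity > 100" (placeholder — substitute the real condition).
    foreach (DataRow row in source.Select(filter))
        target.ImportRow(row);              // preserves values and row state

    return target;
}

If the rows only need to be read rather than owned by a second table, the DataView/RowFilter suggestion in the last answer avoids the copy entirely.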
Q: Multiline C# Regex to match after a blank line I'm looking for a multiline regex that will match occurrences after a blank line. For example, given a sample email below, I'd like to match "From: Alex". ^From:\s*(.*)$ works to match any From line, but I want it to be restricted to lines in the body (anything after the first blank line). Received: from a server Date: today To: Ted From: James Subject: [fwd: hi] fyi ----- Forwarded Message ----- To: James From: Alex Subject: hi Party! A: I'm not sure of the syntax of C# regular expressions but you should have a way to anchor to the beginning of the string (not the beginning of the line such as ^). I'll call that "\A" in my example: \A.*?\r?\n\r?\n.*?^From:\s*([^\r\n]+)$ Make sure you turn the multiline matching option on, however that works, to make "." match \n A: Writing complicated regular expressions for such jobs is a bad idea IMO. It's better to combine several simple queries. For example, first search for "\r\n\r\n" to find the start of the body, then run the simple regex over the body. A: This is using a look-behind assertion. Group 1 will give you the "From" line, and group 2 will give you the actual value ("Alex", in your example). (?<=\n\n).*(From:\s*(.*?))$ A: \s{2,}.+?(.+?From:\s(?<Sender>.+?)\s)+? The \s{2,} matches at least two whitespace characters, meaning your first From: James won't hit. Then it's just a matter of looking for the next "From:" and start capturing from there. Use this with RegexOptions.Singleline and RegexOptions.ExplicitCapture; this means the outer group won't hit.
Multiline C# Regex to match after a blank line
I'm looking for a multiline regex that will match occurrences after a blank line. For example, given a sample email below, I'd like to match "From: Alex". ^From:\s*(.*)$ works to match any From line, but I want it to be restricted to lines in the body (anything after the first blank line). Received: from a server Date: today To: Ted From: James Subject: [fwd: hi] fyi ----- Forwarded Message ----- To: James From: Alex Subject: hi Party!
[ "I'm not sure of the syntax of C# regular expressions but you should have a way to anchor to the beginning of the string (not the beginning of the line such as ^). I'll call that \"\\A\" in my example:\n\\A.*?\\r?\\n\\r?\\n.*?^From:\\s*([^\\r\\n]+)$\n\nMake sure you turn the multiline matching option on, however that works, to make \".\" match \\n\n", "Writing complicated regular expressions for such jobs is a bad idea IMO. It's better to combine several simple queries. For example, first search for \"\\r\\n\\r\\n\" to find the start of the body, then run the simple regex over the body.\n", "This is using a look-behind assertion. Group 1 will give you the \"From\" line, and group 2 will give you the actual value (\"Alex\", in your example).\n(?<=\\n\\n).*(From:\\s*(.*?))$\n\n", "\\s{2,}.+?(.+?From:\\s(?<Sender>.+?)\\s)+?\n\nThe \\s{2,} matches at least two whitespace characters, meaning your first From: James won't hit. Then it's just a matter of looking for the next \"From:\" and start capturing from there.\nUse this with RegexOptions.SingleLine and RegexOptions.ExplicitCapture, this means the outer group won't hit.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "regex" ]
stackoverflow_0000067798_regex.txt
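The split-then-match suggestion from the second answer is straightforward in C#; a sketch, assuming the first blank line separates headers from body:

using System.Text.RegularExpressions;

static string FirstFromInBody(string email)
{
    // Find the first blank line, tolerating both \r\n and \n endings.
    Match blank = Regex.Match(email, @"\r?\n\r?\n");
    if (!blank.Success)
        return null;                        // headers only, no body

    string body = email.Substring(blank.Index + blank.Length);

    // Now the simple per-line pattern from the question is safe to use.
    Match from = Regex.Match(body, @"^From:\s*([^\r\n]+)", RegexOptions.Multiline);
    return from.Success ? from.Groups[1].Value : null;
}

On the sample message this returns "Alex", because "From: James" sits above the first blank line and is never searched.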
Q: What is the simplest way to initialize an Array of N numbers following a simple pattern? Let's say the first N integers divisible by 3 starting with 9. I'm sure there is some one-line solution using lambdas, I just don't know that area of the language well enough yet. A: Using Linq: int[] numbers = Enumerable.Range(9,10000) .Where(x => x % 3 == 0) .Take(20) .ToArray(); Also easily parallelizable using PLinq if you need: int[] numbers = Enumerable.Range(9,10000) .AsParallel() //added this line .Where(x => x % 3 == 0) .Take(20) .ToArray(); A: Just to be different (and to avoid using a where statement) you could also do: var numbers = Enumerable.Range(0, n).Select(i => i * 3 + 9); Update This also has the benefit of not running out of numbers. A: const int __N = 100; const int __start = 9; const int __divisibleBy = 3; var array = Enumerable.Range(__start, __N * __divisibleBy).Where(x => x % __divisibleBy == 0).Take(__N).ToArray(); A: int n = 10; // Take first 10 that meet criteria int[] ia = Enumerable .Range(0,999) .Where(a => a % 3 == 0 && a.ToString()[0] == '9') .Take(n) .ToArray(); A: I want to see how this solution stacks up to the above Linq solutions. The trick here is modifying the predicate using the fact that the set of (q % m) starting from s is (s + (s % m) + m*n) (where n represents the nth value in the set). In our case s=q. The only problem with this solution is that it has the side effect of making your implementation depend on the specific pattern you choose (and not all patterns have a suitable predicate). But it has the advantage of: Always running in exactly n iterations Never failing like the above proposed solutions (wrt the limited Range). Besides, no matter what pattern you choose, you will always need to modify the predicate, so you might as well make it mathematically efficient: static int[] givemeN(int n) { const int baseVal = 9; const int modVal = 3; int i = 0; return Array.ConvertAll<int, int>( new int[n], new Converter<int, int>( x => baseVal + (baseVal % modVal) + ((i++) * modVal) )); } edit: I just want to illustrate how you could use this method with a delegate to improve code re-use: static int[] givemeN(int n, Func<int, int> func) { int i = 0; return Array.ConvertAll<int, int>(new int[n], new Converter<int, int>(a => func(i++))); } You can use it with givemeN(5, i => 9 + 3 * i). Again note that I modified the predicate, but you can do this with most simple patterns too. A: I can't say this is any good, I'm not a C# expert and I just whacked it out, but I think it's probably a canonical example of the use of yield. internal IEnumerable Answer(int N) { int n=0; int i=9; while (true) { if (i % 3 == 0) { n++; yield return i; } if (n>=N) yield break; i++; } } A: You have to iterate through 0 or 1 to N and add them by hand. Or, you could just create your function f(int n), and in that function, you cache the results inside session or a global hashtable or dictionary. Pseudocode, where ht is a global Hashtable or Dictionary (strongly recommend the latter, because it is strongly typed). public int f(int n) { if(ht.ContainsKey(n)) return ht[n]; else { //do calculation ht[n] = result; return result; } } Just a side note. If you do this type of functional programming all the time, you might want to check out F#, or maybe even Iron Ruby or Python.
What is the simplest way to initialize an Array of N numbers following a simple pattern?
Let's say the first N integers divisible by 3 starting with 9. I'm sure there is some one-line solution using lambdas, I just don't know that area of the language well enough yet.
[ "Using Linq:\nint[] numbers =\n Enumerable.Range(9,10000)\n .Where(x => x % 3 == 0)\n .Take(20)\n .ToArray();\n\nAlso easily parallelizeable using PLinq if you need:\nint[] numbers =\n Enumerable.Range(9,10000)\n .AsParallel() //added this line\n .Where(x => x % 3 == 0)\n .Take(20)\n .ToArray();\n\n", "Just to be different (and to avoid using a where statement) you could also do:\nvar numbers = Enumerable.Range(0, n).Select(i => i * 3 + 9);\n\nUpdate This also has the benefit of not running out of numbers.\n", "const int __N = 100;\nconst int __start = 9;\nconst int __divisibleBy = 3;\n\n\nvar array = Enumerable.Range(__start, __N * __divisibleBy).Where(x => x % __divisibleBy == 0).Take(__N).ToArray();\n\n", "int n = 10; // Take first 10 that meet criteria\nint[] ia = Enumerable\n .Range(0,999)\n .Where(a => a % 3 == 0 && a.ToString()[0] == '9')\n .Take(n)\n .ToArray();\n\n", "I want to see how this solution stacks up to the above Linq solutions. The trick here is modifying the predicate using the fact that the set of (q % m) starting from s is (s + (s % m) + m*n) (where n represent's the nth value in the set). In our case s=q. \nThe only problem with this solution is that it has the side effect of making your implementation depend on the specific pattern you choose (and not all patterns have a suitable predicate). But it has the advantage of:\n\nAlways running in exactly n iterations \nNever failing like the above proposed solutions (wrt to the limited Range). \n\nBesides, no matter what pattern you choose, you will always need to modify the predicate, so you might as well make it mathematically efficient: \nstatic int[] givemeN(int n)\n{\n const int baseVal = 9;\n const int modVal = 3;\n\n int i = 0;\n return Array.ConvertAll<int, int>(\n new int[n],\n new Converter<int, int>(\n x => baseVal + (baseVal % modVal) + \n ((i++) * modVal)\n ));\n}\n\nedit: I just want to illustrate how you could use this method with a delegate to improve code re-use:\nstatic int[] givemeN(int n, Func<int, int> func)\n{\n int i = 0;\n return Array.ConvertAll<int, int>(new int[n], new Converter<int, int>(a => func(i++)));\n}\n\nYou can use it with givemeN(5, i => 9 + 3 * i). Again note that I modified the predicate, but you can do this with most simple patterns too.\n", "I can't say this is any good, I'm not a C# expert and I just whacked it out, but I think it's probably a canonical example of the use of yield.\ninternal IEnumerable Answer(N)\n{\n int n=0;\n int i=9;\n while (true)\n {\n if (i % 3 == 0)\n {\n n++;\n yield return i;\n }\n\n if (n>=N) return;\n i++;\n }\n}\n\n", "You have to iterate through 0 or 1 to N and add them by hand. Or, you could just create your function f(int n), and in that function, you cache the results inside session or a global hashtable or dictionary.\nPseudocode, where ht is a global Hashtable or Dictionary (strongly recommend the later, because it is strongly typed.\npublic int f(int n)\n{\n if(ht[n].containsValue)\n return ht[n];\n else\n {\n //do calculation\n ht[n] = result;\n return result;\n }\n}\n\nJust a side note. If you do this type of functional programming all the time, you might want to check out F#, or maybe even Iron Ruby or Python.\n" ]
[ 5, 5, 1, 1, 1, 0, 0 ]
[]
[]
[ "c#" ]
stackoverflow_0000067492_c#.txt
Q: Choosing a new development machine I'm not sure how this question will be received here but let's give it a shot... It's time for me to get a new dev PC. What's the best choice these days? I typically have 2-3 Visual Studios open along with mail and all that stuff. Ideally I would imagine 2+ GB of RAM would be nice as my current XP box is dying. =) I hopped on the Dell site (my days of building PC's are behind me. I just need something that gets the job done.) and started browsing around only to be confused by all the processor choices. What does a typical dev box need these days? Duo? Quad? Is it worth going to 64 bit Vista as well? It's been a while since I got a new machine so I'm just looking for some guidance. Thanks A: I just built a quad core - 8 GB of RAM and run Server 2008 with Hyper-V on it. I have VMs for my build server, dev platform, and deployment options (XP, Vista, Server 2003/2008) with snapshots at the various service pack levels. What's nice is you can spin up a VM whenever you need it, and re-allocate the resources when you don't.. So if I want to have 4 or 5 GB of ram and four processors available for my dev platform, no problem.. when I need to test some installs, I can save my status and spin up my test machines.. (and it only ran about $800 US). A: Jeff's ultimate developer rig series is great, but it's out of date. If you want to build your own ultimate developer rig, you can do hours of research to get the perfect list or use the tricks below to come up with a great component list in a short time. Credits: Mehul taught me this method and it's a huge time-saver. The Basic PC Builder Shopping List Start with the basic system builder shopping list: Computer case Power supply Motherboard CPU Video card RAM Hard Drive DVD-ROM Monitors Optional: Extra fans Optional: Windows (This list is good for most of us. Add/remove for your specific needs.) The Short Version Make a wish list at NewEgg.com to track your component choices and estimate price. For each item on the shopping list above, go to the NewEgg.com category and list the top sellers sorted by most reviews. Read some reviews on the top 3 items listed and add one to your wish list. You may want to check Dell.com and deal sites for monitor options. When you're finished you'll have a solid list of great components that have been well reviewed by a large group of talented system builders. The Detailed Version Start at Gear Geek Heaven: Go to NewEgg.com, create an account and start a wish list to keep track of your selections. NewEgg.com selection, prices and service are good, but you don't have to buy at NewEgg.com. You're going to use the site to keep track of your component choices and get a good price estimate. Let the Wisdom of the Geeks Narrow Your Options The biggest problem with spec'ing a new developer rig is that there are too many options. To narrow your options, observe the behavior of a large group of hardware enthusiasts, record their preferences and use that data to guide your decision. (Everyone who comments at NewEgg.com isn't an expert, but there are many intelligent buyers here who write helpful reviews.) In other words, find the top selling and best reviewed items on NewEgg.com, a popular hardware site for system builders. Score = (Sales-Rank + Review-Count) * Rating * Price NewEgg.com is the right place to learn what the system builders are doing, but it's not obvious at first glance how to do that. You'll have to drill down a bit to see the top selling items. 
You also need something more helpful than just the top selling items, you want gear that's been used and reviewed by a large group of active and enthusiastic gear geeks so you'll want to factor in customer reviews, too. Find Top sellers in the item category, then sort by Most Reviews Use the NewEgg.com top level menu to navigate to the category for that item type. Then use the left sidebar menu to drill down to a little more specific sub-category. Click the Top Sellers link on the left sidebar to list the top selling items for that category. Then sort by "Most Reviews" by selecting the dropdown next to the search box on the upper right part of the page. Don't input any search text. Hands-On Example Example: On the top menu bar of NewEgg.com, select Computer Hardware/Motherboards then click a sub-category link on the left sidebar like Intel Motherboards. In the sub-category, you should see an option on the left sidebar for "Top Sellers" select that link to list the top selling items in that category. The search listing should now show the top selling items in the category. Sort this listing by "Most Reviews". At the top of listings on the upper right is a search box, next to the search box is the dropdown box with the option, "Most Reviews". Leave the search box blank and select the "Most Reviews" sort option from the dropdown box. Down to the Finalists One of the 2 or 3 products at the top of the list should be a good choice for your new system. Scan the reviews to see if the general buzz makes you comfortable with the component. Use the sidebar links and search to filter the results if the top sellers are out of your price range or you need to refine the specs. Judge the Judges When you scan the item reviews look at the range of ratings, you want to see more than 100 reviews with mostly 4 and 5 star ratings. Steer clear of items that have a high average but also have a lot of low ratings. Avoid very new items and watch out for older items that are on special and may be closing out. You want something that's been out for 6 months or a year. The price will be lower and the reviews will be more realistic. Pro-Choice When you're satisfied that the item is what you want and the price is right, add it to your wishlist. Foreach component in system-shopping-list do Repeat this process for each item. It's fast and fun. If item == Monitor { search("Dell.com") }; Dell often has good monitor specials, so you might want to check that site for monitors. The best Dell deals are usually found on sites like techbargains.com and DealFire.com. Go Forth and Multi-buy When you're finished you'll have a solid list of popular components that are favored by enthusiastic system builders that frequent NewEgg.com. Order them from NewEgg.com or your favorite dealer and get building! A: Not looking to travel. I'd rather get a powerful desktop for my dollar. I have a nice big panel here so no problem with that. The majority of my development is ASP.NET stuff with some winforms projects. A: Jeff built an Ultimate Developer Rig for Scott Hanselman a while back. You can check out his requirements and see if it matches closely to what you are looking for. From what you've mentioned, an Intel Q9450, 4 or 8gigs of ram and a couple good sized hard drives will suit you well. I would say there is no reason not to get Vista x64 at this point. The ability to utilize more than 3.2gb of ram is very important for a developer. If you're in the more than two monitor club, you'll need two video cards as well. Hope this helps! 
A: I recently built a version of the UDR as well but used Vista x64. It works great with the VMs. Just get lots of memory (8gbs) and fast hard drives. I've heard good things about Win Server 2008 but not sure if driver support is available. On an older Dell laptop I tried installing WinServer 2008 and it kept crashing on the nvidia drivers. Good luck. A: People are probably going to yell at me...but I've found that Vista 64 is mostly worth it. The main reason for me though is that I'm always maxing out my memory and having a 64bit OS allows me to go past the <4GB limit of 32bit. But even if you don't get 64bit, just buy 2 2GB RAM cards anyways....you will be able to use most of it (my system shows 3.5GB on 32bit) and then you've got it if you upgrade later and (if your system has 4 slots) you'll have room to expand to 8GB later on.... A: There are some additional questions that would make our answers more complete. Are you going to want to travel with it? How important is screen real estate to you? Will you be doing interpreted or compiled? Is it web based development, or client based? I've seen some great deals on 17" HP laptops lately - one at Best Buy that had 4GB of RAM and a monster hard drive along with a 2.4+ Ghz Core 2 Duo for roughly $800 after tax. A: You didn't provide a budget or other considerations like sound footprint. You also didn't say if you actually can use more than a few cores at one time with the applications you are developing. So, everything below is a guess. If you have the budget, the Mac Pro with Bootcamp (or a vm if you are so inclined) might be a consideration. You won't want to upgrade your HDD or memory from Apple, but, the parts are easy enough to find at Newegg. I know this seems a little crazy, but, you can get a good value if you need the dual processors at 4 cores each. It is currently $2800 for 2 x 2.8GHz 8 cores total.
Choosing a new development machine
I'm not sure how this question will be received here but let's give it a shot... It's time for me to get a new dev PC. What's the best choice these days? I typically have 2-3 Visual Studios open along with mail and all that stuff. Ideally I would imagine 2+ GB of RAM would be nice as my current XP box is dying. =) I hopped on the Dell site (my days of building PC's are behind me. I just need something that gets the job done.) and started browsing around only to be confused by all the processor choices. What does a typical dev box need these days? Duo? Quad? Is it worth going to 64 bit Vista as well? It's been a while since I got a new machine so I'm just looking for some guidance. Thanks
[ "I just built a quad core - 8 GB of RAM and run Server 2008 with Hyper-V on it. I have VMs for my build server, dev platform, and deployment options (XP, Vista, Server 2003/2008) with snapshots at the various service pack levels. What's nice is you can spin up a VM whenever you need it, and re-allocate the resources when you don't.. So if I want to have 4 or 5 GB of ram and four processors available for my dev platform, no problem.. when I need to test some installs, I can save my status and spin up my test machines.. (and it only ran about $800 US).\n", "Jeff's ultimate developer rig series is great, but it's out of date. If you want to build your own ultimate developer rig, you can do hours of research to get the perfect list or use the tricks below to come up with a great component list in a short time.\nCredits: Mehul taught me this method and it's a huge time-saver.\nThe Basic PC Builder Shopping List\nStart with the basic system builder shopping list:\n\nComputer case \nPower supply \nMotherboard CPU\nVideo card\nRAM\nHard Drive\nDVD-ROM\nMonitors \nOptional: Extra fans\nOptional: Windows\n\n(This list is good for most of us. Add/remove for your specific needs.)\nThe Short Version\nMake a wish list at NewEgg.com to track your component choices and estimate price. For each item on the shopping list above, go to the NewEgg.com category and list the top sellers sorted by most reviews. Read some reviews on the top 3 items listed and add one to your wish list. You may want to check Dell.com and deal sites for monitor options. When you're finished you'll have a solid list of great components that have been well reviewed by a large group of talented system builders.\nThe Detailed Version\nStart at Gear Geek Heaven:\nGo to NewEgg.com, create an account and start a wish list to keep track of your selections. NewEgg.com selection, prices and service are good, but you don't have to buy at NewEgg.com. You're going to use the site to keep track of your component choices and get a good price estimate. \nLet the Wisdom of the Geeks Narrow Your Options\nThe biggest problem with spec'ing a new developer rig is that there are too many options. To narrow your options, observe the behavior of a large group of hardware enthusiasts, record their preferences and use that data to guide your decision. (Everyone who comments at NewEgg.com isn't an expert, but there are many intelligent buyers here who write helpful reviews.)\nIn other words, find the top selling and best reviewed items on NewEgg.com, a popular hardware site for system builders.\nScore = (Sales-Rank + Review-Count) * Rating * Price\nNewEgg.com is the right place to learn what the system builders are doing, but it's not obvious at first glance how to do that. You'll have to drill down a bit to see the top selling items. You also need something more helpful than just the top selling items, you want gear that's been used and reviewed by a large group of active and enthusiastic gear geeks so you'll want to factor in customer reviews, too.\nFind Top sellers in the item category, then sort by Most Reviews\nUse the NewEgg.com top level menu to navigate to the category for that item type. Then use the left sidebar menu to drill down to a little more specific sub-category. Click the Top Sellers link on the left sidebar to list the top selling items for that category. Then sort by \"Most Reviews\" by selecting the dropdown next to the search box on the upper right part of the page. 
Don't input any search text.\nHands-On Example\nExample: On the top menu bar of NewEgg.com, select Computer Hardware/Motherboards then click a sub-category linke on the left sidebar like Intel Motherboards.In the sub-category, you should see an option on the left sidebar for \"Top Sellers\" select that link to list the top selling items in that category. The search listing should now show the top selling items in the category. Sort this listing by \"Most Reviews\". At the top of listings on the upper right is a search box, next to the search box is the dropdown box with the option, \"Most Reviews\". Leave the search box blank and select the \"Most Reviews\" sort option from the dropdown box.\nDown to the Finalists\nOne of the 2 or 3 products at the top of the list should be a good choice for your new system. Scan the reviews to see if the general buzz makes you comfortable with the component. Use the sidebar links and search to filter the results if the top sellers are out of your price range or you need to refine the specs.\nJudge the Judges\nWhen you scan the item reviews look at the range of ratings, you want to see more than 100 reviews with mostly 4 and 5 star ratings. Steer clear of items that have a high average but also have a lot of low ratings. Avoid very new items and watch out for older items that are on special and may be closing out. You want something that's been out for 6 months or a year. The price will be lower and the reviews will be more realistic.\nPro-Choice\nWhen you're satisified that the item is what you want and the price is right, add it to your wishlist.\nForeach component in system-shopping-list do\nRepeat this process for each item. It's fast and fun.\nIf item == Monitor { search(\"Dell.com\") };\nDell often has good monitor specials, so you might want to check that site for monitors. The best Dell deals are usually found on sites like techbargains.com and DealFire.com.\nGo Forth and Multi-buy\nWhen you're finished you'll have a solid list of popular components that are favored by enthusiastic system builders that frequent NewEgg.com. Order them from NewEgg.com or your favorite dealer and get building! \n", "Not looking to travel. I'd rather get a powerful desktop for my dollar. I have a nice big panel here so problem with that. The majority of my development is ASP.NET stuff with some winforms projects.\n", "Jeff built an Ultimate Developer Rig for Scott Hanselman a while back. You can check out his requirements and see if it matches closely to what you are looking for.\nFrom what you've mentioned, an Intel Q9450, 4 or 8gigs of ram and a couple good sized hard drives will suit you well. I would say there is no reason not to get Vista x64 at this point. The ability to utilize more than 3.2gb of ram is very important for a developer.\nIf you're in the more than two monitor club, you'll need two video cards as well.\nHope this helps!\n", "I recently built a version of the UDR as well but used Vista x64. It works great with the VMs. Just get lots of memory (8gbs) and fast hard drives. I've heard good things about Win Server 2008 but not sure if driver support is available. On a older dell laptop that I tried installing WinServer 2008 and it kept crashing on the nvidia drivers. Good luck.\n", "People are probably going to yell at me...but I've found that Vista 64 is mostly worth it. 
The main reason for me though is that I'm always maxing out my memory and having a 64bit OS allows me to go past the <4GB limit of 32bit.\nBut even if you don't get 64bit, just buy 2 2GB RAM cards anyways....you will be able to use most of it (my system shows 3.5GB on 32bit) and then you've got it for if you upgrade later and (if your system has 4 slots) you'll have room to expand to 8GB later on....\n", "There are some additional questions that would make our answers more complete. \n\nAre you going to want to travel with\nit?\nHow important is screen real estate\nto you?\nWill you be doing interpreted or\ncompiled?\nIs it web based development, or\nclient based?\n\nI've seen some great deals on 17\" HP laptops lately - one at Best Buy that had 4GB of RAM and a monster hard drive along with a 2.4+ Ghz Core 2 Duo for roughly $800 after tax.\n", "You didn't provide a budget or other considerations like sound footprint. You also didn't say if you actually can use more than a few cores at one time with the applications you are developing. So, everything below is a guess.\nIf you have the budget, the Mac Pro with Bootcamp (or a VM if you are so inclined) might be a consideration. You won't want to upgrade your HDD or memory from Apple, but, the parts are easy enough to find at Newegg.\nI know this seems a little crazy, but, you can get a good value if you need the dual processors at 4 cores each. It is currently $2800 for 2 x 2.8GHz 8 cores total.\n" ]
[ 3, 3, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "cpu" ]
stackoverflow_0000015573_cpu.txt
Q: Windows Form Ordering using MDILayout I have a very specific problem using C# and a Windows MDI Form application. I want to display two (or more) images to the user, a 'left' and a 'right' image. The names of the images are concealed from the user, and then the user selects which image they prefer (this is part of a study involving medical image quality, so the user has to be blinded from possibly relevant capture parameters which might be revealed in the image name). Instead of showing the actual names, substitute names like 'image 0' and 'image 1' (etc) are shown to the user. Whenever I use the standard MDILayout.TileVertical or TileHorizontal, the images are loaded in reverse order. For example, if I have image 0 and image 1, they are displayed Image 1 Image 0 Three or more images would be something like 2 1 0 or 3 2 1 0 And so forth. The problem is, my users are confused by this right to leftness, and if I have another dialog box that asks them which image is better (or to rate the displayed images), they always confuse the order of images on the screen with the order of images in the dialog box. That is, if I just order the images 0 1 2 3 etc in a ratings dialog, they assume that image 3 as it's displayed is image 0 in the MDI parent window, image 2 is image 1, etc-- they read left to right, and the images are being displayed right to left. If I reorder the tabs in the ratings dialog box to reflect the order on the screen, that just confuses them further ("Why is image 3 before image 2?") and the results come out in the wrong order, and are generally unusable. So, how do I force the ordering of displayed windows using MDILayout in C#? Do I have to do it by hand, or is there some switch I can send to the layout manager? Thanks! A: Why are you using an MDI interface? Surely a single window with a TableLayoutPanel or similar providing layout would be more suitable. The only reason you'd want to use a MDI layout is to allow the users to move the windows, which as far as I can tell from your description of the problem isn't desirable anyway? A: Another idea would be to put the actual rating mechanism at the bottom of each child window. So the answer is actually attached to the picture on their child windows instead of having the answers in their own area. A: Could you avoid this problem by (before displaying the images) you: Put the image references in a structure (array or similar). Have a recursive function build a reverse order structure (or reorder the original). Use the new reversed order structure to build your child windows as before. It would add one more layer but might solve your problem if no one finds the reverse layout order switch soon enough. A: I strongly recommend following Groky's advice and using a single-form interface rather than MDI for this. If you must use MDI, you need to know that the MDI layout methods use the Z-order of MDI forms to determine where the forms end up. For example, if image 2 is behind image 1, then image 1 will be on the left and image 2 will be on the right. The most logical way to cause this to happen would be to load image 2's form, then image 1's form, then do the MDI layout. You can also use the ActivateMdiChild method to put the forms in a particular order (activating one form puts the other forms behind it). It's complicated and error-prone, and I strongly recommend having a two-pane interface on a single form instead, but this will work. A: Thanks Owen and Groky, but the Single-Form interface is just not going to work. 
First, I already have the display code in the MDI format, so that rewrite would require a very, very large rewrite of the code. It took me about three weeks to write the basics of the app a while ago; these aren't jpgs I'm showing here, these are DCM images, and each one is a good 30 mb, with a variety of support tools that I haven't seen outside of medical imaging. Second, some radiologists don't like split screening for image comparison, and others require it. As such, to accommodate both kinds of users, I set this up with tiling, but then the user can maximize images and then switch between them. So, MDI is the right approach for that differing set of tastes; a single interface with a very complicated set of tab controls just sounds like a nightmare compared to an already extant and (for the most part) working system. However, since I do control the way in which images are displayed, I can force the z-ordering, and then that should work, right? That's the basis of Fred and Owen's answers, if I'm reading them properly. The user enters 'evaluation mode', and then the program loads the images, shows them, and only once the user has entered an evaluation are the images closed. Given that constraint, I can probably enforce a particular z ordering (maybe by looping from length to 0 rather than from 0 to length).
Windows Form Ordering using MDILayout
I have a very specific problem using C# and a Windows MDI Form application. I want to display two (or more) images to the user, a 'left' and a 'right' image. The names of the images are concealed from the user, and then the user selects which image they prefer (this is part of a study involving medical image quality, so the user has to be blinded from possibly relevant capture parameters which might be revealed in the image name). Instead of showing the actual names, substitute names like 'image 0' and 'image 1' (etc) are shown to the user. Whenever I use the standard MDILayout.TileVertical or TileHorizontal, the images are loaded in reverse order. For example, if I have image 0 and image 1, they are displayed Image 1 Image 0 Three or more images would be something like 2 1 0 or 3 2 1 0 And so forth. The problem is, my users are confused by this right to leftness, and if I have another dialog box that asks them which image is better (or to rate the displayed images), they always confuse the order of images on the screen with the order of images in the dialog box. That is, if I just order the images 0 1 2 3 etc in a ratings dialog, they assume that image 3 as it's displayed is image 0 in the MDI parent window, image 2 is image 1, etc-- they read left to right, and the images are being displayed right to left. If I reorder the tabs in the ratings dialog box to reflect the order on the screen, that just confuses them further ("Why is image 3 before image 2?") and the results come out in the wrong order, and are generally unusable. So, how do I force the ordering of displayed windows using MDILayout in C#? Do I have to do it by hand, or is there some switch I can send to the layout manager? Thanks!
[ "Why are you using an MDI interface? Surely a single window with a TableLayoutPanel or similar providing layout would be more suitable. The only reason you'd want to use a MDI layout is to allow the users to move the windows, which as far as I can tell from your description of the problem isn't desirable anyway?\n", "Another idea would be to put the actual rating mechanism at the bottom of each child window. So the answer is actually attached to the picture on their child windows instead of having the answers in their own area.\n", "Could you avoid this problem by (before displaying the images) you:\n\nPut the image references in a structure (array or similar).\nHave a recursive function build a reverse order structure (or reorder the original).\nUse the new reversed order structure to build your child windows as before.\n\nIt would add one more layer but might solve your problem if no one finds the reverse layout order switch soon enough.\n", "I strongly recommend following Groky's advice and using a single-form interface rather than MDI for this.\nIf you must use MDI, you need to know that the MDI layout methods use the Z-order of MDI forms to determine where the forms end up. For example, if image 2 is behind image 1, then image 1 will be on the left and image 2 will be on the right. The most logical way to cause this to happen would be to load image 2's form, then image 1's form, then do the MDI layout. You can also use the ActivateMdiChild method to put the forms in a particular order (activating one form puts the other forms behind it).\nIt's complicated and error-prone, and I strongly recommend having a two-pane interface on a single form instead, but this will work.\n", "Thanks Owen and Groky, but the Single-Form interface is just not going to work. First, I already have the display code in the MDI format, so that rewrite would require a very, very large rewrite of the code. It took me about three weeks to write the basics of the app a while ago; these aren't jpgs I'm showing here, these are DCM images, and each one is a good 30 mb, with a variety of support tools that I haven't seen outside of medical imaging.\nSecond, some radiologists don't like split screening for image comparison, and others require it. As such, to accommodate both kinds of users, I set this up with tiling, but then the user can maximize images and then switch between them. So, MDI is the right approach for that differing set of tastes; a single interface with a very complicated set of tab controls just sounds like a nightmare compared to an already extant and (for the most part) working system.\nHowever, since I do control the way in which images are displayed, I can force the z-ordering, and then that should work, right? That's the basis of Fred and Owen's answers, if I'm reading them properly. The user enters 'evaluation mode', and then the program loads the images, shows them, and only once the user has entered an evaluation are the images closed. Given that constraint, I can probably enforce a particular z ordering (maybe by looping from length to 0 rather than from 0 to length).\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "c#", "mdi", "winforms" ]
stackoverflow_0000064029_c#_mdi_winforms.txt
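The z-order advice in the answers above lends itself to a short sketch. The following C# fragment is hypothetical (not the asker's code): it shows the blinded children in creation order, then activates them in reverse so that "Image 0" ends up topmost in the z-order and tiles leftmost. The left-to-right mapping is inferred from the behavior described in the question and accepted answer, so it is worth verifying against the target framework version:

```csharp
// Minimal sketch of forcing MDI tile order via child z-order.
// Assumes mdiParent.IsMdiContainer == true; names are illustrative.
using System.Windows.Forms;

public static class ComparisonLayout
{
    public static void ShowBlindedImages(Form mdiParent, string[] imagePaths)
    {
        Form[] children = new Form[imagePaths.Length];
        for (int i = 0; i < imagePaths.Length; i++)
        {
            children[i] = new Form { MdiParent = mdiParent, Text = "Image " + i };
            // (real code would load the DICOM image into this child here)
            children[i].Show();
        }

        // Activate from last to first so that "Image 0" is activated last,
        // sits highest in the z-order, and is tiled leftmost.
        for (int i = children.Length - 1; i >= 0; i--)
            children[i].Activate();

        mdiParent.LayoutMdi(MdiLayout.TileVertical);
    }
}
```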
Q: How would I host an external application in WPF? How can I host a (.Net, Java, VB6, MFC, etc) application in a WPF window? I have a need to use WPF windows to wrap external applications and control the window size and location. Does anyone have any ideas on how to accomplish this or a direction to research in? A: Use a HwndHost to host the outside window in your application. A: This article explains how to use HwndHost along with a few other Win32 API calls to accomplish the task.
How would I host an external application in WPF?
How can I host a (.Net, Java, VB6, MFC, etc) application in a WPF window? I have a need to use WPF windows to wrap external applications and control the window size and location. Does anyone have any ideas on how to accomplish this or a direction to research in?
[ "Use a HwndHost to host the outside window in your application.\n", "This article explains how to use HwndHost along with a few other Win32 API calls to accomplish the task.\n" ]
[ 2, 1 ]
[]
[]
[ "external", "hosting", "wpf" ]
stackoverflow_0000068072_external_hosting_wpf.txt
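A hedged sketch of the HwndHost route from the answers: re-parent the external application's top-level window into the WPF visual tree. The P/Invoke declarations are standard user32 calls; whether a given application tolerates being re-parented varies by app, so treat this as a starting point rather than a drop-in solution:

```csharp
// Minimal sketch: host an already-running app's window inside WPF.
// Obtain the handle elsewhere, e.g. from Process.MainWindowHandle.
using System;
using System.Runtime.InteropServices;
using System.Windows.Interop;

public class ExternalAppHost : HwndHost
{
    private readonly IntPtr _externalHwnd;

    public ExternalAppHost(IntPtr externalWindowHandle)
    {
        _externalHwnd = externalWindowHandle;
    }

    [DllImport("user32.dll")]
    private static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);
    [DllImport("user32.dll")]
    private static extern int GetWindowLong(IntPtr hWnd, int nIndex);
    [DllImport("user32.dll")]
    private static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong);

    private const int GWL_STYLE = -16;
    private const int WS_CHILD = 0x40000000;
    private const int WS_POPUP = unchecked((int)0x80000000);

    protected override HandleRef BuildWindowCore(HandleRef hwndParent)
    {
        // Turn the external top-level window into a child of the WPF window.
        int style = GetWindowLong(_externalHwnd, GWL_STYLE);
        SetWindowLong(_externalHwnd, GWL_STYLE, (style | WS_CHILD) & ~WS_POPUP);
        SetParent(_externalHwnd, hwndParent.Handle);
        return new HandleRef(this, _externalHwnd);
    }

    protected override void DestroyWindowCore(HandleRef hwnd)
    {
        SetParent(_externalHwnd, IntPtr.Zero); // release the window on teardown
    }
}

// Usage sketch: place the host in any WPF container, e.g.
//   someBorder.Child = new ExternalAppHost(process.MainWindowHandle);
```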
Q: How to display a form in any site's pages using a bookmarklet (like Note in Google Reader)? In Google Reader, you can use a bookmarklet to "note" a page you're visiting. When you press the bookmarklet, a little Google form is displayed on top of the current page. In the form you can enter a description, etc. When you press Submit, the form submits itself without leaving the page, and then the form disappears. All in all, a very smooth experience. I obviously tried to take a look at how it's done, but the most interesting parts are minified and unreadable. So... Any ideas on how to implement something like this (on the browser side)? What issues are there? Existing blog posts describing this? A: Aupajo has it right. I will, however, point you towards a bookmarklet framework I worked up for our site (www.iminta.com). The bookmarklet itself reads as follows: javascript:void((function(){ var e=document.createElement('script'); e.setAttribute('type','text/javascript'); e.setAttribute('src','http://www.iminta.com/javascripts/new_bookmarklet.js?noCache='+new%20Date().getTime()); document.body.appendChild(e) })()) This just injects a new script into the document that includes this file: http://www.iminta.com/javascripts/new_bookmarklet.js It's important to note that the bookmarklet creates an iframe, positions it, and adds events to the document to allow the user to do things like hit escape (to close the window) or to scroll (so it stays visible). It also hides elements that don't play well with z-positioning (flash, for example). Finally, it facilitates communicating across to the javascript that is running within the iframe. In this way, you can have a close button in the iframe that tells the parent document to remove the iframe. This kind of cross-domain stuff is a bit hacky, but it's the only way (I've seen) to do it. Not for the faint of heart; if you're not good at JavaScript, prepare to struggle. A: At its very basic level it will be using createElement to create the elements to insert into the page and appendChild or insertBefore to insert them into the page. A: You can use a simple bookmarklet to add a <script> tag which loads an external JavaScript file that can push the necessary elements to the DOM and present a modal window to the user. The form is submitted via an AJAX request, it's processed server-side, and returns with success or a list of errors the user needs to correct. So the bookmarklet would look like: javascript:code-to-add-script-tag-and-init-the-script; The external script would include: The ability to add an element to the DOM The ability to update innerHTML of that element to be the markup you want to display for the user Handling for the AJAX form processing The window effect can be achieved with CSS positioning. As for one complete resource for this specific task, you'd be pretty lucky to find anything. But have a look at the smaller, individual parts and you'll find plenty of resources. Have a look around for information on modal windows, adding elements to the DOM, and AJAX processing.
How to display a form in any site's pages using a bookmarklet (like Note in Google Reader)?
In Google Reader, you can use a bookmarklet to "note" a page you're visiting. When you press the bookmarklet, a little Google form is displayed on top of the current page. In the form you can enter a description, etc. When you press Submit, the form submits itself without leaving the page, and then the form disappears. All in all, a very smooth experience. I obviously tried to take a look at how it's done, but the most interesting parts are minified and unreadable. So... Any ideas on how to implement something like this (on the browser side)? What issues are there? Existing blog posts describing this?
[ "Aupajo has it right. I will, however, point you towards a bookmarklet framework I worked up for our site (www.iminta.com).\nThe bookmarklet itself reads as follows:\njavascript:void((function(){\n var e=document.createElement('script');\n e.setAttribute('type','text/javascript');\n e.setAttribute('src','http://www.iminta.com/javascripts/new_bookmarklet.js?noCache='+new%20Date().getTime());\n document.body.appendChild(e)\n})())\n\nThis just injects a new script into the document that includes this file:\nhttp://www.iminta.com/javascripts/new_bookmarklet.js\nIt's important to note that the bookmarklet creates an iframe, positions it, and adds events to the document to allow the user to do things like hit escape (to close the window) or to scroll (so it stays visible). It also hides elements that don't play well with z-positioning (flash, for example). Finally, it facilitates communicating across to the javascript that is running within the iframe. In this way, you can have a close button in the iframe that tells the parent document to remove the iframe. This kind of cross-domain stuff is a bit hacky, but it's the only way (I've seen) to do it.\nNot for the faint of heart; if you're not good at JavaScript, prepare to struggle.\n", "At its very basic level it will be using createElement to create the elements to insert into the page and appendChild or insertBefore to insert them into the page.\n", "You can use a simple bookmarklet to add a <script> tag which loads an external JavaScript file that can push the necessary elements to the DOM and present a modal window to the user. The form is submitted via an AJAX request, it's processed server-side, and returns with success or a list of errors the user needs to correct.\nSo the bookmarklet would look like:\njavascript:code-to-add-script-tag-and-init-the-script;\nThe external script would include:\n\nThe ability to add an element to the DOM\nThe ability to update innerHTML of that element to be the markup you want to display for the user\nHandling for the AJAX form processing\n\nThe window effect can be achieved with CSS positioning.\nAs for one complete resource for this specific task, you'd be pretty lucky to find anything. But have a look at the smaller, individual parts and you'll find plenty of resources. Have a look around for information on modal windows, adding elements to the DOM, and AJAX processing.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "bookmarklet", "browser", "client_side", "javascript" ]
stackoverflow_0000067713_bookmarklet_browser_client_side_javascript.txt
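To make the injected-script idea concrete, here is a rough sketch of what such a script could do with plain DOM APIs. The endpoint URL and field names are hypothetical, and the XHR needs a same-origin (or CORS-enabled) endpoint; the iframe trick in the first answer exists precisely to sidestep that cross-domain restriction:

```javascript
// Hypothetical body of the script the bookmarklet injects: overlay a small
// form, submit it via XHR without leaving the page, remove it on Escape.
(function () {
  var box = document.createElement('div');
  box.style.cssText = 'position:fixed;top:20px;right:20px;z-index:2147483647;' +
                      'background:#fff;border:1px solid #888;padding:10px;';
  box.innerHTML = '<form><textarea name="note" rows="4" cols="30"></textarea>' +
                  '<br/><input type="submit" value="Save"/></form>';
  document.body.appendChild(box);

  function close() { if (box.parentNode) box.parentNode.removeChild(box); }

  document.addEventListener('keyup', function (e) {
    if (e.keyCode === 27) close();            // Escape dismisses the form
  }, false);

  box.getElementsByTagName('form')[0].onsubmit = function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://example.com/notes'); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onload = close;                        // hide the form once saved
    xhr.send('url=' + encodeURIComponent(location.href) +
             '&note=' + encodeURIComponent(this.elements.note.value));
    return false;                              // stay on the current page
  };
})();
```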
Q: WSE 2.0 raises wse910 error I had to use a Microsoft Web Services Enhancements 2.0 service and it raised wse910 error when the time difference between the server and client was more than 5 minutes. I read in many places that setting the timeToleranceInSeconds, ttlInSeconds and defaultTtlInSeconds values should help, but only setting the clock of the client machine solved the problem. Any experiences? A: Does this help? It also mentions setting timeToleranceInSeconds and defaultTtlInSeconds on the client as well as the server.
WSE 2.0 raises wse910 error
I had to use a Microsoft Web Services Enhancements 2.0 service and it raised wse910 error when the time difference between the server and client was more than 5 minutes. I read in many places that setting the timeToleranceInSeconds, ttlInSeconds and defaultTtlInSeconds values should help, but only setting the clock of the client machine solved the problem. Any experiences?
[ "Does this help? It also mentions setting timeToleranceInSeconds and defaultTtlInSeconds on the client as well as the server.\n" ]
[ 0 ]
[]
[]
[ "wse2.0" ]
stackoverflow_0000062027_wse2.0.txt
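For reference, the settings named in the question live in the WSE 2.0 configuration section. The sketch below shows the usual shape of that section; treat the exact nesting and the section-handler type string as assumptions to verify against the WSE 2.0 documentation, and note the answer's point that the values must be applied on both the client and the server:

```xml
<!-- Illustrative values; the default tolerance is 300 seconds (5 minutes),
     which matches the failure threshold described in the question. -->
<configuration>
  <configSections>
    <section name="microsoft.web.services2"
             type="Microsoft.Web.Services2.Configuration.WebServicesConfiguration, Microsoft.Web.Services2" />
  </configSections>
  <microsoft.web.services2>
    <security>
      <!-- allowed clock skew between sender and receiver -->
      <timeToleranceInSeconds value="1800" />
      <!-- lifetime of outgoing timestamps -->
      <defaultTtlInSeconds value="1800" />
    </security>
  </microsoft.web.services2>
</configuration>
```

Synchronizing the machine clocks (for example with the Windows Time service) remains the more robust fix, which matches what the asker observed.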
Q: ResourceBundle from Java/Struts and replace expressions If I have a Resource bundle property file: A.properties: thekey={0} This is a test And then I have java code that loads the resource bundle: ResourceBundle labels = ResourceBundle.getBundle("A", currentLocale); labels.getString("thekey"); How can I replace the {0} text with some value labels.getString("thekey", "Yes!!!"); Such that the output comes out as: Yes!!! This is a test. There are no methods that are part of Resource Bundle to do this. Also, I am in Struts, is there some way to use MessageProperties to do the replacement. A: The class you're looking for is java.text.MessageFormat; specifically, calling MessageFormat.format("{0} This {1} a test", new Object[] {"Yes!!!", "is"}); or MessageFormat.format("{0} This {1} a test", "Yes!!!", "is"); will return "Yes!!! This is a test" [Unfortunately, I can't help with the Struts connection, although this looks relevant.] A: There is the class org.apache.struts.util.MessageResources with various methods getMessage, some of them take arguments to insert to the actual message. Eg.: messageResources.getMessage("thekey", "Yes!!!");
ResourceBundle from Java/Struts and replace expressions
If I have a Resource bundle property file: A.properties: thekey={0} This is a test And then I have java code that loads the resource bundle: ResourceBundle labels = ResourceBundle.getBundle("A", currentLocale); labels.getString("thekey"); How can I replace the {0} text with some value labels.getString("thekey", "Yes!!!"); Such that the output comes out as: Yes!!! This is a test. There are no methods that are part of Resource Bundle to do this. Also, I am in Struts, is there some way to use MessageProperties to do the replacement.
[ "The class you're looking for is java.text.MessageFormat; specifically, calling\nMessageFormat.format(\"{0} This {1} a test\", new Object[] {\"Yes!!!\", \"is\"});\n\nor\nMessageFormat.format(\"{0} This {1} a test\", \"Yes!!!\", \"is\");\n\nwill return\n\"Yes!!! This is a test\"\n\n[Unfortunately, I can't help with the Struts connection, although this looks relevant.]\n", "There is the class org.apache.struts.util.MessageResources with various methods getMessage, some of them take arguments to insert to the actual message.\nEg.:\nmessageResources.getMessage(\"thekey\", \"Yes!!!\");\n\n" ]
[ 11, 2 ]
[]
[]
[ "jakarta_ee", "java", "resourcebundle", "struts" ]
stackoverflow_0000068018_jakarta_ee_java_resourcebundle_struts.txt
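Combining the two answers gives a small, self-contained helper; the class and method names below are our own invention, not part of Struts:

```java
// Look up the pattern in the bundle, then substitute with MessageFormat.
import java.text.MessageFormat;
import java.util.Locale;
import java.util.ResourceBundle;

public class Messages {
    public static String get(String key, Locale locale, Object... args) {
        ResourceBundle labels = ResourceBundle.getBundle("A", locale);
        return MessageFormat.format(labels.getString(key), args);
    }

    public static void main(String[] args) {
        // With A.properties containing: thekey={0} This is a test
        System.out.println(get("thekey", Locale.getDefault(), "Yes!!!"));
        // prints: Yes!!! This is a test
    }
}
```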
Q: A checklist for fixing .NET applications to SQL Server timeout problems and improve execution time A checklist for improving execution time between .NET code and SQL Server. Anything from the basic to weird solutions is appreciated. Code: Change default timeout in command and connection by avgbody. Use stored procedure calls instead of inline sql statement by avgbody. Look for blocking/locking using Activity monitor by Jay Shepherd. SQL Server: Watch out for parameter sniffing in stored procedures by AlexCuse. Beware of dynamically growing the database by Martin Clarke. Use Profiler to find any queries/stored procedures taking longer than 100 milliseconds by BradO. Increase transaction timeout by avgbody. Convert dynamic stored procedures into static ones by avgbody. Check how busy the server is by Jay Shepherd. A: In the past some of my solutions have been: Fix the default time out settings of the sqlcommand: Dim myCommand As New SqlCommand("[dbo].[spSetUserPreferences]", myConnection) myCommand.CommandType = CommandType.StoredProcedure myCommand.CommandTimeout = 120 Increase connection timeout string: Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True;User ID=User;Password=password;Connection Timeout=120 Increase transaction time-out in sql-server 2005 In management studio’s Tools > Option > Designers Increase the “Transaction time-out after:” even if “Override connection string time-out value for table designer updates” checked/unchecked. Convert dynamic stored procedures into static ones Make the code call a stored procedure instead of writing an inline sql statement in the code. A: A weird "solution" for complaints on long response time is to have a more interesting progress bar. Meaning, work on the user's feeling. One example is the Windows Vista wait icon. That fast rotating circle gives the feeling things are going faster. Google uses the same trick on Android (at least the version I've seen). However, I suggest trying to address the technical problem first, and working on human behavior only when you're out of choices. A: Are you using stored procedures? If so you should watch out for parameter sniffing. In certain situations this can make for some very long running queries. Some reading: http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx http://blogs.msdn.com/khen1234/archive/2005/06/02/424228.aspx A: First and foremost - Check the actual query being run. I use SQL Server Profiler as I setup through my program and check that all my queries are using correct joins and referencing keys when I can. A: A few quick ones... Check Processor use of server to see if it's just too busy Look for blocking/locking going on with the Activity monitor Network issues/performance A: Run Profiler to measure the execution time of your queries. Check application logging for any deadlocks. A: I like using SQL Server Profiler as well. I like to setup a trace on a client site on their database server for a good 15-30 minute chunk of time in the midst of the business day and log all queries/stored procs with a duration > 100 milliseconds. That's my criteria anyway for "long-running" queries. A: Weird one that applied to SQL Server 2000 that might still apply today: Make sure that you aren't trying to dynamically grow the database in production. There comes a point where the amount of time it takes to allocate that extra space and your normal load running will cause your queries to timeout (and the growth too!)
A checklist for fixing .NET applications to SQL Server timeout problems and improve execution time
A checklist for improving execution time between .NET code and SQL Server. Anything from the basic to weird solutions is appreciated. Code: Change default timeout in command and connection by avgbody. Use stored procedure calls instead of inline sql statement by avgbody. Look for blocking/locking using Activity monitor by Jay Shepherd. SQL Server: Watch out for parameter sniffing in stored procedures by AlexCuse. Beware of dynamically growing the database by Martin Clarke. Use Profiler to find any queries/stored procedures taking longer than 100 milliseconds by BradO. Increase transaction timeout by avgbody. Convert dynamic stored procedures into static ones by avgbody. Check how busy the server is by Jay Shepherd.
[ "In the past some of my solutions have been:\n\nFix the default time out settings of the sqlcommand:\nDim myCommand As New SqlCommand(\"[dbo].[spSetUserPreferences]\", myConnection)\nmyCommand.CommandType = CommandType.StoredProcedure\nmyCommand.CommandTimeout = 120\nIncrease connection timeout string:\nData Source=mydatabase;Initial Catalog=Match;Persist Security Info=True;User ID=User;Password=password;Connection Timeout=120\nIncrease transaction time-out in sql-server 2005\nIn management studio’s Tools > Option > Designers Increase the “Transaction time-out after:” even if “Override connection string time-out value for table designer updates” checked/unchecked. \nConvert dynamic stored procedures into static ones\nMake the code call a stored procedure instead of writing an inline sql statement in the code.\n\n", "A weird \"solution\" for complaints on long response time is to have a more interesting progress bar. Meaning, work on the user's feeling. One example is the Windows Vista wait icon. That fast rotating circle gives the feeling things are going faster. Google uses the same trick on Android (at least the version I've seen).\nHowever, I suggest trying to address the technical problem first, and working on human behavior only when you're out of choices.\n", "Are you using stored procedures? If so you should watch out for parameter sniffing. In certain situations this can make for some very long running queries. Some reading:\nhttp://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx\nhttp://blogs.msdn.com/khen1234/archive/2005/06/02/424228.aspx\n", "First and foremost - Check the actual query being run. I use SQL Server Profiler as I setup through my program and check that all my queries are using correct joins and referencing keys when I can.\n", "A few quick ones...\n\nCheck Processor use of server to see if it's just too busy\nLook for blocking/locking going on with the Activity monitor\nNetwork issues/performance\n\n", "Run Profiler to measure the execution time of your queries.\nCheck application logging for any deadlocks.\n", "I like using SQL Server Profiler as well. I like to setup a trace on a client site on their database server for a good 15-30 minute chunk of time in the midst of the business day and log all queries/stored procs with a duration > 100 milliseconds. That's my criteria anyway for \"long-running\" queries.\n", "Weird one that applied to SQL Server 2000 that might still apply today:\nMake sure that you aren't trying to dynamically grow the database in production. There comes a point where the amount of time it takes to allocate that extra space and your normal load running will cause your queries to timeout (and the growth too!)\n" ]
[ 5, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "sql_server", "timeout", "vb.net" ]
stackoverflow_0000067366_.net_c#_sql_server_timeout_vb.net.txt
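The parameter-sniffing item on the checklist has a classic T-SQL workaround: copy the parameters into local variables so the cached plan is not compiled around one atypical value. A hedged sketch follows; the procedure, table, and column names are hypothetical:

```sql
-- Local-variable copy defeats parameter sniffing for this procedure.
CREATE PROCEDURE dbo.spGetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    DECLARE @LocalCustomerId INT;
    SET @LocalCustomerId = @CustomerId;  -- optimizer can no longer sniff the value

    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END
-- On SQL Server 2005 and later, appending OPTION (RECOMPILE) to the problem
-- statement is an alternative way to get a plan for the actual value.
```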
Q: Is it safe to manipulate objects that I created outside my thread if I don't explicitly access them on the thread which created them? I am working on a Cocoa application and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread. Is it safe to access those objects even if I created them in the main thread without using locks if I don't explicitly access those objects while the thread is running? A: With Core Data, you should have a separate managed object context to use for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread into another thread and expect them to work. Furthermore, you cannot do your own locking for this; you must at minimum lock the managed object context the objects are in, as appropriate. But if those objects are bound to by your views and controls, there are no "hooks" that you can add that locking of the context to. There's no free lunch. Ben Trumbull explains some of the reasons why you need to use a separate context, and why "just reading" isn't as simple or as safe as you might think, in this great post from late 2004 on the webobjects-dev list. (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace "EC" with "NSManagedObjectContext" and "EOF" with "Core Data" in the meat of his message. The solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is "don't." If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save notification from one context to tell the other context what to re-fetch. -[NSManagedObjectContext refreshObject:mergeChanges:] is specifically designed to support this use. A: I believe that this is not safe to do with NSManagedObjects (or subclasses) that are managed by a CoreData NSManagedObjectContext. In general, CoreData may do many tricky things with the state of managed objects, including firing faults related to those objects in separate threads. In particular, [NSManagedObject initWithEntity:insertIntoManagedObjectContext:] (the designated initializer for NSManagedObjects as of OS X 10.5), does not guarantee that the returned object is safe to pass to another thread. Using CoreData with multiple threads is well documented on Apple's dev site. A: The whole point of using locks is to ensure that two threads don't try to access the same resource. If you can guarantee that through some other mechanism, go for it. A: Even if it's safe, it's not the best practice to use shared data between threads without synchronizing the access to those fields. It doesn't matter which thread created the object, but if more than one line of execution (thread/process) is accessing the object at the same time, it can lead to data inconsistency. If you're absolutely sure that only one thread will ever access this object, then it'd be safe to not synchronize the access. Even then, I'd rather put synchronization in my code now than wait till later when a change in the application puts a second thread sharing the same data without concern about synchronizing access. A: Yes, it's safe. 
A pretty common pattern is to create an object, then add it to a queue or some other collection. A second "consumer" thread takes items from the queue and does something with them. Here, you'd need to synchronize the queue but not the objects that are added to the queue. It's NOT a good idea to just synchronize everything and hope for the best. You will need to think very carefully about your design and exactly which threads can act upon your objects. A: Yes you can do it, it will be safe ... until the second programmer comes around and does not understand the same assumptions you have made. That second (or 3rd, 4th, 5th, ...) programmer is likely to start using the object in a non-safe way (in the creator thread). The problems caused could be very subtle and difficult to track down. For that reason alone, and because it's so tempting to use this object in multiple threads, I would make the object thread safe. To clarify, (thanks to those who left comments): By "thread safe" I mean programmatically devising a scheme to avoid threading issues. I don't necessarily mean devise a locking scheme around your object. You could find a way in your language to make it illegal (or very hard) to use the object in the creator thread. For example, limiting the scope, in the creator thread, to the block of code that creates the object. Once created, pass the object over to the user thread, making sure that the creator thread no longer has a reference to it. For example, in C++ void CreateObject() { Object* sharedObj = new Object(); PassObjectToUsingThread( sharedObj); // this function would be system dependent } Then in your creating thread, you no longer have access to the object after its creation, responsibility is passed to the using thread. A: Two things to consider are: You must be able to guarantee that the object is fully created and initialised before it is made available to other threads. There must be some mechanism by which the main (GUI) thread detects that the data has been loaded and all is well. To be thread safe this will inevitably involve locking of some kind.
Is it safe to manipulate objects that I created outside my thread if I don't explicitly access them on the thread which created them?
I am working on a Cocoa application and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread. Is it safe to access those objects even if I created them in the main thread without using locks if I don't explicitly access those objects while the thread is running?
[ "With Core Data, you should have a separate managed object context to use for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread into another thread and expect them to work. Furthermore, you cannot do your own locking for this; you must at minimum lock the managed object context the objects are in, as appropriate. But if those objects are bound to by your views and controls, there are no \"hooks\" that you can add that locking of the context to.\nThere's no free lunch.\nBen Trumbull explains some of the reasons why you need to use a separate context, and why \"just reading\" isn't as simple or as safe as you might think, in this great post from late 2004 on the webobjects-dev list. (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace \"EC\" with \"NSManagedObjectContext\" and \"EOF\" with \"Core Data\" in the meat of his message.\nThe solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is \"don't.\" If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save notification from one context to tell the other context what to re-fetch. -[NSManagedObjectContext refreshObject:mergeChanges:] is specifically designed to support this use.\n", "I believe that this is not safe to do with NSManagedObjects (or subclasses) that are managed by a CoreData NSManagedObjectContext. In general, CoreData may do many tricky things with the state of managed objects, including firing faults related to those objects in separate threads. In particular, [NSManagedObject initWithEntity:insertIntoManagedObjectContext:] (the designated initializer for NSManagedObjects as of OS X 10.5), does not guarantee that the returned object is safe to pass to another thread.\nUsing CoreData with multiple threads is well documented on Apple's dev site. \n", "The whole point of using locks is to ensure that two threads don't try to access the same resource. If you can guarantee that through some other mechanism, go for it.\n", "Even if it's safe, it's not the best practice to use shared data between threads without synchronizing the access to those fields. It doesn't matter which thread created the object, but if more than one line of execution (thread/process) is accessing the object at the same time, it can lead to data inconsistency.\nIf you're absolutely sure that only one thread will ever access this object, then it'd be safe to not synchronize the access. Even then, I'd rather put synchronization in my code now than wait till later when a change in the application puts a second thread sharing the same data without concern about synchronizing access.\n", "Yes, it's safe. A pretty common pattern is to create an object, then add it to a queue or some other collection. A second \"consumer\" thread takes items from the queue and does something with them. Here, you'd need to synchronize the queue but not the objects that are added to the queue.\nIt's NOT a good idea to just synchronize everything and hope for the best. You will need to think very carefully about your design and exactly which threads can act upon your objects.\n", "Yes you can do it, it will be safe\n... 
\nuntil the second programmer comes around and does not understand the same assumptions you have made. That second (or 3rd, 4th, 5th, ...) programmer is likely to start using the object in a non-safe way (in the creator thread). The problems caused could be very subtle and difficult to track down. For that reason alone, and because it's so tempting to use this object in multiple threads, I would make the object thread safe.\nTo clarify, (thanks to those who left comments):\nBy \"thread safe\" I mean programmatically devising a scheme to avoid threading issues. I don't necessarily mean devise a locking scheme around your object. You could find a way in your language to make it illegal (or very hard) to use the object in the creator thread. For example, limiting the scope, in the creator thread, to the block of code that creates the object. Once created, pass the object over to the user thread, making sure that the creator thread no longer has a reference to it.\nFor example, in C++\nvoid CreateObject()\n{\n Object* sharedObj = new Object();\n PassObjectToUsingThread( sharedObj); // this function would be system dependent\n}\n\nThen in your creating thread, you no longer have access to the object after its creation, responsibility is passed to the using thread.\n", "Two things to consider are:\n\nYou must be able to guarantee that the object is fully created and initialised before it is made available to other threads.\nThere must be some mechanism by which the main (GUI) thread detects that the data has been loaded and all is well. To be thread safe this will inevitably involve locking of some kind.\n\n" ]
[ 4, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "cocoa", "core_data", "macos", "multithreading" ]
stackoverflow_0000067154_cocoa_core_data_macos_multithreading.txt
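A minimal Objective-C sketch of the per-thread-context pattern from the accepted answer (fragments using the 10.5-era, manual-retain-release API; `coordinator` and `mainContext` are assumed to exist elsewhere):

```objc
#import <CoreData/CoreData.h>

// Import-thread side: a private context sharing the main coordinator.
- (void)runImportWithCoordinator:(NSPersistentStoreCoordinator *)coordinator
{
    NSManagedObjectContext *importContext = [[NSManagedObjectContext alloc] init];
    [importContext setPersistentStoreCoordinator:coordinator];

    // ... insert/update managed objects in importContext only ...

    NSError *error = nil;
    if (![importContext save:&error])
        NSLog(@"Import save failed: %@", error);
    [importContext release];
}

// Registered once, e.g. with:
// [[NSNotificationCenter defaultCenter] addObserver:self
//     selector:@selector(importContextDidSave:)
//     name:NSManagedObjectContextDidSaveNotification object:nil];
- (void)importContextDidSave:(NSNotification *)notification
{
    // NOTE: this fires on the saving (import) thread; real code forwards the
    // notification to the main thread (performSelectorOnMainThread:) before
    // touching the UI's context.
    [mainContext mergeChangesFromContextDidSaveNotification:notification];
}
```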
Q: Get mac address for remote computer under NT4 in C Is it possible to determine the MAC address of the originator of a remote connection under Windows NT 4? The remote PC opens a socket connection into my application and I can get the IP address. However I need to determine the MAC address from the information available from the socket such as the IP address of the remote device. I have tried using SendARP but this doesn't seem to be supported in Windows NT4. A: Try GetIpNetTable. This function is documented as supported as of NT 4.0 SP4. A: Hope the machine isn't too remote. MAC addresses will only be known for the local network (subnet).
Get mac address for remote computer under NT4 in C
Is it possible to determine the MAC address of the originator of a remote connection under Windows NT 4? The remote PC opens a socket connection into my application and I can get the IP address. However I need to determine the MAC address from the information available from the socket such as the IP address of the remote device. I have tried using SendARP but this doesn't seem to be supported in Windows NT4.
[ "Try GetIpNetTable. This function is documented as supported as of NT 4.0 SP4.\n", "Hope the machine isn't too remote. MAC addresses will only be known for the local network (subnet).\n" ]
[ 1, 0 ]
[]
[]
[ "c", "mac_address", "nt4" ]
stackoverflow_0000062868_c_mac_address_nt4.txt
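A hedged C sketch of the GetIpNetTable approach from the accepted answer (supported from NT 4.0 SP4; link with iphlpapi.lib). As the second answer warns, it can only succeed for peers whose ARP entries are cached, i.e. hosts on the local subnet:

```c
#include <windows.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

/* peer_ip is an IPv4 address in network byte order, e.g. from inet_addr(). */
int print_mac_for(DWORD peer_ip)
{
    ULONG size = 0;
    PMIB_IPNETTABLE table;
    DWORD i;

    GetIpNetTable(NULL, &size, FALSE);       /* first call: get required size */
    table = (PMIB_IPNETTABLE)malloc(size);
    if (!table) return 0;

    if (GetIpNetTable(table, &size, FALSE) == NO_ERROR) {
        for (i = 0; i < table->dwNumEntries; i++) {
            if (table->table[i].dwAddr == peer_ip) {
                DWORD j;
                for (j = 0; j < table->table[i].dwPhysAddrLen; j++)
                    printf("%02X%s", table->table[i].bPhysAddr[j],
                           j + 1 < table->table[i].dwPhysAddrLen ? "-" : "\n");
                free(table);
                return 1;
            }
        }
    }
    free(table);
    return 0;   /* not in the ARP cache (likely not on the local subnet) */
}
```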
Q: How to Track Queries on a Linq-to-sql DataContext In the herding code podcast 14 someone mentions that stackoverflow displayed the queries that were executed during a request at the bottom of the page. It sounds like an excellent idea to me. Every time a page loads I want to know what sql statements are executed and also a count of the total number of DB round trips. Does anyone have a neat solution to this problem? What do you think is an acceptable number of queries? I was thinking that during development I might have my application throw an exception if more than 30 queries are required to render a page. EDIT: I think I must not have explained my question clearly. During a HTTP request a web application might execute a dozen or more sql statements. I want to have those statements appended to the bottom of the page, along with a count of the number of statements. HERE IS MY SOLUTION: I created a TextWriter class that the DataContext can write to: public class Logger : StreamWriter { public string Buffer { get; private set; } public int QueryCounter { get; private set; } public Logger() : base(new MemoryStream()) {} public override void Write(string value) { Buffer += value + "<br/><br/>"; if (!value.StartsWith("--")) QueryCounter++; } public override void WriteLine(string value) { Buffer += value + "<br/><br/>"; if (!value.StartsWith("--")) QueryCounter++; } } In the DataContext's constructor I setup the logger: public HeraldDBDataContext() : base(ConfigurationManager.ConnectionStrings["Herald"].ConnectionString, mappingSource) { Log = new Logger(); } Finally, I use the Application_OnEndRequest event to add the results to the bottom of the page: protected void Application_OnEndRequest(Object sender, EventArgs e) { Logger logger = DataContextFactory.Context.Log as Logger; Response.Write("Query count : " + logger.QueryCounter); Response.Write("<br/><br/>"); Response.Write(logger.Buffer); } A: If you call .ToString() on a var query variable you get the sql. You can also use this in Debug in VS2008. Debug Visualizer ex: var query = from p in db.Table select p; MessageBox.Show(query.ToString()); A: System.IO.StreamWriter httpResponseStreamWriter = new StreamWriter(HttpContext.Current.Response.OutputStream); dataContext.Log = httpResponseStreamWriter; Stick that in your page and you'll get the SQL dumped out on the page. Obviously, I'd wrap that in a little method that you can enable/disable. A: I have a post on my blog that covers sending to log files, memory, the debug window or multiple writers. A: From Linq in Action Microsoft has a Query Visualizer tool that can be downloaded separately from VS 2008. It is at http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx
How to Track Queries on a Linq-to-sql DataContext
In the herding code podcast 14 someone mentions that stackoverflow displayed the queries that were executed during a request at the bottom of the page. It sounds like an excellent idea to me. Every time a page loads I want to know what sql statements are executed and also a count of the total number of DB round trips. Does anyone have a neat solution to this problem? What do you think is an acceptable number of queries? I was thinking that during development I might have my application throw an exception if more than 30 queries are required to render a page. EDIT: I think I must not have explained my question clearly. During a HTTP request a web application might execute a dozen or more sql statements. I want to have those statements appended to the bottom of the page, along with a count of the number of statements. HERE IS MY SOLUTION: I created a TextWriter class that the DataContext can write to: public class Logger : StreamWriter { public string Buffer { get; private set; } public int QueryCounter { get; private set; } public Logger() : base(new MemoryStream()) {} public override void Write(string value) { Buffer += value + "<br/><br/>"; if (!value.StartsWith("--")) QueryCounter++; } public override void WriteLine(string value) { Buffer += value + "<br/><br/>"; if (!value.StartsWith("--")) QueryCounter++; } } In the DataContext's constructor I setup the logger: public HeraldDBDataContext() : base(ConfigurationManager.ConnectionStrings["Herald"].ConnectionString, mappingSource) { Log = new Logger(); } Finally, I use the Application_OnEndRequest event to add the results to the bottom of the page: protected void Application_OnEndRequest(Object sender, EventArgs e) { Logger logger = DataContextFactory.Context.Log as Logger; Response.Write("Query count : " + logger.QueryCounter); Response.Write("<br/><br/>"); Response.Write(logger.Buffer); }
[ "If you call .ToString() on a var query variable you get the sql. You can also use this in Debug in VS2008. Debug Visualizer\nex:\nvar query = from p in db.Table\n select p;\n\nMessageBox.Show(query.ToString());\n\n", "System.IO.StreamWriter httpResponseStreamWriter = \nnew StreamWriter(HttpContext.Current.Response.OutputStream);\n\ndataContext.Log = httpResponseStreamWriter;\n\nStick that in your page and you'll get the SQL dumped out on the page. Obviously, I'd wrap that in a little method that you can enable/disable.\n", "I have a post on my blog that covers sending to log files, memory, the debug window or multiple writers.\n", "From Linq in Action\n\nMicrosoft has a Query Visualizer tool that can be downloaded separately from VS 2008. It is at http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx\n\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "asp.net", "linq_to_sql" ]
stackoverflow_0000029308_asp.net_linq_to_sql.txt
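As a lighter-weight variant of the Response-stream writer shown in the answers, the DataContext log can also be routed to the debugger output window. A minimal sketch follows; the class name is our own:

```csharp
using System.Diagnostics;
using System.IO;
using System.Text;

// A TextWriter that forwards DataContext.Log output to Debug output.
public class DebugTextWriter : TextWriter
{
    public override Encoding Encoding
    {
        get { return Encoding.UTF8; }
    }

    public override void Write(char value) { Debug.Write(value.ToString()); }
    public override void Write(string value) { Debug.Write(value); }
    public override void WriteLine(string value) { Debug.WriteLine(value); }
}

// usage, e.g. in the DataContext constructor shown above:
//   Log = new DebugTextWriter();
```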
Q: Doing away with Globals? I have a set of tree objects with a depth somewhere in the 20s. Each of the nodes in this tree needs access to its tree's root. A couple of solutions: Each node can store a reference to the root directly (wastes memory) I can compute the root at runtime by "going up" (wastes cycles) I can use static fields (but this amounts to globals) Can someone provide a design that doesn't use a global (in any variation) but is more efficient than #1 or #2 in both memory or cycles respectively? Edit: Since I have a Set of Trees, I can't simply store it in a static since it'd be hard to differentiate between trees. (thanks maccullt) A: Pass the root as a parameter to whichever functions in the node that need it. Edit: The options are really the following: Store the root reference in the node Don't store the root reference at all Store the root reference in a global Store the root reference on the stack (my suggestion, either visitor pattern or recursive) I think this is all the possibilities, there is no option 5. A: Why would you need to do away with globals? I understand the stigma of globals being bad and all, but sometimes just having a global data structure with all elements is the fastest solution. You make a trade-off: code clarity and fewer future problems for performance. That's the meaning behind 'Don't optimize yet'. Since you're in the optimize stage, sometimes it's necessary to cut out some readability and good programming practices in favor of performance. I mean, bitwise hacks aren't readable but they're fast. I'm not sure how many tree objects you have, but I'd personally go with option one. Unless you're dealing with thousands+ of trees, the pointers really won't amount to much more than a few strings. If memory really is a super-important issue, try both methods (they seem fairly simple to implement) and run it through a profiler. Or use the excellent Process Explorer. Edit: One of the apps I'm working on has a node tree containing about 55K nodes. We build the tree structure but also maintain an array for O(1) lookups. Much better than the O(m*n) we were getting when using a recursive FindNodeByID method. A: Passing the root as a parameter is generally best. If you're using some kind of iterator to navigate the tree, an alternative is to store a reference to root in that. A: Point #1 is a premature memory optimization. #2 is a premature performance optimization. Have you profiled your app to determine if memory or CPU bottlenecks are causing problems for you? If not, why sacrifice a more maintainable design for an "optimization" that doesn't help your users? I would strongly recommend you go with #2. Whenever you store something you could instead calculate, what you are doing is caching. There's a few times when caching is a good idea, but it's also a maintenance headache. (For example, what if you move a node from one tree to another by changing its parent but forget to also update the root field?) Don't cache if you don't have to. A: You could derive a class from TreeView and then add a singleton static property. That way you are effectively adding a global field that references the single instance of the class but have the benefit of it being namespace scoped to that class. A: Ignoring the distaste for inner classes, I could define a Tree class and define the nodes as Inner classes. Each of the nodes would have access to its tree's state including its root. This might end up being the same as #1 depending on how Java relates the nodes to their parents. 
(I'm not sure and I'll have to profile it)
Doing away with Globals?
I have a set of tree objects with a depth somewhere in the 20s. Each of the nodes in this tree needs access to its tree's root. A couple of solutions: Each node can store a reference to the root directly (wastes memory) I can compute the root at runtime by "going up" (wastes cycles) I can use static fields (but this amounts to globals) Can someone provide a design that doesn't use a global (in any variation) but is more efficient than #1 or #2 in both memory or cycles respectively? Edit: Since I have a Set of Trees, I can't simply store it in a static since it'd be hard to differentiate between trees. (thanks maccullt)
[ "Pass the root as a parameter to whichever functions in the node that need it.\nEdit: The options are really the following:\n\nStore the root reference in the node\nDon't store the root reference at all\nStore the root reference in a global\nStore the root reference on the stack (my suggestion, either visitor pattern or recursive)\n\nI think this is all the possibilities, there is no option 5.\n", "Why would you need to do away with globals? I understand the stigma of globals being bad and all, but sometimes just having a global data structure with all elements is the fastest solution.\nYou make a trade-off: code clarity and fewer future problems for performance. That's the meaning behind 'Don't optimize yet'. Since you're in the optimize stage, sometimes it's necessary to cut out some readability and good programming practices in favor of performance. I mean, bitwise hacks aren't readable but they're fast.\nI'm not sure how many tree objects you have, but I'd personally go with option one. Unless you're dealing with thousands+ of trees, the pointers really won't amount to much more than a few strings. If memory really is a super-important issue, try both methods (they seem fairly simple to implement) and run it through a profiler. Or use the excellent Process Explorer.\nEdit: One of the apps I'm working on has a node tree containing about 55K nodes. We build the tree structure but also maintain an array for O(1) lookups. Much better than the O(m*n) we were getting when using a recursive FindNodeByID method.\n", "Passing the root as a parameter is generally best. If you're using some kind of iterator to navigate the tree, an alternative is to store a reference to root in that.\n", "Point #1 is a premature memory optimization. #2 is a premature performance optimization. Have you profiled your app to determine if memory or CPU bottlenecks are causing problems for you? If not, why sacrifice a more maintainable design for an \"optimization\" that doesn't help your users?\nI would strongly recommend you go with #2. Whenever you store something you could instead calculate, what you are doing is caching. There's a few times when caching is a good idea, but it's also a maintenance headache. (For example, what if you move a node from one tree to another by changing its parent but forget to also update the root field?) Don't cache if you don't have to.\n", "You could derive a class from TreeView and then add a singleton static property. That way you are effectively adding a global field that references the single instance of the class but have the benefit of it being namespace scoped to that class.\n", "Ignoring the distaste for inner classes, I could define a Tree class and define the nodes as Inner classes. Each of the nodes would have access to its tree's state including its root.\nThis might end up being the same as #1 depending on how Java relates the nodes to their parents. (I'm not sure and I'll have to profile it)\n" ]
[ 6, 3, 2, 1, 0, 0 ]
[]
[]
[ "c#", "global_variables", "java", "oop" ]
stackoverflow_0000068156_c#_global_variables_java_oop.txt
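The "keep the root on the call stack" suggestion from the accepted answer can be sketched in a few lines of C#; the names below are illustrative:

```csharp
using System;
using System.Collections.Generic;

public class Node
{
    private readonly List<Node> children = new List<Node>();
    public IList<Node> Children { get { return children; } }

    // The root travels as a parameter, so nodes need no root field and
    // no global: it lives on the stack for the duration of the traversal.
    public void Visit(Node root, Action<Node, Node> action)
    {
        action(root, this);
        foreach (Node child in children)
            child.Visit(root, action);
    }
}

public class Tree
{
    private readonly Node root = new Node();
    public Node Root { get { return root; } }

    public void ForEachNode(Action<Node, Node> action)
    {
        root.Visit(root, action);
    }
}
```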
Q: Installation of demo project - best practices Using Windows Installer (targeting XP and Vista), is there a best practice for installing demo projects and files with your application? A: From experience installing on Vista/XP I would recommend... 1, Install the source code/project/solution files into the 'Users' directory for Vista. That way when the user opens up the demo and compiles they have write access for generating the output files. If you put the files into the 'Program Files' directory under Vista you do not have write access and so the compile will just fail. 2, Add a shortcut to the solution to either the desktop or the start menu so that the user can then get access to it without having to know the exact location. Under Vista/XP when you install into the 'Users'/'Documents and Settings' directory it is not easy to find the installed files because they are placed inside a directory that is not shown unless you select 'Show Hidden Files' in file explorer. 3, I would recommend you sign the installer using your publisher certificate so that when the user gets a UAC dialog on Vista they can see the name of the publisher and be more likely to continue with the process. 4, At the moment the split between Visual Studio 2005/2008 is about 50%/50% and so make sure you provide both versions of the project/solutions files. Alternatively just supply the VS2005 files and let the user upgrade using the wizard in VS2008.
Installation of demo project - best practices
Using Windows Installer (targeting XP and Vista), is there a best practice for installing demo projects and files with your application?
[ "From experience installing on Vista/XP I would recommend...\n1, Install the source code/project/solution files into the 'Users' directory for Vista. That way when the user opens up the demo and compiles they have write access for generating the output files. If you put the files into the 'Program Files' directory under Vista you do not have write access and so the compile will just fail.\n2, Add a shortcut to the solution to either the desktop or the start menu so that the user can then get access to it without having to know the exact location. Under Vista/XP when you install into the 'Users'/'Documents and Settings' directory it is not easy to find the installed files because they are placed inside a directory that is not shown unless you select 'Show Hidden Files' in file explorer.\n3, I would recommend you sign the installer using your publisher certificate so that when the user gets a UAC dialog on Vista they can see the name of the publiser and be more likely to continue with the process.\n4, At the moment the split between Visual Studio 2005/2008 is about 50%/50% and so make sure you provide both versions of the project/solutions files. Alternatively just supply the VS2005 files and let the user upgrade using the wizard in VS2008.\n" ]
[ 1 ]
[]
[]
[ "installation", "windows_installer" ]
stackoverflow_0000066594_installation_windows_installer.txt
Q: XWindow ignores multiple ClientMessages sent during the same second I've encountered an interesting problem while developing for our legacy XWindows application. For reasons that don't bear explaining, I am sending ClientMessage from a command-line utility to a GUI app. Most of the messages end up having the same contents, as the message's purpose is to trigger a synchronous communication process over some side pipes. I've noticed that some of the time I would send two messages, but only one gets delivered. I've traced this to the fact that both messages had the same contents and were sent in the same second (IOW, the log timestamp on the sending was the same number). As soon as I added some dummy contents to the messages to make them all different, the problem went away. This happened over two different X servers: vncserver and Exceed. Am I hitting some XWindows feature that I am not aware of - some kind of message throttling/compression? Has anyone encountered this kind of thing? A: The X server should never compress client messages that I'm aware of, but perhaps some X toolkits (Motif, Xaw, etc.) do compress them. That's the first thing I would look for - perhaps the GUI app receiving the message is compressing somewhere inside the toolkit, before the application code sees it. Then again, both vncserver and exceed probably focus more on remote usage than other X servers, and they could contain some ill-advised compression hacks, conceivably. I have read a lot of X specs and written a lot of X code and never heard of this behavior though. A random unlikely thought, be sure you have an XFlush() or XSync() at the end of your command line app before it exits, to be sure you write those messages out to the socket before closing down. But I don't know why message content would matter if this is the problem.
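As a sanity check on the sending side, here is a minimal sketch of how the command-line utility might send its ClientMessage and flush explicitly before exiting, assuming plain Xlib; the window, atom, and payload are placeholders for whatever your utility actually uses:

#include <X11/Xlib.h>

/* Sketch only: send one ClientMessage to a target window and make sure
   it actually leaves the output buffer before the process exits. */
void send_client_message(Display *dpy, Window target, Atom msg_type, long payload)
{
    XEvent ev = { 0 };                   /* zero the unused data items */

    ev.xclient.type = ClientMessage;
    ev.xclient.window = target;
    ev.xclient.message_type = msg_type;
    ev.xclient.format = 32;              /* data items are 32-bit values */
    ev.xclient.data.l[0] = payload;

    XSendEvent(dpy, target, False, NoEventMask, &ev);
    XFlush(dpy);                         /* flush before exit, per the answer above */
}

If the second message still disappears even with an explicit flush after each XSendEvent, that would point at compression happening in the receiving toolkit rather than at lost writes.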
XWindow ignores multiple ClientMessages sent during the same second
I've encountered an interesting problem while developing for our legacy XWindows application. For reasons that don't bear explaining, I am sending ClientMessage from a command-line utility to a GUI app. Most of the messages end up having the same contents, as the message's purpose is to trigger a synchronous communication process over some side pipes. I've noticed that some of the time I would send two messages, but only one gets delivered. I've traced this to the fact that both messages had the same contents and were sent in the same second (IOW, the log timestamp on the sending was the same number). As soon as I added some dummy contents to the messages to make them all different, the problem went away. This happened over two different X servers: vncserver and Exceed. Am I hitting some XWindows feature that I am not aware of - some kind of message throttling/compression? Has anyone encountered this kind of thing?
[ "The X server should never compress client messages that I'm aware of, but perhaps some X toolkits (Motif, Xaw, etc.) do compress them. That's the first thing I would look for - perhaps the GUI app receiving the message is compressing somewhere inside the toolkit, before the application code sees it.\nThen again, both vncserver and exceed probably focus more on remote usage than other X servers, and they could contain some ill-advised compression hacks, conceivably. I have read a lot of X specs and written a lot of X code and never heard of this behavior though.\nA random unlikely thought, be sure you have an XFlush() or XSync() at the end of your command line app before it exits, to be sure you write those messages out to the socket before closing down. But I don't know why message content would matter if this is the problem.\n" ]
[ 0 ]
[]
[]
[ "x11" ]
stackoverflow_0000066111_x11.txt
Q: Plone-like search box in Django? Plone has a beautiful search box with a "Google suggest" like functionality for its site. It even indexes uploaded documents like PDFs. Does anyone know of a module that can provide this kind of functionality in a Django site? A: Plone implements its LiveSearch feature by maintaining a separate metadata table of indexed attributes (fields such as last modified, creator, title are copied from the content objects into this table). Content objects then send ObjectAdded/ObjectModified/ObjectRemoved events, and an event subscriber listens for these events and is responsible for updating the metadata table (in Django events are named signals). Then there is a Browser View exposed at a fixed URL that searches the metadata and returns the appropriate LiveSearch HTML, and finally each HTML page is sent the appropriate JavaScript to handle the autocomplete AJAX functionality to query this view and slot the resulting HTML results into the DOM. If you want your LiveSearch to query multiple Models/Content Types, you are likely going to need to send your own events and have a subscriber handle them appropriately. This isn't necessary for smaller data sets or lower traffic sites, where the performance penalty for doing multiple queries for a single search isn't a concern (or you only want to search a single content type) and you can just do several queries from your View. As for the JavaScript side, you can roll-your-own or use an existing JavaScript library. This is usually called autocomplete in the JS library. There is YUI autocomplete and Scriptaculous autocomplete for starters, and likely lots more JavaScript autocomplete implementations out there. Plone uses KSS for its JavaScript library; the KSS livesearch plugin is a good place to start if looking for example code to pluck from. http://pypi.python.org/pypi/kss.plugin.livesearch And a tutorial on using KSS with Django: http://kssproject.org/docs/tutorial/kss-in-django-with-kss-django-application KSS is quite nice since it cleanly separates behaviour from content on the client side (without needing to write JavaScript), but Scriptaculous is conceptually a little simpler and has somewhat better documentation (http://github.com/madrobby/scriptaculous/wikis/ajax-autocompleter).
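To sketch the signal-driven indexing described above in Django terms: the handlers below copy a searchable field into a denormalized metadata model whenever an entry is saved or deleted. Both models (Entry, SearchMetadata) and their fields are made up for the example; only the signal wiring itself is standard Django:

from django.db import models
from django.db.models.signals import post_save, post_delete

class Entry(models.Model):
    # Hypothetical content model standing in for whatever you want searched.
    title = models.CharField(max_length=200)

class SearchMetadata(models.Model):
    # Hypothetical denormalized table holding just the indexed attributes.
    entry_id = models.IntegerField(unique=True)
    title = models.CharField(max_length=200)

def index_entry(sender, instance, **kwargs):
    # Copy the searchable attributes into the metadata table.
    meta, created = SearchMetadata.objects.get_or_create(
        entry_id=instance.pk, defaults={'title': instance.title})
    if not created:
        meta.title = instance.title
        meta.save()

def unindex_entry(sender, instance, **kwargs):
    SearchMetadata.objects.filter(entry_id=instance.pk).delete()

# Keep the metadata in sync whenever an Entry is saved or removed.
post_save.connect(index_entry, sender=Entry)
post_delete.connect(unindex_entry, sender=Entry)

A view queried by the autocomplete JavaScript would then filter SearchMetadata and return an HTML fragment, much like Plone's fixed-URL Browser View.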
Plone-like search box in Django?
Plone has a beautiful search box with a "Google suggest" like functionality for its site. It even indexes uploaded documents like PDFs. Does anyone know of a module that can provide this kind of functionality in a Django site?
[ "Plone implements it's LiveSearch feature by maintaining a separate metadata table of indexed attributes (fields such as last modified, creator, title are copied from the content objects into this table). Content objects then send ObjectAdded/ObjectModified/ObjectRemoved events, and an event subscriber listens for these events and is responsible for updating the metadata table (in Django events are named signals). Then there is a Browser View exposed at a fixed URL that searches the metadata and returns the appropriate LiveSearch HTML, and finally each HTML page is sent the appropriate JavaScript to handle the autocomplete AJAX functionality to query this view and slot the resulting HTML results into the DOM.\nIf you want your LiveSearch to query multiple Models/Content Types, you are likely going to need to send your own events and have a subscriber handle them appropriately. This isn't necessary for a smaller data sets or lower traffic sites, where the performance penalty for doing multiple queries for a single search isn't a concern (or you only want to search a single content type) and you can just do several queries from your View.\nAs for the JavaScript side, you can roll-your-own or use an existing JavaScript library. This is usually called autocomplete in the JS library. There is YUI autocomplete and Scriptaculous autocomplete for starters, and likely lots more JavaScript autocomplete implementations out there. Plone uses KSS for it's JavaScript library, the KSS livesearch plugin is a good place to start if looking for example code to pluck from.\nhttp://pypi.python.org/pypi/kss.plugin.livesearch\nAnd a tutorial on using KSS with Django:\nhttp://kssproject.org/docs/tutorial/kss-in-django-with-kss-django-application\nKSS is quite nice since it cleanly separates behaviour from content on the client side (without needing to write JavaScript), but Scriptaculous is conceptually a little simpler and has somewhat better documentation (http://github.com/madrobby/scriptaculous/wikis/ajax-autocompleter).\n" ]
[ 2 ]
[]
[]
[ "django", "search" ]
stackoverflow_0000068136_django_search.txt
Q: How can you require a constructor with no parameters for types implementing an interface? Is there a way? I need all types that implement a specific interface to have a parameterless constructor, can it be done? I am developing the base code for other developers in my company to use in a specific project. There's a process which will create instances of types (in different threads) that perform certain tasks, and I need those types to follow a specific contract (ergo, the interface). The interface will be internal to the assembly If you have a suggestion for this scenario without interfaces, I'll gladly take it into consideration... A: Not to be too blunt, but you've misunderstood the purpose of interfaces. An interface means that several people can implement it in their own classes, and then pass instances of those classes to other classes to be used. Creation creates an unnecessary strong coupling. It sounds like you really need some kind of registration system, either to have people register instances of usable classes that implement the interface, or of factories that can create said items upon request. A: You can use type parameter constraint interface ITest<T> where T: new() { //... } class Test: ITest<Test> { //... } A: Juan, Unfortunately there is no way to get around this in a strongly typed language. You won't be able to ensure at compile time that the classes will be able to be instantiated by your Activator-based code. (ed: removed an erroneous alternative solution) The reason is that, unfortunately, it's not possible to use interfaces, abstract classes, or virtual methods in combination with either constructors or static methods. The short reason is that the former contain no explicit type information, and the latter require explicit type information. Constructors and static methods must have explicit (right there in the code) type information available at the time of the call. This is required because there is no instance of the class involved which can be queried by the runtime to obtain the underlying type, which the runtime needs to determine which actual concrete method to call. The entire point of an interface, abstract class, or virtual method is to be able to make a function call without explicit type information, and this is enabled by the fact that there is an instance being referenced, which has "hidden" type information not directly available to the calling code. So these two mechanisms are quite simply mutually exclusive. They can't be used together because when you mix them, you end up with no concrete type information at all anywhere, which means the runtime has no idea where to find the function you're asking it to call. A: Juan Manuel said: that's one of the reasons I don't understand why it cannot be a part of the contract in the interface It's an indirect mechanism. The generic allows you to "cheat" and send type information along with the interface. The critical thing to remember here is that the constraint isn't on the interface that you are working with directly. It's not a constraint on the interface itself, but on some other type that will "ride along" on the interface. This is the best explanation I can offer, I'm afraid. By way of illustration of this fact, I'll point out a hole that I have noticed in aku's code. 
It's possible to write a class that would compile fine but fail at runtime when you try to instantiate it: public class Something : ITest<String> { private Something() { } } Something derives from ITest<T>, but implements no parameterless constructor. It will compile fine, because String does implement a parameterless constructor. Again, the constraint is on T, and therefore String, rather than ITest or Something. Since the constraint on T is satisfied, this will compile. But it will fail at runtime. To prevent some instances of this problem, you need to add another constraint to T, as below: public interface ITest<T> where T : ITest<T>, new() { } Note the new constraint: T : ITest<T>. This constraint specifies that what you pass into the argument parameter of ITest<T> must also derive from ITest<T>. Even so this will not prevent all cases of the hole. The code below will compile fine, because A has a parameterless constructor. But since B's parameterless constructor is private, instantiating B with your process will fail at runtime. public class A : ITest<A> { } public class B : ITest<A> { private B() { } } A: So you need a thing that can create instances of an unknown type that implements an interface. You've got basically three options: a factory object, a Type object, or a delegate. Here's the givens: public interface IInterface { void DoSomething(); } public class Foo : IInterface { public void DoSomething() { /* whatever */ } } Using Type is pretty ugly, but makes sense in some scenarios: public IInterface CreateUsingType(Type thingThatCreates) { ConstructorInfo constructor = thingThatCreates.GetConstructor(Type.EmptyTypes); return (IInterface)constructor.Invoke(new object[0]); } public void Test() { IInterface thing = CreateUsingType(typeof(Foo)); } The biggest problem with it, is that at compile time, you have no guarantee that Foo actually has a default constructor. Also, reflection is a bit slow if this happens to be performance critical code. The most common solution is to use a factory: public interface IFactory { IInterface Create(); } public class Factory<T> where T : IInterface, new() { public IInterface Create() { return new T(); } } public IInterface CreateUsingFactory(IFactory factory) { return factory.Create(); } public void Test() { IInterface thing = CreateUsingFactory(new Factory<Foo>()); } In the above, IFactory is what really matters. Factory is just a convenience class for classes that do provide a default constructor. This is the simplest and often best solution. The third currently-uncommon-but-likely-to-become-more-common solution is using a delegate: public IInterface CreateUsingDelegate(Func<IInterface> createCallback) { return createCallback(); } public void Test() { IInterface thing = CreateUsingDelegate(() => new Foo()); } The advantage here is that the code is short and simple, can work with any method of construction, and (with closures) lets you easily pass along additional data needed to construct the objects. A: Call a RegisterType method with the type, and constrain it using generics. Then, instead of walking assemblies to find ITest implementors, just store them and create from there. void RegisterType<T>() where T:ITest, new() { } A: I don't think so. You also can't use an abstract class for this. 
A: I would like to remind everyone that: Writing attributes in .NET is easy Writing static analysis tools in .NET that ensure conformance with company standards is easy Writing a tool to grab all concrete classes that implement a certain interface/have an attribute and verifying that it has a parameterless constructor takes about 5 mins of coding effort. You add it to your post-build step and now you have a framework for whatever other static analyses you need to perform. The language, the compiler, the IDE, your brain - they're all tools. Use them! A: No you can't do that. Maybe for your situation a factory interface would be helpful? Something like: interface FooFactory { Foo createInstance(); } For every implementation of Foo you create an instance of FooFactory that knows how to create it. A: You do not need a parameterless constructor for the Activator to instantiate your class. You can have a parameterized constructor and pass all the parameters from the Activator. Check out MSDN on this.
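To make the RegisterType suggestion above concrete, here is a minimal sketch of such a registry; ITest here is just a stand-in for the internal contract from the question, and the new() constraint is what enforces the parameterless constructor at compile time:

using System;
using System.Collections.Generic;

public interface ITest
{
    void Run();  // stand-in for whatever the real contract requires
}

public static class TestTypeRegistry
{
    // Each registered type contributes a factory delegate that the
    // worker threads can call to get fresh instances.
    // (Locking is omitted for brevity; add it if registration and
    // creation can race.)
    private static readonly List<Func<ITest>> factories = new List<Func<ITest>>();

    public static void RegisterType<T>() where T : ITest, new()
    {
        factories.Add(() => new T());
    }

    public static IEnumerable<ITest> CreateInstances()
    {
        foreach (Func<ITest> factory in factories)
            yield return factory();
    }
}

Because the constraint sits on RegisterType<T> rather than on the interface, a type without an accessible parameterless constructor simply fails to compile at the registration call site, which is the earliest the language lets you catch it.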
How can you require a constructor with no parameters for types implementing an interface?
Is there a way? I need all types that implement a specific interface to have a parameterless constructor, can it be done? I am developing the base code for other developers in my company to use in a specific project. There's a process which will create instances of types (in different threads) that perform certain tasks, and I need those types to follow a specific contract (ergo, the interface). The interface will be internal to the assembly If you have a suggestion for this scenario without interfaces, I'll gladly take it into consideration...
[ "Not to be too blunt, but you've misunderstood the purpose of interfaces.\nAn interface means that several people can implement it in their own classes, and then pass instances of those classes to other classes to be used. Creation creates an unnecessary strong coupling.\nIt sounds like you really need some kind of registration system, either to have people register instances of usable classes that implement the interface, or of factories that can create said items upon request.\n", "You can use type parameter constraint\ninterface ITest<T> where T: new()\n{\n //...\n}\n\nclass Test: ITest<Test>\n{\n //...\n}\n\n", "Juan,\nUnfortunately there is no way to get around this in a strongly typed language. You won't be able to ensure at compile time that the classes will be able to be instantiated by your Activator-based code.\n(ed: removed an erroneous alternative solution)\nThe reason is that, unfortunately, it's not possible to use interfaces, abstract classes, or virtual methods in combination with either constructors or static methods. The short reason is that the former contain no explicit type information, and the latter require explicit type information.\nConstructors and static methods must have explicit (right there in the code) type information available at the time of the call. This is required because there is no instance of the class involved which can be queried by the runtime to obtain the underlying type, which the runtime needs to determine which actual concrete method to call.\nThe entire point of an interface, abstract class, or virtual method is to be able to make a function call without explicit type information, and this is enabled by the fact that there is an instance being referenced, which has \"hidden\" type information not directly available to the calling code. So these two mechanisms are quite simply mutually exclusive. They can't be used together because when you mix them, you end up with no concrete type information at all anywhere, which means the runtime has no idea where to find the function you're asking it to call.\n", "Juan Manuel said:\n\nthat's one of the reasons I don't understand why it cannot be a part of the contract in the interface\n\nIt's an indirect mechanism. The generic allows you to \"cheat\" and send type information along with the interface. The critical thing to remember here is that the constraint isn't on the interface that you are working with directly. It's not a constraint on the interface itself, but on some other type that will \"ride along\" on the interface. This is the best explanation I can offer, I'm afraid.\nBy way of illustration of this fact, I'll point out a hole that I have noticed in aku's code. It's possible to write a class that would compile fine but fail at runtime when you try to instantiate it:\npublic class Something : ITest<String>\n{\n private Something() { }\n}\n\nSomething derives from ITest<T>, but implements no parameterless constructor. It will compile fine, because String does implement a parameterless constructor. Again, the constraint is on T, and therefore String, rather than ITest or Something. Since the constraint on T is satisfied, this will compile. But it will fail at runtime.\nTo prevent some instances of this problem, you need to add another constraint to T, as below:\npublic interface ITest<T>\n where T : ITest<T>, new()\n{\n}\n\nNote the new constraint: T : ITest<T>. 
This constraint specifies that what you pass into the argument parameter of ITest<T> must also derive from ITest<T>.\nEven so this will not prevent all cases of the hole. The code below will compile fine, because A has a parameterless constructor. But since B's parameterless constructor is private, instantiating B with your process will fail at runtime.\npublic class A : ITest<A>\n{\n}\n\npublic class B : ITest<A>\n{\n private B() { }\n}\n\n", "So you need a thing that can create instances of an unknown type that implements an interface. You've got basically three options: a factory object, a Type object, or a delegate. Here's the givens:\npublic interface IInterface\n{\n void DoSomething();\n}\n\npublic class Foo : IInterface\n{\n public void DoSomething() { /* whatever */ }\n}\n\nUsing Type is pretty ugly, but makes sense in some scenarios:\npublic IInterface CreateUsingType(Type thingThatCreates)\n{\n ConstructorInfo constructor = thingThatCreates.GetConstructor(Type.EmptyTypes);\n return (IInterface)constructor.Invoke(new object[0]);\n}\n\npublic void Test()\n{\n IInterface thing = CreateUsingType(typeof(Foo));\n}\n\nThe biggest problem with it, is that at compile time, you have no guarantee that Foo actually has a default constructor. Also, reflection is a bit slow if this happens to be performance critical code.\nThe most common solution is to use a factory:\npublic interface IFactory\n{\n IInterface Create();\n}\n\npublic class Factory<T> where T : IInterface, new()\n{\n public IInterface Create() { return new T(); }\n}\n\npublic IInterface CreateUsingFactory(IFactory factory)\n{\n return factory.Create();\n}\n\npublic void Test()\n{\n IInterface thing = CreateUsingFactory(new Factory<Foo>());\n}\n\nIn the above, IFactory is what really matters. Factory is just a convenience class for classes that do provide a default constructor. This is the simplest and often best solution.\nThe third currently-uncommon-but-likely-to-become-more-common solution is using a delegate:\npublic IInterface CreateUsingDelegate(Func<IInterface> createCallback)\n{\n return createCallback();\n}\n\npublic void Test()\n{\n IInterface thing = CreateUsingDelegate(() => new Foo());\n}\n\nThe advantage here is that the code is short and simple, can work with any method of construction, and (with closures) lets you easily pass along additional data needed to construct the objects.\n", "Call a RegisterType method with the type, and constrain it using generics. Then, instead of walking assemblies to find ITest implementors, just store them and create from there.\nvoid RegisterType<T>() where T:ITest, new() {\n}\n\n", "I don't think so. \nYou also can't use an abstract class for this.\n", "I would like to remind everyone that:\n\nWriting attributes in .NET is easy\nWriting static analysis tools in .NET that ensure conformance with company standards is easy\n\nWriting a tool to grab all concrete classes that implement a certain interface/have an attribute and verifying that it has a parameterless constructor takes about 5 mins of coding effort. You add it to your post-build step and now you have a framework for whatever other static analyses you need to perform.\nThe language, the compiler, the IDE, your brain - they're all tools. Use them!\n", "No you can't do that. Maybe for your situation a factory interface would be helpful? 
Something like:\ninterface FooFactory {\n Foo createInstance();\n}\n\nFor every implementation of Foo you create an instance of FooFactory that knows how to create it.\n", "You do not need a parameterless constructor for the Activator to instantiate your class. You can have a parameterized constructor and pass all the parameters from the Activator. Check out MSDN on this.\n" ]
[ 22, 6, 5, 5, 4, 1, 0, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "constructor", "interface", "oop" ]
stackoverflow_0000026903_.net_c#_constructor_interface_oop.txt
Q: How do I rotate an image at 12 midnight every day? I need to rotate an image at 12 midnight every day from a group of 5-10 images. How can I go about doing this with JavaScript or jQuery or even PHP? A: At a basic level what you want to do is define an array of image names then take the number of days from a given point in time then modulo (remainder after division) by the number of images and access that index in the array and set the image, e.g. (untested code) var images = new Array("image1.gif", "image2.jpg", "sky.jpg", "city.png"); var dateDiff = new Date() - new Date(2008,01,01); var imageIndex = Math.round(dateDiff/1000/60/60/24) % images.length; document.getElementById('imageId').setAttribute('src', images[imageIndex]); Bear in mind that any client-side solution will be using the date and time of the client so if your definition of midnight means in your timezone then you'll need to do something similar on your server in PHP. A: Maybe I don't understand the question. If you just want to change the image write a batch file/cron job and have it run every day. If you want to display a certain image on Monday, and a different one on Tuesday then do something like this: <?php switch(date('w')) { case '1': //Monday break; case '2': //Tuesday: break; ... } ?> A: I'd do it on first access after midnight. A: It doesn't even have to be in cron: <?php // starting date for rotation $startDate = '2008-09-15'; // array of image filenames $images = array('file1.jpg','file2.jpg',...); $stamp = strtotime($startDate); $days = (time() - $stamp) / (60*60*24); $imageFilename = $images[$days % count($images)]; ?> <img src="<?php echo $imageFilename; ?>"/> A: Edit: I totally misread this question as "without using javascript/PHP". So disregard this response. I'm not deleting it, just in case anyone was crazy enough to want to use this method. Doing it without Javascript, PHP, or any other form of scripting language could be difficult. Well actually, it would just be very contrived, since it would be trivial with even the most basic JS/PHP. Anyway, to actually answer your question, the only way I can think of doing it with vanilla HTML is to set up a shell script to run at midnight. That script would just rename your files. Do this with cron (on linux) or Windows Task Scheduler with a script kinda like this: (dodgy pseudo code follows, convert to whatever you're comfortable with). let number_of_files = 5 rename current.jpg to number_of_files.jpg for (x = 2 to number_of_files) rename x.jpg to (x-1).jpg rename 1.jpg to current.jpg In your HTML, just do this: <img src="path/to/current.jpg" /> And every day, current.jpg should change to something new. If you're using any sort of cache-control, make sure to change it so that it doesn't get cached for longer than a few hours. A: If you are running a linux system you can set a Cron Job or you can use the windows task scheduler if you are on windows A: You have two options. In JavaScript, you basically have it choose an image based on the day of the week or day of the month if you have more than 7 images. Day of the month modulo by the length of the image array should let you pick the right array element. You need something a bit more stateful to track what's going on... using SQL you can track when images are used and pick from the rotating list. You could also use a text file maintained by PHP to track the ordered list. The old school way is to have a cron job rotate the image, but I'd avoid that these days. 
A: That depends on how you are rotating them - sequentially, randomly, or what? There are a number of options. You can determine which image you want in PHP, and dynamically change your <img> element to point to the correct location. This is best if you are already generating your pages dynamically. You can always point to a single URL, which is a PHP file that determines which image to show and then performs a 302 redirect to it. This is better if you have static pages that you don't want to incur the overhead of dynamic generation for. Don't make the image URL itself serve different images. You'll screw up your cache hit ratio for no good reason and incur unnecessary overhead on what should be a static resource. Martin, "rotate" in this context means "change on a regular basis", not "turn around an axis". A: It depends (TM) on what exactly you want to achieve. Just want to display a "random" image? Maybe this javascript snippet will get you started: var date = new Date(); var day = date.getDate(); // that's the day of month; use getDay() for day of week document.getElementById('someImage').src = '/images/foo/bar_' + (day % 10) + '.gif'; A: I assume by "rotate image" you mean "change the image in use" and not "rotational transformation about an axis" -- a simple way is to have a hash table that maps day modulo X to an image name. $imgs = array("kitten.jpg", "puppy.gif","Bob_Dole.png"); $day_index = 365 * date("Y") + date("Z"); ... <img src="<? echo $imgs[$day_index % count($imgs)]; ?>" /> (sorry if I got the syntax wrong, I don't really know PHP) A: Set up a directory of images. Select seven images and name them 0.jpg, 1.jpg, 2.jpg, 3.jpg, 4.jpg, 5.jpg, 6.jpg, Using mootools Javascript framework with an image tag in HTML with id "rotatingimage": var d=new Date(); var utc = d.getTime() + (d.getTimezoneOffset() * 60000); var offset = -10; // set this to your locale's UTC offset var desiredTime = utc + (3600000*offset); var dd = new Date(desiredTime); $('rotatingimage').setProperty('src', dd.getDay() + '.jpg'); A: Here's a solution which will choose a random image per day from a directory, which might be easier than some other solutions posted here, since all you have to do to get something into the rotation is upload it, rather than edit an array, or name the files in arbitrary ways. function getImageOfTheDay() { $myDir = "path/to/images/"; // get a unique value for the day/year. // 15th Jan 2008 -> 10152008. 3 Feb -> 10342008, 31 Dec -> 13662008 $day = sprintf("1%03d%d", date('z'), date('Y')); // you could of course get gifs/pngs as well. // I'm just doing this for simplicity $jpgs = glob($myDir . "*.jpg"); mt_srand($day); return $jpgs[mt_rand(0, count($jpgs) - 1)]; } The only thing is that there's a possibility of seeing the same image two days in a row, since this picks it randomly. If you wanted to get them sequentially, in alphabetical order, perhaps, then use this. (only the last two lines are changed) function getImageOfTheDay() { $myDir = "path/to/images/"; $day = sprintf("1%03d%d", date('z'), date('Y')); $jpgs = glob($myDir . "*.jpg"); return $jpgs[$day % count($jpgs)]; } A: How to rotate the images has been described in a number of ways. How to detect when to trigger the rotate depends on what you are doing (please clarify and people can provide a better answer). Options include: Trigger the rotate on 'first access after midnight'. 
Assumes some sort of initiating event (eg user access) Use the OS scheduling capabilities to trigger the rotate (cron on *nix, at/task scheduler on Windows) Write code to check time & rotate Option 3 has the risk that a poorly coded solution could be overly resource intensive.
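As a hedged PHP sketch of the 302-redirect idea mentioned in one of the answers above: the page always references a single script, and the script picks the day's image. The file names and image path are illustrative, and note that time() counts from the Unix epoch in UTC, so the image changes at midnight UTC; add an offset if you need local midnight.

<?php
// rotate.php - the page always links <img src="rotate.php" />,
// and this script answers with a redirect to the image of the day.
$images = array('image1.jpg', 'image2.jpg', 'image3.jpg');

// Whole days since the Unix epoch; increments at midnight UTC.
$dayIndex = (int) floor(time() / 86400);

// Location headers are sent as a 302 redirect by default in PHP.
header('Location: /images/' . $images[$dayIndex % count($images)]);
exit;
?>

This keeps the image files themselves cacheable at stable URLs, which is the point the 302 answer was making.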
How do I rotate an image at 12 midnight every day?
I need to rotate an image at 12 midnight every day from a group of 5-10 images. How can I go about doing this with JavaScript or jQuery or even PHP?
[ "At a basic level what you want to do is define an array of image names then take the number of days from a given point in time then modulo (remainder after division) by the number of images and access that index in the array and set the image, e.g. (untested code)\nvar images = new Array(\"image1.gif\", \"image2.jpg\", \"sky.jpg\", \"city.png\");\nvar dateDiff = new Date() - new Date(2008,01,01);\nvar imageIndex = Math.Round(dateDiff/1000/60/60/24) % images.length;\ndocument.GetElementById('imageId').setAttribute('src', images[imageIndex]);\n\nBear in mind that any client-side solution will be using the date and time of the client so if your definition of midnight means in your timezone then you'll need to do something similar on your server in PHP.\n", "Maybe I don't understand the question.\nIf you just want to change the image write a batch file/cron job and have it run every day.\nIf you want to display a certain image on Monday, and a different one of Tuesday then do something like this:\n\n<?php\nswitch(date('w'))\n {\n case '1':\n //Monday\n break;\n case '2':\n //Tuesday:\n break;\n...\n}\n?>\n\n", "I'd do it on first access after midnight.\n", "It doesn't even have to be in cron:\n<?php\n// starting date for rotation\n$startDate = '2008-09-15';\n// array of image filenames\n$images = array('file1.jpg','file2.jpg',...);\n\n$stamp = strtotime($startDate);\n$days = (time() - $stamp) / (60*60*24);\n$imageFilename = $images[$days % count($images)]\n?>\n\n<img src=\"<?php echo $imageFilename; ?>\"/>\n\n", "Edit: I totally misread this question as \"without using javascript/PHP\". So disregard this response. I'm not deleting it, just in case anyone was crazy enough to want to use this method.\nDoing it without Javascript, PHP, or any other form of scripting language could be difficult. Well actually, it would just be very contrived, since it would be trivial with even the most basic JS/PHP.\nAnyway, to actually answer your question, the only way I can think of doing it with vanilla HTML is to set up a shell script to run at midnight. That script would just rename your files. Do this with cron (on linux) or Windows Task Scheduler with a script kinda like this: (dodgy pseudo code follows, convert to whatever you're comfortable with).\nlet number_of_files = 5\n\nrename current.jpg to number_of_files.jpg\n\nfor (x = 2 to number_of_files)\n rename x.jpg to (x-1).jpg\n\nrename 1.jpg to current.jpg\n\nIn your HTML, just do this:\n<img src=\"path/to/current.jpg\" />\n\nAnd every day, current.jpg should change to something new. If you're using any sort of cache-control, make sure to change it so that it doesn't get cached for longer than a few hours.\n", "If you are running a linux system you can set a Cron Job or you can use the windows task scheduler if you are on windows\n", "You have two options.\n\nIn JavaScript, you basically have it choose an image based on the day of the week or day of the month if you have more than 7 images. Day of the month modulo by the length of the image array should let you pick the right array element.\nYou need something a bit more stateful to track what's going on... using SQL you can track when images are used and pick from the rotating list. You could also use a text file maintained by PHP to track the ordered list.\n\nThe old school way is to have a cron job rotate the image, but I'd avoid that these days.\n", "That depends on how you are rotating them - sequentially, randomly, or what?\nThere are a number of options. 
You can determine which image you want in PHP, and dynamically change your <img> element to point to the correct location. This is best if you are already generating your pages dynamically.\nYou can always point to a single URL, which is a PHP file that determines which image to show and then performs a 302 redirect to it. This is better if you have static pages that you don't want to incur the overhead of dynamic generation for.\nDon't make the image URL itself serve different images. You'll screw up your cache hit ratio for no good reason and incur unnecessary overhead on what should be a static resource.\nMartin, \"rotate\" in this context means \"change on a regular basis\", not \"turn around an axis\".\n", "It depends (TM) on what exactly you want to achieve. Just want to display a \"random\" image? Maybe this javascript snippet will get you started:\nvar date = new Date();\nvar day = date.getDate(); // that's the day of month; use getDay() for day of week\n\ndocument.getElementById('someImage').src = '/images/foo/bar_' + (day % 10) + '.gif';\n\n", "I assume by \"rotate image\" you mean \"change the image in use\" and not \"rotational transformation about an axis\" -- a simple way is to have a hash table that maps day modulo X to an image name.\n$imgs = array(\"kitten.jpg\", \"puppy.gif\",\"Bob_Dole.png\"); \n$day_index = 365 * date(\"Y\") + date(\"Z\");\n\n...\n\n<img src=\"<? echo $imgs[$day_index % count($imgs)]; ?>\" />\n\n(sorry if I got the syntax wrong, I don't really know PHP)\n", "Set up a directory of images. \nSelect seven images and name them 0.jpg, 1.jpg, 2.jpg, 3.jpg, 4.jpg, 5.jpg, 6.jpg, \nUsing mootools Javascript framework with an image tag in HTML with id \"rotatingimage\":\nvar d=new Date();\nvar utc = d.getTime() + (d.getTimezoneOffset() * 60000);\nvar offset = -10; // set this to your locale's UTC offset\nvar desiredTime = utc + (3600000*offset);\nvar dd = new Date(desiredTime); \n$('rotatingimage').setProperty('src', dd.getDay() + '.jpg'); \n\n", "Here's a solution which will choose a random image per day from a directory, which might be easier than some other solutions posted here, since all you have to do to get something into the rotation is upload it, rather than edit an array, or name the files in arbitrary ways.\nfunction getImageOfTheDay() {\n $myDir = \"path/to/images/\";\n\n // get a unique value for the day/year.\n // 15th Jan 2008 -> 10152008. 3 Feb -> 10342008, 31 Dec -> 13662008\n $day = sprintf(\"1%03d%d\", date('z'), date('Y'));\n\n // you could of course get gifs/pngs as well.\n // I'm just doing this for simplicity\n $jpgs = glob($myDir . \"*.jpg\");\n mt_srand($day);\n return $jpgs[mt_rand(0, count($jpgs) - 1)];\n}\n\nThe only thing is that there's a possibility of seeing the same image two days in a row, since this picks it randomly. If you wanted to get them sequentially, in alphabetical order, perhaps, then use this. (only the last two lines are changed)\nfunction getImageOfTheDay() {\n $myDir = \"path/to/images/\";\n $day = sprintf(\"1%03d%d\", date('z'), date('Y'));\n $jpgs = glob($myDir . \"*.jpg\");\n return $jpgs[$day % count($jpgs)];\n}\n\n", "How to rotate the images has been described in a number of ways. How to detect when to trigger the rotate depends on what you are doing (please clarify and people can provide a better answer).\nOptions include:\n\nTrigger the rotate on 'first access after midnight'. 
Assumes some sort of initiating event (eg user access)\nUse the OS scheduling capabilities to trigger the rotate (cron on *nix, at/task scheduler on Windows)\nWrite code to check time & rotate\n\nOption 3 has the risk that a poorly coded solution could be overly resource intensive.\n" ]
[ 5, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "jquery", "php" ]
stackoverflow_0000066422_javascript_jquery_php.txt
Q: How to actually use a source control system? So I get that most of you are frowning at me for not currently using any source control. I want to, I really do, now that I've spent some time reading the questions / answers here. I am a hobby programmer and really don't do much more than tinker, but I've been bitten a couple of times now not having the 'time machine' handy... I still have to decide which product I'll go with, but that's not relevant to this question. I'm really struggling with the flow of files under source control, so much so I'm not even sure how to pose the question sensibly. Currently I have a directory hierarchy where all my PHP files live in a Linux Environment. I edit them there and can hit refresh on my browser to see what happens. As I understand it, my files now live in a different place. When I want to edit, I check it out and edit away. But what is my substitute for F5? How do I test it? Do I have to check it back in, then hit F5? I admit to a good bit of trial and error in my work. I suspect I'm going to get tired of checking in and out real quick for the frequent small changes I tend to make. I have to be missing something, right? Can anyone step me through where everything lives and how I test along the way, while keeping true to the goal of having a 'time machine' handy? A: Eric Sink has a great series of posts on source control basics. His company (Sourcegear) makes a source control tool called Vault, but the how-to is generally pretty system agnostic. A: Don't edit your code on production. Create a development environment, with the appropriate services (apache w/mod_php). The application directory within your dev environment is where you do your work. Put your current production app in there. Commit this directory to the source control tool. (now you have populated source control with your application) Make changes in your new development environment, hitting F5 when you want to see/test what you've changed. Merge/Commit your changes to source control. A: Actually, your files, while stored in a source repository (big word for another place on your hard drive, or a hard drive somewhere else), can also exist on your local machine, too, just where they exist now. So, all files that aren't checked out would be marked as "read only", if you are using VSS (not sure about SVN, CVS, etc). So, you could still run your website by hitting "F5" and it will reload the files where they currently are. If you check one out and are editing it, it becomes NOT read only, and you can change it. Regardless, the web server that you are running will load readonly/writable files with the same effect. A: You still have all the files on your hard drive, ready for F5! The difference is that you can "checkpoint" your files into the repository. Your daily life doesn't have to change at all. A: You can do a "checkout" to the same directory where you currently work so that doesn't have to change. Basically your working directory doesn't need to change. A: This is a wildly open ended question because how you use a SCM depends heavily on which SCM you choose. A distributed SCM like git works very differently from a centralized one like Subversion. svn is way easier to digest for the "new user", but git can be a little more powerful and improve your workflow. 
Subversion also has really great docs and tool support (like trac), and an online book that you should read: http://svnbook.red-bean.com/ It will cover the basics of source control management which will help you in some way no matter which SCM you ultimately choose, so I recommend skimming the first few chapters. edit: Let me point out why people are frowning on you, by the way: SCM is more than simply a "backup of your code". Having a "time machine" is nothing like an SCM. With an SCM you can go back in your change history and see what you actually changed and when, which is something you'll never get with blobs of code. I'm sure you've asked yourself on more than one occasion: "how did this code get here?" or "I thought I fixed that bug"-- if you did, that's why you need SCM. A: You don't "have" to change your workflow in a drastic way. You could, and in some cases you should, but that's not something version control dictates. You just use the files as you would normally. Only under version control, once you reach a certain state of "finished" or at least "working" (solved an issue in your issue tracker, finished a certain method, tweaked something, etc), you check it in. If you have more than one developer working on your codebase, be sure to update regularly, so you're always working against a recent (merged) version of the code. A: Here is the general workflow that you'd use with a centralized source control system like CVS or Subversion: At first you import your current project into the so-called repository, a versioned storage of all your files. Take care only to import hand-generated files (source, data files, makefiles, project files). Generated files (object files, executables, generated documentation) should not be put into the repository. Then you have to check out your working copy. As the name implies, this is where you will do all your local edits, where you will compile and where you will point your test server at. It's basically the replacement to where you worked at before. You only need to do these steps once per project (although you could check out multiple working copies, of course.) This is the basic work cycle: At first you check out all changes made in the repository into your local working copy. When working in a team, this would bring in any changes other team members made since your last check out. Then you do your work. When you've finished with a set of work, you should check out the current version again and resolve possible conflicts due to changes by other team members. (In a disciplined team, this is usually not a problem.) Test, and when everything works as expected you commit (check in) your changes. Then you can continue working, and once you've finished again, check out, resolve conflicts, and check in again. Please note that you should only commit changes that were tested and work. How often you check in is a matter of taste, but a general rule says that you should commit your changes at least once at the end of your day. Personally, I commit my changes much more often than that, basically whenever I made a set of related changes that pass all tests. A: Great question. With source control you can still do your "F5" refresh process. But after each edit (or a few minor edits) you want to check your code in so you have a copy backed up. Depending on the source control system, you don't have to explicitly check out the file each time. Just editing the file will check it out. 
I've written a visual guide to source control that many people have found useful when grokking the basics. A: I would recommend a distributed version control system (mercurial, git, bazaar, darcs) rather than a centralized version control system (cvs, svn). They're much easier to set up and work with. Try mercurial (which is the VCS that I used to understand how version control works) and then if you like you can even move to git. There's a really nice introductory tutorial on Mercurial's homepage: Understanding Mercurial. That will introduce you to the basic concepts of VCS and how things work. It's really great. After that I suggest you move on to the Mercurial tutorials: Mercurial tutorial page, which will teach you how to actually use Mercurial. Finally, you have a free ebook that is a really great reference on how to use Mercurial: Distributed Revision Control with Mercurial If you're feeling more adventurous and want to start off with Git straight away, then this free ebook is a great place to start: Git Magic (Very easy read) In the end, no matter what VCS tool you choose, what you'll end up doing is the following: Have a repository that you don't manually edit; it is only for the VCS Have a working directory, where you make your changes as usual. Change what you like, press F5 as many times as you wish. When you like what you've done and think you would like to save the project the way it is at that very moment (much like you would do when you're, for example, writing something in Word) you can then commit your changes to the repository. If you ever need to go back to a certain state in your project you now have the power to do so. And that's pretty much it. A: If you are using Subversion, you check out your files once. Then, whenever you have made big changes (or are going to lunch or whatever), you commit them to the server. That way you can keep your old work flow by pressing F5, but every time you commit you save a copy of all the files in their current state in your SVN-repository. A: Depending on the source control system, 'checkout' may mean different things. In the SVN world, it just means retrieving (could be an update, could be a new file) the latest copy from the repository. In the source-safe world, that generally means updating the existing file and locking it. The text below uses the SVN meaning: Using PHP, what you want to do is check out your entire project/site to a working folder on a test apache site. You should have the repository set up so this can happen with a single checkout, including any necessary sub folders. You check out your project to set this up one time. Now you can make your changes and hit F5 to refresh as normal. When you're happy with a set of changes to support a particular fix or feature, you can commit them as a unit (with appropriate comments, of course). This puts the latest version in the repository. Checking out/committing one file at a time would be a hassle. A: Depends on the source control system you use. For example, for subversion and cvs your files can reside in a remote location, but you always check out your own copy of them locally. This local copy (often referred to as the working copy) is just a set of regular files on the filesystem with some meta-data to let you upload your changes back to the server. If you are using Subversion here's a good tutorial. A: A source control system is generally a storage place for your files and their history and usually separate from the files you're currently working on. 
It depends a bit on the type of version control system but suppose you're using something CVS-like (like subversion), then all your files will live in two (or more) places. You have the files in your local directory, the so-called "working copy", and one in the repository, which can be located in another local folder, or on another machine, usually accessed over the network. Usually, after the first import of your files into the repository you check them out under a working folder where you continue working on them. I assume that would be the folder where your PHP files now live. Now what happens when you've checked out a copy and you made some non-trivial changes that you want to "save"? You simply commit those changes in your working copy to the version control system. Now you have a history of your changes. Should you at any point wish to go back to the version at which you committed those changes, then you can simply revert your working copy to an older revision (the name given to the set of changes that you commit at once). Note that this is all very CVS/SVN-specific, as GIT would work slightly differently. I'd recommend starting with subversion and reading the first few chapters of the very excellent SVN Book to get you started. A: This is all very subjective depending on the source control solution that you decide to use. One that you will definitely want to look into is Subversion. You mentioned that you're doing PHP, but are you doing it in a Linux environment or Windows? It's not really important, but what I typically did when I worked in a PHP environment was to have a production branch and a development branch. This allowed me to configure a cron job (a scheduled task in Windows) for automatically pulling from the production-ready branch for the production server, while pulling from the development branch for my dev server. Once you decide on a tool, you should really spend some time learning how it works. The concepts of checking in and checking out don't apply to all source control solutions, for example. Either way, I'd highly recommend that you pick one that permits branching. This article goes over a great (in my opinion) source control model to follow in a production environment. Of course, I state all this having not "tinkered" in years. I've been doing professional development for some time and my techniques might be overkill for somebody in your position. Not to say that there's anything wrong with that, however. A: I just want to add that the system that I think was easiest to set up and work with was Mercurial. If you work alone and not in a team you just initialize it in your normal work folder and then go on from there. The normal flow is to edit any file using your favourite editor and then do a checkin (commit). I haven't tried GIT but I assume it is very similar. Monotone was a little bit harder to get started with. These are all distributed source control systems. A: It sounds like you're asking about how to use source control to manage releases. 
Here's some general guidance that's not specific to websites: Use a local copy for developing changes Compile (if applicable) and test your changes before checking in Run automated builds and tests as often as possible (at least daily) Version your daily builds (have some way of specifying the exact bits of code corresponding to a particular build and test run) If possible, use separate branches for major releases (or have a development and a release branch) When necessary, stabilize your code base (define a set of tests such that passing all of those tests means you are confident enough in the quality of your product to release it, then drive toward 0 test failures, i.e. ban any checkins to the release branch other than fixes for the outstanding issues) When you have a build which has the features you want and has passed all of the necessary tests, deploy it. If you have a small team, a stable product, a fast build, and efficient, high-quality tests then this entire process might be 100% automated and could take place in minutes. A: I recommend Subversion. Setting up a repository and using it is actually fairly trivial, even from the command line. Here's how it would go: if you haven't set up your repo (repository) 1) Make sure you've got Subversion installed on your server $ which svn /usr/bin/svn which is a tool that tells you the path to another tool. If it returns nothing, that tool is not installed on your system 1b) If not, get it $ apt-get install subversion apt-get is a tool that installs other tools onto your system If that's not the right name for subversion in apt, try this $ apt-cache search subversion or this $ apt-cache search svn Find the right package name and install it using apt-get install packagename 2) Create a new repository on your server $ cd /path/to/directory/of/repositories $ svnadmin create my_repository svnadmin create reponame creates a new repository in the present working directory (pwd) with the name reponame You are officially done creating your repository if you have an existing repo, or have finished setting it up 1) Make sure you've got Subversion installed on your local machine per the instructions above 2) Check out the repository to your local machine $ cd /repos/on/your/local/machine $ svn co svn+ssh://www.myserver.com/path/to/directory/of/repositories/my_repository svn co is the command you use to check out a repository 3) Create your initial directory structure (optional) $ cd /repos/on/your/local/machine $ cd my_repository $ svn mkdir branches $ svn mkdir tags $ svn mkdir trunk $ svn commit -m "Initial structure" svn mkdir runs a regular mkdir and creates a directory in the present working directory with the name you supply after typing svn mkdir and then adds it to the repository. svn commit -m "" sends your changes to the repository and updates it. Whatever you place in the quotes after -m is the comment for this commit (make it count!). The "working copy" of your code would go in the trunk directory. branches is used for working on individual projects outside of trunk; each directory in branches is a copy of trunk for a different sub project. tags is used for releases. I suggest just focusing on trunk for a while and getting used to Subversion. working with your repo 1) Add code to your repository $ cd /repos/on/your/local/machine $ svn add my_new_file.ext $ svn add some/new/directory $ svn add some/directory/* $ svn add some/directory/*.ext The second to last line adds every file in that directory. 
The last line adds every file with the extension .ext. 2) Check the status of your repository $ cd /repos/on/your/local/machine $ svn status That will tell you if there are any new files, and updated files, and files with conflicts (differences between your local version and the version on the server), etc. 3) Update your local copy of your repository $ cd /repos/on/your/local/machine $ svn up Updating pulls any new changes from the server you don't already have svn up does care what directory you're in. If you want to update your entire repository, make sure you're in the root directory of the repository (above trunk) That's all you really need to know to get started. For more information I recommend you check out the Subversion Book.
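To tie the pieces above together, here is a minimal sketch of the daily update/status/commit cycle as a script, using Python's standard subprocess module. It assumes svn is on your PATH and that the script runs from the root of a working copy; the commit message is a placeholder, and nothing here is specific to any particular repository layout.

import subprocess

def svn(*args):
    # Run one svn subcommand from the current working copy and return its output.
    # check=True turns a failing svn command into a Python exception instead of
    # silently continuing with a bad working copy.
    result = subprocess.run(["svn"] + list(args), capture_output=True, text=True, check=True)
    return result.stdout

def daily_cycle(message):
    print(svn("up"))          # 1) pull down everyone else's changes first
    status = svn("status")    # 2) list new/modified/conflicted local files
    print(status)
    if status.strip():        # 3) commit only when there is something to record
        print(svn("commit", "-m", message))

daily_cycle("End-of-day checkpoint")  # hypothetical commit message

Note that capture_output requires Python 3.7 or newer; on older versions, passing stdout=subprocess.PIPE and stderr=subprocess.PIPE does the same job.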
How to actually use a source control system?
So I get that most of you are frowning at me for not currently using any source control. I want to, I really do, now that I've spent some time reading the questions / answers here. I am a hobby programmer and really don't do much more than tinker, but I've been bitten a couple of times now not having the 'time machine' handy... I still have to decide which product I'll go with, but that's not relevant to this question. I'm really struggling with the flow of files under source control, so much so I'm not even sure how to pose the question sensibly. Currently I have a directory hierarchy where all my PHP files live in a Linux Environment. I edit them there and can hit refresh on my browser to see what happens. As I understand it, my files now live in a different place. When I want to edit, I check it out and edit away. But what is my substitute for F5? How do I test it? Do I have to check it back in, then hit F5? I admit to a good bit of trial and error in my work. I suspect I'm going to get tired of checking in and out real quick for the frequent small changes I tend to make. I have to be missing something, right? Can anyone step me through where everything lives and how I test along the way, while keeping true to the goal of having a 'time machine' handy?
[ "Eric Sink has a great series of posts on source control basics. His company (Sourcegear) makes a source control tool called Vault, but the how-to is generally pretty system agnostic.\n", "\nDon't edit your code on production.\nCreate a development environment, with the appropriate services (apache w/mod_php).\nThe application directory within your dev environment is where you do your work.\nPut your current production app in there.\nCommit this directory to the source control tool. (now you have populated source control with your application)\nMake changes in your new development environment, hitting F5 when you want to see/test what you've changed.\nMerge/Commit your changes to source control.\n\n", "Actually, your files, while stored in a source repository (big word for another place on your hard drive, or a hard drive somewhere else), can also exist on your local machine, too, just where they exist now.\nSo, all files that aren't checked out would be marked as \"read only\", if you are using VSS (not sure about SVN, CVS, etc). So, you could still run your website by hitting \"F5\" and it will reload the files where they currently are. If you check one out and are editing it, it becomes NOT read only, and you can change it.\nRegardless, the web server that you are running will load readonly/writable files with the same effect.\n", "You still have all the files on your hard drive, ready for F5!\nThe difference is that you can \"checkpoint\" your files into the repository. Your daily life doesn't have to change at all.\n", "You can do a \"checkout\" to the same directory where you currently work so that doesn't have to change. Basically your working directory doesn't need to change.\n", "This is a wildly open ended question because how you use a SCM depends heavily on which SCM you choose. A distributed SCM like git works very differently from a centralized one like Subversion.\nsvn is way easier to digest for the \"new user\", but git can be a little more powerful and improve your workflow. Subversion also has really great docs and tool support (like trac), and an online book that you should read:\nhttp://svnbook.red-bean.com/\nIt will cover the basics of source control management which will help you in some way no matter which SCM you ultimately choose, so I recommend skimming the first few chapters.\nedit: Let me point out why people are frowning on you, by the way: SCM is more than simply a \"backup of your code\". Having \"timemachine\" is nothing like an SCM. With an SCM you can go back in your change history and see what you actually changed and when which is something you'll never get with blobs of code. I'm sure you've asked yourself on more than one occasion: \"how did this code get here?\" or \"I thought I fixed that bug\"-- if you did, thats why you need SCM.\n", "You don't \"have\" to change your workflow in a drastic way. You could, and in some cases you should, but that's not something version control dictates.\nYou just use the files as you would normally. Only under version control, once you reach a certain state of \"finished\" or at least \"working\" (solved an issue in your issue tracker, finished a certain method, tweaked something, etc), you check it in. 
\nIf you have more than one developer working on your codebase, be sure to update regularly, so you're always working against a recent (merged) version of the code.\n", "Here is the general workflow that you'd use with a non-centralized source control system like CVS or Subversion: At first you import your current project into the so-called repository, a versioned storage of all your files. Take care only to import hand-generated files (source, data files, makefiles, project files). Generated files (object files, executables, generated documentation) should not be put into the repository.\nThen you have to check out your working copy. As the name implies, this is where you will do all your local edits, where you will compile and where you will point your test server at. It's basically the replacement to where you worked at before. You only need to do these steps once per project (although you could check out multiple working copies, of course.)\nThis is the basic work cycle: At first you check out all changes made in the repository into your local working copy. When working in a team, this would bring in any changes other team members made since your last check out. Then you do your work. When you've finished with a set of work, you should check out the current version again and resolve possible conflicts due to changes by other team members. (In a disciplined team, this is usually not a problem.) Test, and when everything works as expected you commit (check in) your changes. Then you can continue working, and once you've finished again, check out, resolve conflicts, and check in again. Please note that you should only commit changes that were tested and work. How often you check in is a matter of taste, but a general rule says that you should commit your changes at least once at the end of your day. Personally, I commit my changes much more often than that, basically whenever I made a set of related changes that pass all tests.\n", "Great question. With source control you can still do your \"F5\" refresh process. But after each edit (or a few minor edits) you want to check your code in so you have a copy backed up.\nDepending on the source control system, you don't have to explicitly check out the file each time. Just editing the file will check it out. I've written a visual guide to source control that many people have found useful when grokking the basics.\n", "I would recommend a distributed version control system (mercurial, git, bazaar, darcs) rather than a centralized version control system (cvs, svn). They're much easier to setup and work with.\nTry mercurial (which is the VCS that I used to understand how version control works) and then if you like you can even move to git.\nThere's a really nice introductory tutorial on Mercurial's homepage: Understanding Mercurial. That will introduce you to the basic concepts on VCS and how things work. It's really great. After that I suggest you move on to the Mercurial tutorials: Mercurial tutorial page, which will teach you how to actually use Mercurial. 
Finally, you have a free ebook that is a really great reference on how to use Mercurial: Distributed Revision Control with Mercurial\nIf you're feeling more adventurous and want to start off with Git straight away, then this free ebook is a great place to start: Git Magic (Very easy read)\nIn the end, no matter what VCS tool you choose, what you'll end up doing is the following:\n\nHave a repository that you don't manually edit, it only for the VCS\nHave a working directory, where you make your changes as usual.\nChange what you like, press F5 as many times as you wish. When you like what you've done and think you would like to save the project the way it is at that very moment (much like you would do when you're, for example, writing something in Word) you can then commit your changes to the repository.\nIf you ever need to go back to a certain state in your project you now have the power to do so.\n\nAnd that's pretty much it.\n", "If you are using Subversion, you check out your files once . Then, whenever you have made big changes (or are going to lunch or whatever), you commit them to the server. That way you can keep your old work flow by pressing F5, but every time you commit you save a copy of all the files in their current state in your SVN-repository.\n", "Depending on the source control system, 'checkout' may mean different things. In the SVN world, it just means retrieving (could be an update, could be a new file) the latest copy from the repository. In the source-safe world, that generally means updating the existing file and locking it. The text below uses the SVN meaning:\nUsing PHP, what you want to do is checkout your entire project/site to a working folder on a test apache site. You should have the repository set up so this can happen with a single checkout, including any necessary sub folders. You checkout your project to set this up one time. \nNow you can make your changes and hit F5 to refresh as normal. When you're happy with a set of changes to support a particular fix or feature, you can commit in as a unit (with appropriate comments, of course). This puts the latest version in the repository.\nChecking out/committing one file at a time would be a hassle.\n", "Depends on the source control system you use. For example, for subversion and cvs your files can reside in a remote location, but you always check out your own copy of them locally. This local copy (often referred to as the working copy) are just regular files on the filesystem with some meta-data to let you upload your changes back to the server.\nIf you are using Subversion here's a good tutorial.\n", "A source control system is generally a storage place for your files and their history and usually separate from the files you're currently working on. It depends a bit on the type of version control system but suppose you're using something CVS-like (like subversion), then all your files will live in two (or more) places. You have the files in your local directory, the so called \"working copy\" and one in the repository, which can be located in another local folder, or on another machine, usually accessed over the network. Usually, after the first import of your files into the repository you check them out under a working folder where you continue working on them. I assume that would be the folder where your PHP files now live.\nNow what happens when you've checked out a copy and you made some non-trivial changes that you want to \"save\"? 
You simply commit those changes in your working copy to the version control system. Now you have a history of your changes. Should you at any point wish to go back to the version at which you committed those changes, then you can simply revert your working copy to an older revision (the name given to the set of changes that you commit at once).\nNote that this is all very CVS/SVN-specific, as GIT would work slightly different. I'd recommend starting with subversion and reading the first few chapters of the very excellent SVN Book to get you started.\n", "This is all very subjective depending on the the source control solution that you decide to use. One that you will definitely want to look into is Subversion.\nYou mentioned that you're doing PHP, but are you doing it in a Linux environment or Windows? It's not really important, but what I typically did when I worked in a PHP environment was to have a production branch and a development branch. This allowed me to configure a cron job (a scheduled task in Windows) for automatically pulling from the production-ready branch for the production server, while pulling from the development branch for my dev server.\nOnce you decide on a tool, you should really spend some time learning how it works. The concepts of checking in and checking out don't apply to all source control solutions, for example. Either way, I'd highly recommend that you pick one that permits branching. This article goes over a great (in my opinion) source control model to follow in a production environment.\nOf course, I state all this having not \"tinkered\" in years. I've been doing professional development for some time and my techniques might be overkill for somebody in your position. Not to say that there's anything wrong with that, however.\n", "I just want to add that the system that I think was easiest to set up and work with was Mercurial. If you work alone and not in a team you just initialize it in your normal work folder and then go on from there. The normal flow is to edit any file using your favourite editor and then to a checkin (commit).\nI havn't tried GIT but I assume it is very similar. Monotone was a little bit harder to get started with. These are all distributed source control systems.\n", "It sounds like you're asking about how to use source control to manage releases.\nHere's some general guidance that's not specific to websites:\n\nUse a local copy for developing changes\nCompile (if applicable) and test your changes before checking in\nRun automated builds and tests as often as possible (at least daily)\nVersion your daily builds (have some way of specifying the exact bits of code corresponding to a particular build and test run)\nIf possible, use separate branches for major releases (or have a development and a release branch)\nWhen necessary, stabilize your code base (define a set of tests such that passing all of those tests means you are confident enough in the quality of your product to release it, then drive toward 0 test failures, i.e. ban any checkins to the release branch other than fixes for the outstanding issues)\nWhen you have a build which has the features you want and has passed all of the necessary tests, deploy it.\n\nIf you have a small team, a stable product, a fast build, and efficient, high-quality tests then this entire process might be 100% automated and could take place in minutes.\n", "I recommend Subversion. Setting up a repository and using it is actually fairly trivial, even from the command line. 
Here's how it would go:\nif you haven't setup your repo (repository)\n1) Make sure you've got Subversion installed on your server\n$ which svn\n/usr/bin/svn\n\nwhich is a tool that tells you the path to another tool. if it returns nothing that tool is not installed on your system\n1b) If not, get it\n$ apt-get install subversion\n\napt-get is a tool that installs other tools onto your system\nIf that's not the right name for subversion in apt, try this\n$ apt-cache search subversion\n\nor this\n$ apt-cache search svn\n\nFind the right package name and install it using apt-get install packagename\n2) Create a new repository on your server\n$ cd /path/to/directory/of/repositories\n$ svnadmin create my_repository\n\nsvnadmin create reponame creates a new repository in the present working directory (pwd) with the name reponame\nYou are officially done creating your repository\n\nif you have an existing repo, or have finished setting it up\n1) Make sure you've got Subversion installed on your local machine per the instructions above\n2) Check out the repository to your local machine\n$ cd /repos/on/your/local/machine\n$ svn co svn+ssh://www.myserver.com/path/to/directory/of/repositories/my_repository\n\nsvn co is the command you use to check out a repository\n3) Create your initial directory structure (optional)\n$ cd /repos/on/your/local/machine\n$ cd my_repository\n$ svn mkdir branches\n$ svn mkdir tags\n$ svn mkdir trunk\n$ svn commit -m \"Initial structure\"\n\nsvn mkdir runs a regular mkdir and creates a directory in the present working directory with the name you supply after typing svn mkdir and then adds it to the repository.\nsvn commit -m \"\" sends your changes to the repository and updates it. Whatever you place in the quotes after -m is the comment for this commit (make it count!).\nThe \"working copy\" of your code would go in the trunk directory. branches is used for working on individual projects outside of trunk; each directory in branches is a copy of trunk for a different sub project. tags is used more releases. I suggest just focusing on trunk for a while and getting used to Subversion.\n\nworking with your repo\n1) Add code to your repository\n$ cd /repos/on/your/local/machine\n$ svn add my_new_file.ext\n$ svn add some/new/directory\n$ svn add some/directory/*\n$ svn add some/directory/*.ext\n\nThe second to last line adds every file in that directory. The last line adds every file with the extension .ext.\n2) Check the status of your repository\n$ cd /repos/on/your/local/machine\n$ svn status\n\nThat will tell you if there are any new files, and updated files, and files with conflicts (differences between your local version and the version on the server), etc.\n3) Update your local copy of your repository\n$ cd /repos/on/your/local/machine\n$ svn up\n\nUpdating pulls any new changes from the server you don't already have\nsvn up does care what directory you're in. If you want to update your entire repository, makre sure you're in the root directory of the repository (above trunk)\n\nThat's all you really need to know to get started. For more information I recommend you check out the Subversion Book.\n" ]
[ 10, 5, 2, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "version_control" ]
stackoverflow_0000067069_version_control.txt
Q: What to use for login ID? We are in the early design stages of a major rewrite of our product. Right now our customers are mostly businesses. We manage accounts. User names for an account are each on their own namespace but it means that we can't move assets between servers. We want to move to a single namespace. But that brings the problem of unique user names. So what's the best idea? Email address (w/verification) ? Unique alpha-numeric string ("johnsmith9234")? Should we look at OpenID? A: EMAIL ADDRESS Rational Users don't change emails very often Removes the step of asking for username and email address, which you'll need anyway Users don't often forget their email address (see number one) Email will be unique unless the user already registered for the site, in which case forward them to a forgot your password screen Almost everyone is using email as the primary login for access to a website, this means the rate of adoption shouldn't be affected by the fact that you're asking for an email address Update After registration, be sure to ask the user to create some kind of username, don't litter a public site with their email address! Also, another benefit of using an email address as a login: you won't need any other information (like password / password confirm), just send them a temp password through the mail, or forgo passwords altogether and send them a one-use URL to their email address every time they'd like to login (see: mugshot.org) A: OpenID is very slick, and something you should seriously consider as it basically removes the requirement to save local usernames and passwords and worry about authentication. A lot of sites nowadays are using both OpenID and their own, giving users the option. If you do decide to roll your own, I'd recommend using the email address. Be careful, though, if you are creating something that groups users by an account (say, a company that has several users). In this case, the email address might be used more than once (if they do work for more than one company, for example), and you should allow that. HTH! A: I like OpenID, but I'd still go with the email address, unless your user community is very technically savvy. It's still much easier for most people to understand and remember. A: If you use an email address for ID, don't require that it be verified. I learned the hard way about this when one day suddenly the number of signups at my site drastically decreased. It turns out that the entire range of IP addresses including my site's IP was blacklisted. It took a long time to resolve it. In other cases, I have seen Gmail marking very legitimate emails as spam, and that can cause trouble too. It's good to verify the email address, but don't make it block signups. A: Right now our customers are mostly businesses. People seem to be missing that line. If it's for a business, requiring them to login via OpenID really isn't very practical. They'd either have to use an external OpenID provider, or their poor tech people would have to setup and configure a company OpenID. If this were "should StackOverflow require OpenID for login" or "Should my blog-comment-system allow you to identify yourself via OpenID", my answer would be "absolutely!", but in this case, I don't think OpenID would be a good fit. A: I personally would say Email w/ Verification, OpenId is a great idea but I find that finding a provider that your already with is a pain, I only had an openId for here cause just 2 days before beta I decided to start a blog on blogspot. 
But everyone on the internet has an email address, especially when dealing with businesses, people aren't very apt to use their personal blog or whatnot for a business login. A: I think that OpenID is definitely worth looking at. Besides giving you a framework in which to provide a unified id for customers, it can also provide large businesses with the ability to manage their own logins and provide a common login across all products that they use, including your own. This isn't that large of a benefit now when OpenId is still relatively rare, but as more products begin to use it, I suspect that the ability to use a common company OpenId login for each employee could become a good selling point. Since you're mostly catering to businesses, I don't think that it's all that unreasonable to offer to host the OpenId accounts yourself. I just think that the extra flexibility will benefit your customers. A: If most of your customers are mostly businesses then I think that using anything other than email creates problems for your customers. Most people are comfortable with email address login and since they are business customers they will likely want to use their work email rather than a personal account. OpenID creates a situation where there is a third party involved and many businesses don't like a third party involved. A: If you are looking at OpenID you should check out http://eaut.org/ and http://emailtoid.net. Basically you can accept email addresses for a login and behind the scenes translate them to OpenID without the user having to know anything. It's pretty slick stuff... A: OpenID seems to be a very good alternative to writing your own user management/authentication piece. I'm seeing more and more sites using OpenID these days, so the barrier to entry for your users should be relatively low.
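A couple of the answers above suggest skipping passwords entirely and e-mailing a one-use login URL instead. As a rough illustration of the token half of that idea, here is a sketch in Python (3.6+ for the secrets module); the in-memory dict, the example.com URL, and the 15-minute lifetime are all invented for the example, and a real application would persist tokens in its database.

import secrets
import time

PENDING = {}            # token -> (email, expiry); stands in for a database table
TOKEN_TTL = 15 * 60     # seconds a login link stays valid (arbitrary choice)

def issue_login_url(email):
    # token_urlsafe gives a cryptographically random, URL-safe string.
    token = secrets.token_urlsafe(32)
    PENDING[token] = (email, time.time() + TOKEN_TTL)
    return "https://example.com/login?token=" + token

def redeem(token):
    # pop makes the token single-use whether or not it turns out to be valid.
    email, expires = PENDING.pop(token, (None, 0.0))
    if email is not None and time.time() < expires:
        return email    # the caller would now start a session for this address
    return None

The token itself is the only secret here; it is kept in the clear only because it expires quickly and is single-use, and hashing it before storage would be the more cautious choice.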
What to use for login ID?
We are in the early design stages of a major rewrite of our product. Right now our customers are mostly businesses. We manage accounts. User names for an account are each on their own namespace but it means that we can't move assets between servers. We want to move to a single namespace. But that brings the problem of unique user names. So what's the best idea? Email address (w/verification) ? Unique alpha-numeric string ("johnsmith9234")? Should we look at OpenID?
[ "EMAIL ADDRESS\nRational\n\nUsers don't change emails very often\nRemoves the step of asking for username and email address, which you'll need anyway\nUsers don't often forget their email address (see number one)\nEmail will be unique unless the user already registered for the site, in which case forward them to a forgot your password screen\nAlmost everyone is using email as the primary login for access to a website, this means the rate of adoption shouldn't be affected by the fact that you're asking for an email address\n\n\nUpdate\nAfter registration, be sure to ask the user to create some kind of username, don't litter a public site with their email address! Also, another benefit of using an email address as a login: you won't need any other information (like password / password confirm), just send them a temp password through the mail, or forgo passwords altogether and send them a one-use URL to their email address every time they'd like to login (see: mugshot.org)\n", "OpenID is very slick, and something you should seriously consider as it basically removes the requirement to save local usernames and passwords and worry about authentication.\nA lot of sites nowadays are using both OpenID and their own, giving users the option.\nIf you do decide to roll your own, I'd recommend using the email address. Be careful, though, if you are creating something that groups users by an account (say, a company that has several users). In this case, the email address might be used more than once (if they do work for more than one company, for example), and you should allow that.\nHTH!\n", "I like OpenID, but I'd still go with the email address, unless your user community is very technically savvy. It's still much easier for most people to understand and remember.\n", "If you use an email address for ID, don't require that it be verified. I learned the hard way about this when one day suddenly the number of signups at my site drastically decreased. It turns out that the entire range of IP addresses including my site's IP was blacklisted. It took a long time to resolve it. In other cases, I have seen Gmail marking very legitimate emails as spam, and that can cause trouble too.\nIt's good to verify the email address, but don't make it block signups.\n", "\nRight now our customers are mostly businesses.\n\nPeople seem to be missing that line. If it's for a business, requiring them to login via OpenID really isn't very practical. They'd either have to use an external OpenID provider, or their poor tech people would have to setup and configure a company OpenID.\nIf this were \"should StackOverflow require OpenID for login\" or \"Should my blog-comment-system allow you to identify yourself via OpenID\", my answer would be \"absolutely!\", but in this case, I don't think OpenID would be a good fit.\n", "I personally would say Email w/ Verification, OpenId is a great idea but I find that finding a provider that your already with is a pain, I only had an openId for here cause just 2 days before beta I decided to start a blog on blogspot. But everyone on the internet has an email address, especially when dealing with businesses, people aren't very opt to using there personal blog or whatnot for a business login.\n", "I think that OpenID is definitely worth looking at. Besides giving you a framework in which to provide a unified id for customers, it can also provide large businesses with the ability to manage their own logins and provide a common login across all products that they use, including your own. 
This isn't that large of a benefit now when OpenId is still relatively rare, but as more products begin to use it, I suspect that the ability to use a common company OpenId login for each employee could become a good selling point.\nSince you're mostly catering to businesses, I don't think that it's all that unreasonable to offer to host the OpenId accounts yourself. I just think that the extra flexibility will benefit your customers.\n", "If most of your customers are mostly businesses then I think that using anything other than email creates problems for your customers. Most people are comfortable with email address login and since they are business customers they will likely want to use their work email rather than a personal account. OpenID creates a situation where there is a third party involved and many businesses don't like a third party involved.\n", "If you are looking at OpenID you should check out http://eaut.org/ and http://emailtoid.net. Basically you can accept email addresses for a login and behind the scenes translate them to OpenID without the user having to know anything. It's pretty slick stuff...\n", "OpenID seems to be a very good alternative to writing your own user management/authentication piece. I'm seeing more and more sites using OpenID these days, so the barrier to entry for your users should be relatively low.\n" ]
[ 42, 6, 4, 2, 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "account", "authentication", "web_applications" ]
stackoverflow_0000006080_account_authentication_web_applications.txt
Q: What level of complexity requires a framework? At what level of complexity is it mandatory to switch to an existing framework for web development? What measurement of complexity is practical for web development? Code length? Feature list? Database Size? A: If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why. I'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project. Unless it's a one page static HTML project. There is no mandatory limit however. A: I don't think there is a level of complexity that necessitates a framework. For me whenever I am writing a dynamic site I immediately consider a framework, and if it will save me time, I use it(it almost always does, and I almost always do). A: Consider that the question may be faulty. Many of the most complex websites don't use any popular, preexisting, framework. Google has their own web server and their own custom way of doing things, as does Amazon, and probably lots of other sites. If a framework makes your task easier, or provides added value, go for it. However, when you get that framework you are tied to a new dependancy. I'm starting to essentially recreate a Joel on Software post, so I will redirect you here for more on adding unneeded dependencies to your code: http://www.joelonsoftware.com/articles/fog0000000007.html A: All factors matter. You should measure how much time you can save using 3rd party framework and compare it to the risks of using other's code A: Never "mandatory." Some problems are not well solved by any framework. It would be suggestible to switch to a framework when most of the code you are implementing has already be implemented by the framework in question in a way that suits your particular application. This saves you time, energy, and will most likely be more stable than the fresh code you would have written. A: This is really two questions, you realize. :-) The answer to the first one is that it's never mandatory, but honestly, parsing HTML request parameters directly is pretty horrible right from the start. I don't want to do it even once, so I tend to go toward a framework relatively early on. As far as what measurement is practical, well, what are you worried about? All of the descriptions that you list have value. Database size matters primarily for scaling, in my opinion (you can write a very simple app if you have a very simple schema, even if there are hundreds of thousands of rows in the database). The feature list will probably determine the number and complexity of UI pages, which will in turn help to dictate the code length. A: There are frameworks that are there for getting moving very quickly with a simple blog, django or RoR all the way to enterprise full-stack applications Zope. Not to be tied to just the buzz world, you also have ASP.Net and J2EE, etc. A: All frameworks and libraries are tools at your disposal. Determine which ones will make your life easier for your given project and use them. A: I would say the reverse is true. At some point, your project gets so expansive, that you actually get slowed down by the shortcomings of the framework. For sufficiently large projects you may, in fact, be better off developing your own framework, to meet your own needs. 
I have seen many times where people were held back in the decisions they could make, or the work they could produce, because they were trying to do something that the framework didn't anticipate. And doing these things that the framework doesn't anticipate can be very troublesome. The nice thing about making your own framework is that it can evolve with your project, to be a help to your system, instead of a hindrance. So, to conclude, small projects should use existing frameworks. Large projects should contain their own framework.
What level of complexity requires a framework?
At what level of complexity is it mandatory to switch to an existing framework for web development? What measurement of complexity is practical for web development? Code length? Feature list? Database Size?
[ "If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why.\nI'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project. Unless it's a one page static HTML project.\nThere is no mandatory limit however.\n", "I don't think there is a level of complexity that necessitates a framework. For me whenever I am writing a dynamic site I immediately consider a framework, and if it will save me time, I use it(it almost always does, and I almost always do).\n", "Consider that the question may be faulty. Many of the most complex websites don't use any popular, preexisting, framework. Google has their own web server and their own custom way of doing things, as does Amazon, and probably lots of other sites.\nIf a framework makes your task easier, or provides added value, go for it. However, when you get that framework you are tied to a new dependancy. I'm starting to essentially recreate a Joel on Software post, so I will redirect you here for more on adding unneeded dependencies to your code:\nhttp://www.joelonsoftware.com/articles/fog0000000007.html\n", "All factors matter. You should measure how much time you can save using 3rd party framework and compare it to the risks of using other's code\n", "Never \"mandatory.\" Some problems are not well solved by any framework. It would be suggestible to switch to a framework when most of the code you are implementing has already be implemented by the framework in question in a way that suits your particular application. This saves you time, energy, and will most likely be more stable than the fresh code you would have written.\n", "This is really two questions, you realize. :-) The answer to the first one is that it's never mandatory, but honestly, parsing HTML request parameters directly is pretty horrible right from the start. I don't want to do it even once, so I tend to go toward a framework relatively early on.\nAs far as what measurement is practical, well, what are you worried about? All of the descriptions that you list have value. Database size matters primarily for scaling, in my opinion (you can write a very simple app if you have a very simple schema, even if there are hundreds of thousands of rows in the database). The feature list will probably determine the number and complexity of UI pages, which will in turn help to dictate the code length.\n", "There are frameworks that are there for getting moving very quickly with a simple blog, django or RoR all the way to enterprise full-stack applications Zope. Not to be tied to just the buzz world, you also have ASP.Net and J2EE, etc.\n", "All frameworks and libraries are tools at your disposal. Determine which ones will make your life easier for your given project and use them. \n", "I would say the reverse is true. At some point, your project gets so expansive, that you actually get slowed down by the shortcomings of the framework. For sufficiently large projects you may, in fact, be better off developing your own framework, to meet your own needs. I have seen many times where people were held back in the decisions they could make, or the work they could produce, because they were trying to do something that the framework didn't anticipate. And doing these things that the framework doesn't anticipate can be very troublesome. 
The nice thing about making your own framework is that it can evolve with your project, to be a help to your system, instead of a hindrance. \nSo, to conclude, small projects should use existing frameworks. Large projects should contain their own framework.\n" ]
[ 3, 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "complexity_theory", "frameworks" ]
stackoverflow_0000068340_complexity_theory_frameworks.txt
Q: Cocoa tips for PHP developers? I'm a PHP developer, and I use the MVC pattern and object-oriented code. I really want to write applications for the iPhone, but to do that I need to know Cocoa, but to do that I need to know Objective-C 2.0, but to do that I need to know C, and to do that I need to know about compiled languages (versus interpreted). Where should I begin? Do I really need to begin with plain old "C", as Joel would recommend? Caveat: I like to produce working widgets, not elegant theories. A: Yes, you're really best off learning C and then Objective-C. There are some resources that will get you over the C and Objective-C language learning curve: Uli Kusterer's online book Masters of the Void Stephen Kochan's book Programming in Objective-C And there are some resources that will get you over the framework learning curve: CocoaLab's online book Become an Xcoder Aaron Hillegass' book Cocoa Programming for Mac OS X Despite what Jeff might say, learning C is important for professional software developers for just this reason. It's sort of a baseline low-level lingua franca that other innovation happens atop. The reason Jeff has been able to get away with not learning C is not because you don't need to know C, but because he learned Pascal which is in many ways isomorphic to C. (It has all the same concepts, including pointers and manual memory management.) A: Get Cocoa Programming For Mac OS X by Aaron Hillegass. This should get you on your way to Cocoa programming. You can look up C-related programming as things come up. K&R C Programming Language is the definitive reference that is still applicable today to C programming. Get the Cocoa book, work though it and if you encounter any snags, just ask your C questions here :) A: Who reads books these days? I have the 1st edition, I forgot to read it. Go to the iPhone Developer Center. Read examples. In case you didn't read any of that, click the pretty picture. A: No need to start with plain C. Start with an excellent book instead: Cocoa Programming for Mac OS X. A: I think starting with C would be a smart thing to do. After all, Objective-C is C language with some extensions. To develop in Cocoa you are required to know well how pointers and memory allocation work (there's no garbage collection on the iPhone), plus you will have to use some standard C libraries, because a lot of the frameworks that are used to develop for the iPhone are C libraries not Cocoa libraries. Take for example CoreGraphics, the library you have to use to draw on the screen on the iPhone. That's a C framework, meaning that it is not written in Objective-C. Of course after learning C to a modest level, you could start reading about Objective-C and Cocoa, and in that case I would start with the Objective-C language specification (link to PDF) and the Aaron Hillegas book on Cocoa. A: The memory management concepts that are (or were, depending on if you like the whole garbage collection thing) central to the Cocoa frameworks can be a little confusing. This is particularly true for those coming over from languages such as PHP, Python, Ruby, or even Java. Knowing C, or C++ for that matter, put you at a great advantage when learning Objective-C and Cocoa.
Cocoa tips for PHP developers?
I'm a PHP developer, and I use the MVC pattern and object-oriented code. I really want to write applications for the iPhone, but to do that I need to know Cocoa, but to do that I need to know Objective-C 2.0, but to do that I need to know C, and to do that I need to know about compiled languages (versus interpreted). Where should I begin? Do I really need to begin with plain old "C", as Joel would recommend? Caveat: I like to produce working widgets, not elegant theories.
[ "Yes, you're really best off learning C and then Objective-C. There are some resources that will get you over the C and Objective-C language learning curve:\n\nUli Kusterer's online book Masters of the Void\nStephen Kochan's book Programming in Objective-C\n\nAnd there are some resources that will get you over the framework learning curve:\n\nCocoaLab's online book Become an Xcoder\nAaron Hillegass' book Cocoa Programming for Mac OS X\n\nDespite what Jeff might say, learning C is important for professional software developers for just this reason. It's sort of a baseline low-level lingua franca that other innovation happens atop. The reason Jeff has been able to get away with not learning C is not because you don't need to know C, but because he learned Pascal which is in many ways isomorphic to C. (It has all the same concepts, including pointers and manual memory management.)\n", "Get Cocoa Programming For Mac OS X by Aaron Hillegass. This should get you on your way to Cocoa programming. You can look up C-related programming as things come up.\nK&R C Programming Language is the definitive reference that is still applicable today to C programming.\nGet the Cocoa book, work though it and if you encounter any snags, just ask your C questions here :)\n", "Who reads books these days? I have the 1st edition, I forgot to read it. Go to the iPhone Developer Center. Read examples.\nIn case you didn't read any of that, click the pretty picture.\n\n", "No need to start with plain C. Start with an excellent book instead: Cocoa Programming for Mac OS X.\n", "I think starting with C would be a smart thing to do. After all, Objective-C is C language with some extensions.\nTo develop in Cocoa you are required to know well how pointers and memory allocation work (there's no garbage collection on the iPhone), plus you will have to use some standard C libraries, because a lot of the frameworks that are used to develop for the iPhone are C libraries not Cocoa libraries. Take for example CoreGraphics, the library you have to use to draw on the screen on the iPhone. That's a C framework, meaning that it is not written in Objective-C.\nOf course after learning C to a modest level, you could start reading about Objective-C and Cocoa, and in that case I would start with the Objective-C language specification (link to PDF) and the Aaron Hillegas book on Cocoa.\n", "The memory management concepts that are (or were, depending on if you like the whole garbage collection thing) central to the Cocoa frameworks can be a little confusing. This is particularly true for those coming over from languages such as PHP, Python, Ruby, or even Java. Knowing C, or C++ for that matter, put you at a great advantage when learning Objective-C and Cocoa.\n" ]
[ 7, 3, 3, 1, 0, 0 ]
[]
[]
[ "c", "cocoa", "objective_c", "php" ]
stackoverflow_0000033696_c_cocoa_objective_c_php.txt
Q: Folder with Extension I'm looking to have windows recognize that certain folders are associated to my application - maybe by naming the folder 'folder.myExt'. Can this be done via the registry? A bit more info - This is for an x-platform app ( that's why I suggested the folder with an extension - mac can handle that ) - The RAD I'm using doesn't read/write binary data efficiently enough as the size of this 'folder' will be upwards of 2000 files and 500Mb A: Folders in Windows aren't subject to the name.extension rules at all, there's only 1 entry in the registry's file type handling for "folder" types. (If you try to change it you're going to have very, very rough times ahead) The only simple way to get the effect you're after would be to do what OpenOffice, MS Office 2007, and large video games have been doing for some time, use a ZIP file for a container. (It doesn't have to be a "ZIP" exactly, but some type of readily available container file type is better than writing your own) Like OO.org and Office 2K7 you can just use a custom extension and designate your app as the handler. This will also work on Macs, so it can be cross-platform. It may not be fast however. Using low or no compression may help with that. A: You can have an "extension" on your folder, but as far as I know, windows just treats it all as the folder name and opens the folder like normal when you click on it. The few times I messed with opening a .app on my windows system, it acted like it was a normal folder.
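As a concrete illustration of the ZIP-container approach from the first answer, here is a small sketch using Python's standard zipfile module. The .myExt name and the manifest entry are made up for the example; the point is just that a "document" with a custom extension can be a single archive rather than a folder of loose files.

import zipfile

def save_container(path, files):
    # files maps archive-internal names to byte contents.
    # ZIP_STORED (no compression) may be the better trade-off for speed, as the
    # answer above notes; ZIP_DEFLATED trades speed for size instead.
    with zipfile.ZipFile(path, "w", compression=zipfile.ZIP_STORED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)

def load_container(path):
    with zipfile.ZipFile(path, "r") as zf:
        return {name: zf.read(name) for name in zf.namelist()}

save_container("project.myExt", {"manifest.xml": b"<project/>", "data/blob.bin": b"\x00" * 16})
print(sorted(load_container("project.myExt")))

Registering the application as the handler for .myExt is then an ordinary file-association entry, which sidesteps the folder-handling limitation in the registry entirely.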
Folder with Extension
I'm looking to have windows recognize that certain folders are associated to my application - maybe by naming the folder 'folder.myExt'. Can this be done via the registry? A bit more info - This is for an x-platform app ( that's why I suggested the folder with an extension - mac can handle that ) - The RAD I'm using doesn't read/write binary data efficiently enough as the size of this 'folder' will be upwards of 2000 files and 500Mb
[ "Folders in Windows aren't subject to the name.extension rules at all, there's only 1 entry in the registry's file type handling for \"folder\" types. (If you try to change it you're going to have very, very rough times ahead)\nThe only simple way to get the effect you're after would be to do what OpenOffice, MS Office 2007, and large video games have been doing for some time, use a ZIP file for a container. (It doesn't have to be a \"ZIP\" exactly, but some type of readily available container file type is better than writing your own) Like OO.org and Office 2K7 you can just use a custom extension and designate your app as the handler. This will also work on Macs, so it can be cross-platform. It may not be fast however. Using low or no compression may help with that.\n", "You can have an \"extension\" on your folder, but as far as I know, windows just treats it all as the folder name and opens the folder like normal when you click on it.\nThe few times I messed with opening a .app on my windows system, it acted like it was a normal folder.\n" ]
[ 1, 0 ]
[]
[]
[ "registry", "windows" ]
stackoverflow_0000068015_registry_windows.txt
Q: How can I make a shortcut start in a different directory when running it as an administrator on Windows Vista? I have a shortcut on my desktop which opens a command prompt with many arguments that I need. I set the 'start in' field to d:\ and it works as expected (the prompt starts in d:). When I choose Advanced -> run as administrator and then open the shortcut, it starts in C:\Windows\System32, even though I have not changed the 'start in' field. How can I get it to start in d:\? A: If you use the /k argument, you can add a single line that changes the drive and directory. For instance: C:\Windows\System32\cmd.exe /k "d: & cd d:\storage" Using & you can string together many commands on one line. Edit: You can also change drive with the cd command alone "cd /d d:\storage". Thanks to Adam Mitz for the comment.
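For anyone who would rather generate such a shortcut from code than edit it by hand, here is a sketch using the Windows Script Host shell object via the third-party pywin32 package. The paths are placeholders; CreateShortcut, TargetPath, Arguments, and WorkingDirectory are the standard WScript.Shell shortcut members, and the Arguments line bakes in the /k workaround described above.

import win32com.client  # provided by the pywin32 package

shell = win32com.client.Dispatch("WScript.Shell")
shortcut = shell.CreateShortcut(r"C:\Users\Public\Desktop\Admin Prompt.lnk")
shortcut.TargetPath = r"C:\Windows\System32\cmd.exe"
# /k runs the quoted command and keeps the prompt open afterwards.
shortcut.Arguments = r'/k "cd /d d:\storage"'
shortcut.WorkingDirectory = "d:\\"  # ignored when elevated, hence the /k trick
shortcut.Save()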
How can I make a shortcut start in a different directory when running it as an administrator on Windows Vista?
I have a shortcut on my desktop which opens a command prompt with many arguments that I need. I set the 'start in' field to d:\ and it works as expected (the prompt starts in d:). When I choose Advanced -> run as administrator and then open the shortcut, it starts in C:\Windows\System32, even though I have not changed the 'start in' field. How can I get it to start in d:\?
[ "If you use the /k argument, you can add a single line that changes the drive and directory. For instance:\nC:\\Windows\\System32\\cmd.exe /k \"d: & cd d:\\storage\"\nUsing & you can string together many commands on one line.\nEdit: You can also change drive with the cd command alone \"cd /d d:\\storage\". Thanks to Adam Mitz for the comment.\n" ]
[ 11 ]
[]
[]
[ "cmd", "shortcut", "uac", "windows", "windows_vista" ]
stackoverflow_0000068307_cmd_shortcut_uac_windows_windows_vista.txt
Q: Change command Method for Tkinter Button in Python I create a new Button object but did not specify the command option upon creation. Is there a way in Tkinter to change the command (onclick) function after the object has been created? A: Though Eli Courtwright's program will work fine¹, what you really seem to want though is just a way to reconfigure after instantiation any attribute which you could have set when you instantiated². How you do so is by way of the configure() method. from Tkinter import Tk, Button def goodbye_world(): print "Goodbye World!\nWait, I changed my mind!" button.configure(text = "Hello World!", command=hello_world) def hello_world(): print "Hello World!\nWait, I changed my mind!" button.configure(text = "Goodbye World!", command=goodbye_world) root = Tk() button = Button(root, text="Hello World!", command=hello_world) button.pack() root.mainloop() ¹ "fine" if you use only the mouse; if you care about tabbing and using [Space] or [Enter] on buttons, then you will have to implement (duplicating existing code) keypress events too. Setting the command option through .configure is much easier. ² the only attribute that can't change after instantiation is name. A: Sure; just use the bind method to specify the callback after the button has been created. I've just written and tested the example below. You can find a nice tutorial on doing this at http://www.pythonware.com/library/tkinter/introduction/events-and-bindings.htm from Tkinter import Tk, Button root = Tk() button = Button(root, text="Click Me!") button.pack() def callback(event): print "Hello World!" button.bind("<Button-1>", callback) root.mainloop()
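Since command accepts any zero-argument callable, the configure() approach above also combines nicely with functools.partial (or a lambda) when the new handler needs arguments. A small sketch, written in the same Python 2 style as the answers (on Python 3 the import becomes tkinter); the greeting text is invented for the demo:

from Tkinter import Tk, Button
from functools import partial

def greet(name):
    print("Hello, %s!" % name)

root = Tk()
button = Button(root, text="Greet")   # created without any command at all
button.pack()

# Attach the handler after creation; partial bakes the argument in, so the
# result is the zero-argument callable that Tkinter expects.
button.configure(command=partial(greet, "World"))

root.mainloop()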
Change command Method for Tkinter Button in Python
I create a new Button object but did not specify the command option upon creation. Is there a way in Tkinter to change the command (onclick) function after the object has been created?
[ "Though Eli Courtwright's program will work fine¹, what you really seem to want though is just a way to reconfigure after instantiation any attribute which you could have set when you instantiated². How you do so is by way of the configure() method.\nfrom Tkinter import Tk, Button\n\ndef goodbye_world():\n print \"Goodbye World!\\nWait, I changed my mind!\"\n button.configure(text = \"Hello World!\", command=hello_world)\n\ndef hello_world():\n print \"Hello World!\\nWait, I changed my mind!\"\n button.configure(text = \"Goodbye World!\", command=goodbye_world)\n\nroot = Tk()\nbutton = Button(root, text=\"Hello World!\", command=hello_world)\nbutton.pack()\n\nroot.mainloop()\n\n¹ \"fine\" if you use only the mouse; if you care about tabbing and using [Space] or [Enter] on buttons, then you will have to implement (duplicating existing code) keypress events too. Setting the command option through .configure is much easier.\n² the only attribute that can't change after instantiation is name.\n", "Sure; just use the bind method to specify the callback after the button has been created. I've just written and tested the example below. You can find a nice tutorial on doing this at http://www.pythonware.com/library/tkinter/introduction/events-and-bindings.htm\nfrom Tkinter import Tk, Button\n\nroot = Tk()\nbutton = Button(root, text=\"Click Me!\")\nbutton.pack()\n\ndef callback(event):\n print \"Hello World!\"\n\nbutton.bind(\"<Button-1>\", callback)\nroot.mainloop()\n\n" ]
[ 37, 2 ]
[]
[]
[ "python", "tkinter", "user_interface" ]
stackoverflow_0000068327_python_tkinter_user_interface.txt
Q: VMWare Tools for Ubuntu Hardy I am using VMWare tools for Ubuntu Hardy, but for some reason vmware-install.pl finds fault with my LINUX headers. The error message says that the "address space size" doesn't match. To try and remediate, I have resorted to vmware-any-any-update117, and am now getting the following error instead: In file included from include/asm/page.h:3, from /tmp/vmware-config0/vmmon-only/common/hostKernel.h:56, from /tmp/vmware-config0/vmmon-only/common/task.c:30: include/asm/page_32.h: In function ‘pte_t native_make_pte(long unsigned int)’: include/asm/page_32.h:112: error: expected primary-expression before ‘)’ token include/asm/page_32.h:112: error: expected ‘;’ before ‘{’ token include/asm/page_32.h:112: error: expected primary-expression before ‘.’ token include/asm/page_32.h:112: error: expected `;' before ‘}’ token Can anyone help me make some sense of this, please? A: This error often occurs because of incompatibility between the VMWare Tools version and recent kernels (you can test it using older kernels). Sometimes you can fix some things with patches all over the internet, but I prefer to downgrade my kernel or not use the latest distribution's version in VMWare. It can be really annoying. Another problem you may have is with your mouse pointer in X Windows, as if it were an inch to the left of or below where it really shows. About vmware-any-any-update117, it's a patch to VMWare running under linux, usually the Workstation version. It won't have any effect on Tools. A: You're probably best off using the VMWare Tools .rpm file instead of the install script on Ubuntu. Alien is a program that will let you turn a .rpm into a Ubuntu-friendly .deb package. A: Check out this link as it helped me install the tools in one of my vms. http://diamondsw.dyndns.org/Home/Et_Cetera/Entries/2008/4/25_Linux_2.6.24_and_VMWare.html A: I've heard a lot of good things about VirtualBox from Sun. If you get fed up with VMWare, it's worth a look.
VMWare Tools for Ubuntu Hardy
I am using VMWare tools for Ubuntu Hardy, but for some reason vmware-install.pl finds fault with my LINUX headers. The error message says that the "address space size" doesn't match. To try and remediate, I have resorted to vmware-any-any-update117, and am now getting the following error instead: In file included from include/asm/page.h:3, from /tmp/vmware-config0/vmmon-only/common/hostKernel.h:56, from /tmp/vmware-config0/vmmon-only/common/task.c:30: include/asm/page_32.h: In function ‘pte_t native_make_pte(long unsigned int)’: include/asm/page_32.h:112: error: expected primary-expression before ‘)’ token include/asm/page_32.h:112: error: expected ‘;’ before ‘{’ token include/asm/page_32.h:112: error: expected primary-expression before ‘.’ token include/asm/page_32.h:112: error: expected `;' before ‘}’ token Can anyone help me make some sense of this, please?
[ "This error often occurs because of incompatibility between the VMWare Tools version and recent kernels (you can test it using older kernels). Sometimes you can fix some things with patches all over the internet, but I prefer to downgrade my kernel or not use the latest distribution's version in VMWare. It can be really annoying. Another problem you may have is with your mouse pointer in X Windows, as if it were an inch to the left of or below where it really shows.\nAbout vmware-any-any-update117, it's a patch to VMWare running under linux, usually the Workstation version. It won't have any effect on Tools. \n", "You're probably best off using the VMWare Tools .rpm file instead of the install script on Ubuntu. Alien is a program that will let you turn a .rpm into a Ubuntu-friendly .deb package.\n", "Check out this link as it helped me install the tools in one of my vms. http://diamondsw.dyndns.org/Home/Et_Cetera/Entries/2008/4/25_Linux_2.6.24_and_VMWare.html\n", "I've heard a lot of good things about VirtualBox from Sun. If you get fed up with VMWare, it's worth a look.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "ubuntu", "virtualization", "vmware", "vmware_tools" ]
stackoverflow_0000031285_ubuntu_virtualization_vmware_vmware_tools.txt
Q: How to merge from branch to branch and back again (bidirectional merging) in SVN? Using the svnmerge.py tool it is possible to merge between branches, up and down. It is hard to find the details for doing this. Hopefully, v1.5 will have a neat method for doing this without using svnmerge.py - details requested! A: It looks like you're asking about 1.5 merge tracking. Here's a quick overview for doing merges to/from trunk (or another branch): http://blog.red-bean.com/sussman/?p=92 A: With svnmerge.py, you initialize both branches (when going in one direction, you only need to initialize one of the branches). Then merge using the -b (for bidirectional) flag. Here is a summary starting from branch one to branch two. $REPO is the protocol and path to your repository. svn copy $REPO/branches/one $REPO/branches/two \ -m "Creating branch two from branch one." svn checkout branches/one one svn checkout branches/two two cd one svnmerge init ../two cd ../two svnmerge init ../one You may now edit both branches. Changes from one to two can be merged by: cd two svnmerge merge -b -S one svn commit -F svnmerge-commit-message.txt Conversely, changes from two to one can be merged by: cd one svnmerge merge -b -S two svn commit -F svnmerge-commit-message.txt Be sure to note the -b flag!
How to merge from branch to branch and back again (bidirectional merging) in SVN?
Using the svnmerge.py tool it is possible to merge between branches, up and down. It is hard to find the details for doing this. Hopefully, v1.5 will have a neat method for doing this without using svnmerge.py - details requested!
[ "It looks like you're asking about 1.5 merge tracking. Here's a quick overview for doing merges to/from trunk (or another branch): http://blog.red-bean.com/sussman/?p=92\n", "With svnmerge.py, you initialize both branches (when going in one direction, you only need to initialize one of the branches). Then merge using the -b (For bidirectional flag). Here is a summary starting from branch one to branch two. $REPO is the protocol and path to your repository.\n\nsvn copy $REPO/branches/one $REPO/branches/two \\\n -m \"Creating branch two from branch one.\"\n svn checkout branches/one one\n svn checkout branches/two two\ncd one\n svnmerge init ../two\n cd ../two\n svnmerge init ../one \n\nYou may now edit both branches. Changes from one to two can be merged by:\n\ncd two\n svnmerge merge -b -S one\n svn commit -F svnmerge-commit-message.txt \n\nConversely, changes from two to one can be merge by:\n\ncd one\n svnmerge merge -b -S two\n svn commit -F svnmerge-commit-message.txt \n\nBe sure to note the -b flag!\n" ]
[ 2, 0 ]
[]
[]
[ "svn", "version_control" ]
stackoverflow_0000068448_svn_version_control.txt
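For reference, the native merge tracking that shipped in Subversion 1.5 replaces the svnmerge.py steps above with two commands. This is a minimal sketch only; $REPO and the branch name are placeholders, and both working copies are assumed to be clean and up to date:

    # keep a branch in sync with trunk (run inside the branch working copy)
    svn merge $REPO/trunk
    svn commit -m "Merged latest trunk changes into branch"

    # when the branch is done, merge it back (run inside a trunk working copy)
    svn merge --reintegrate $REPO/branches/one
    svn commit -m "Reintegrated branch one into trunk"

Subversion 1.5 records what has already been merged in the svn:mergeinfo property, so repeating the sync merge later picks up only the new trunk revisions.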
Q: Comparing names Is there any simple algorithm to determine the likeliness of 2 names representing the same person? I'm not asking for something of the level that Custom department might be using. Just a simple algorithm that would tell me if 'James T. Clark' is most likely the same name as 'J. Thomas Clark' or 'James Clerk'. If there is an algorithm in C# that would be great, but I can translate from any language. A: Sounds like you're looking for a phonetic-based algorithms, such as soundex, NYSIIS, or double metaphone. The first actually is what several government departments use, and is trivial to implement (with many implementations readily available). The second is a slightly more complicated and more precise version of the first. The latter-most works with some non-English names and alphabets. Levenshtein distance is a definition of distance between two arbitrary strings. It gives you a distance of 0 between identical strings and non-zero between different strings, which might also be useful if you decide to make a custom algorithm. A: Levenshtein is close, although maybe not exactly what you want. A: I've faced similar problem and tried to use Levenstein distance first, but it did not work well for me. I came up with an algorithm that gives you "similarity" value between two strings (higher value means more similar strings, "1" for identical strings). This value is not very meaningful by itself (if not "1", always 0.5 or less), but works quite well when you throw in Hungarian Matrix to find matching pairs from two lists of strings. Use like this: PartialStringComparer cmp = new PartialStringComparer(); tbResult.Text = cmp.Compare(textBox1.Text, textBox2.Text).ToString(); The code behind: public class SubstringRange { string masterString; public string MasterString { get { return masterString; } set { masterString = value; } } int start; public int Start { get { return start; } set { start = value; } } int end; public int End { get { return end; } set { end = value; } } public int Length { get { return End - Start; } set { End = Start + value;} } public bool IsValid { get { return MasterString.Length >= End && End >= Start && Start >= 0; } } public string Contents { get { if(IsValid) { return MasterString.Substring(Start, Length); } else { return ""; } } } public bool OverlapsRange(SubstringRange range) { return !(End < range.Start || Start > range.End); } public bool ContainsRange(SubstringRange range) { return range.Start >= Start && range.End <= End; } public bool ExpandTo(string newContents) { if(MasterString.Substring(Start).StartsWith(newContents, StringComparison.InvariantCultureIgnoreCase) && newContents.Length > Length) { Length = newContents.Length; return true; } else { return false; } } } public class SubstringRangeList: List<SubstringRange> { string masterString; public string MasterString { get { return masterString; } set { masterString = value; } } public SubstringRangeList(string masterString) { this.MasterString = masterString; } public SubstringRange FindString(string s){ foreach(SubstringRange r in this){ if(r.Contents.Equals(s, StringComparison.InvariantCultureIgnoreCase)) return r; } return null; } public SubstringRange FindSubstring(string s){ foreach(SubstringRange r in this){ if(r.Contents.StartsWith(s, StringComparison.InvariantCultureIgnoreCase)) return r; } return null; } public bool ContainsRange(SubstringRange range) { foreach(SubstringRange r in this) { if(r.ContainsRange(range)) return true; } return false; } public bool AddSubstring(string 
substring) { bool result = false; foreach(SubstringRange r in this) { if(r.ExpandTo(substring)) { result = true; } } if(FindSubstring(substring) == null) { bool patternfound = true; int start = 0; while(patternfound){ patternfound = false; start = MasterString.IndexOf(substring, start, StringComparison.InvariantCultureIgnoreCase); patternfound = start != -1; if(patternfound) { SubstringRange r = new SubstringRange(); r.MasterString = this.MasterString; r.Start = start++; r.Length = substring.Length; if(!ContainsRange(r)) { this.Add(r); result = true; } } } } return result; } private static bool SubstringRangeMoreThanOneChar(SubstringRange range) { return range.Length > 1; } public float Weight { get { if(MasterString.Length == 0 || Count == 0) return 0; float numerator = 0; int denominator = 0; foreach(SubstringRange r in this.FindAll(SubstringRangeMoreThanOneChar)) { numerator += r.Length; denominator++; } if(denominator == 0) return 0; return numerator / denominator / MasterString.Length; } } public void RemoveOverlappingRanges() { SubstringRangeList l = new SubstringRangeList(this.MasterString); l.AddRange(this);//create a copy of this list foreach(SubstringRange r in l) { if(this.Contains(r) && this.ContainsRange(r)) { Remove(r);//try to remove the range if(!ContainsRange(r)) {//see if the list still contains "superset" of this range Add(r);//if not, add it back } } } } public void AddStringToCompare(string s) { for(int start = 0; start < s.Length; start++) { for(int len = 1; start + len <= s.Length; len++) { string part = s.Substring(start, len); if(!AddSubstring(part)) break; } } RemoveOverlappingRanges(); } } public class PartialStringComparer { public float Compare(string s1, string s2) { SubstringRangeList srl1 = new SubstringRangeList(s1); srl1.AddStringToCompare(s2); SubstringRangeList srl2 = new SubstringRangeList(s2); srl2.AddStringToCompare(s1); return (srl1.Weight + srl2.Weight) / 2; } } Levenstein distance one is much simpler (adapted from http://www.merriampark.com/ld.htm): public class Distance { /// <summary> /// Compute Levenshtein distance /// </summary> /// <param name="s">String 1</param> /// <param name="t">String 2</param> /// <returns>Distance between the two strings. /// The larger the number, the bigger the difference. /// </returns> public static int LD(string s, string t) { int n = s.Length; //length of s int m = t.Length; //length of t int[,] d = new int[n + 1, m + 1]; // matrix int cost; // cost // Step 1 if(n == 0) return m; if(m == 0) return n; // Step 2 for(int i = 0; i <= n; d[i, 0] = i++) ; for(int j = 0; j <= m; d[0, j] = j++) ; // Step 3 for(int i = 1; i <= n; i++) { //Step 4 for(int j = 1; j <= m; j++) { // Step 5 cost = (t.Substring(j - 1, 1) == s.Substring(i - 1, 1) ? 0 : 1); // Step 6 d[i, j] = System.Math.Min(System.Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1), d[i - 1, j - 1] + cost); } } // Step 7 return d[n, m]; } } A: I doubt there is, considering even the Customs Department doesn't seem to have a satisfactory answer... A: If there is a solution to this problem I seriously doubt it's a part of core C#. Off the top of my head, it would require a database of first, middle and last name frequencies, as well as account for initials, as in your example. This is fairly complex logic that relies on a database of information. A: Second to Levenshtein distance, what language do you want? I was able to find an implementation in C# on codeproject pretty easily. A: In an application I worked on, the Last name field was considered reliable. 
So we presented all the records with the same last name to the user. The user could sort by the other fields to look for similar names. This solution was good enough to greatly reduce the issue of users creating duplicate records. Basically, it looks like the issue will require human judgement.
Comparing names
Is there any simple algorithm to determine the likelihood of 2 names representing the same person? I'm not asking for something of the level that the Customs department might be using. Just a simple algorithm that would tell me if 'James T. Clark' is most likely the same name as 'J. Thomas Clark' or 'James Clerk'. If there is an algorithm in C# that would be great, but I can translate from any language.
[ "Sounds like you're looking for a phonetic-based algorithms, such as soundex, NYSIIS, or double metaphone. The first actually is what several government departments use, and is trivial to implement (with many implementations readily available). The second is a slightly more complicated and more precise version of the first. The latter-most works with some non-English names and alphabets.\nLevenshtein distance is a definition of distance between two arbitrary strings. It gives you a distance of 0 between identical strings and non-zero between different strings, which might also be useful if you decide to make a custom algorithm.\n", "Levenshtein is close, although maybe not exactly what you want.\n", "I've faced similar problem and tried to use Levenstein distance first, but it did not work well for me. I came up with an algorithm that gives you \"similarity\" value between two strings (higher value means more similar strings, \"1\" for identical strings). This value is not very meaningful by itself (if not \"1\", always 0.5 or less), but works quite well when you throw in Hungarian Matrix to find matching pairs from two lists of strings.\nUse like this:\nPartialStringComparer cmp = new PartialStringComparer();\ntbResult.Text = cmp.Compare(textBox1.Text, textBox2.Text).ToString();\n\nThe code behind:\npublic class SubstringRange {\n string masterString;\n\n public string MasterString {\n get { return masterString; }\n set { masterString = value; }\n }\n int start;\n\n public int Start {\n get { return start; }\n set { start = value; }\n }\n int end;\n\n public int End {\n get { return end; }\n set { end = value; }\n }\n public int Length {\n get { return End - Start; }\n set { End = Start + value;}\n }\n\n public bool IsValid {\n get { return MasterString.Length >= End && End >= Start && Start >= 0; }\n }\n\n public string Contents {\n get {\n if(IsValid) {\n return MasterString.Substring(Start, Length);\n } else {\n return \"\";\n }\n }\n }\n public bool OverlapsRange(SubstringRange range) {\n return !(End < range.Start || Start > range.End);\n }\n public bool ContainsRange(SubstringRange range) {\n return range.Start >= Start && range.End <= End;\n }\n public bool ExpandTo(string newContents) {\n if(MasterString.Substring(Start).StartsWith(newContents, StringComparison.InvariantCultureIgnoreCase) && newContents.Length > Length) {\n Length = newContents.Length;\n return true;\n } else {\n return false;\n }\n }\n}\n\npublic class SubstringRangeList: List<SubstringRange> {\n string masterString;\n\n public string MasterString {\n get { return masterString; }\n set { masterString = value; }\n }\n\n public SubstringRangeList(string masterString) {\n this.MasterString = masterString;\n }\n\n public SubstringRange FindString(string s){\n foreach(SubstringRange r in this){\n if(r.Contents.Equals(s, StringComparison.InvariantCultureIgnoreCase))\n return r;\n }\n return null;\n }\n\n public SubstringRange FindSubstring(string s){\n foreach(SubstringRange r in this){\n if(r.Contents.StartsWith(s, StringComparison.InvariantCultureIgnoreCase))\n return r;\n }\n return null;\n }\n\n public bool ContainsRange(SubstringRange range) {\n foreach(SubstringRange r in this) {\n if(r.ContainsRange(range))\n return true;\n }\n return false;\n }\n\n public bool AddSubstring(string substring) {\n bool result = false;\n foreach(SubstringRange r in this) {\n if(r.ExpandTo(substring)) {\n result = true;\n }\n }\n if(FindSubstring(substring) == null) {\n bool patternfound = true;\n int start = 0;\n 
while(patternfound){\n patternfound = false;\n start = MasterString.IndexOf(substring, start, StringComparison.InvariantCultureIgnoreCase);\n patternfound = start != -1;\n if(patternfound) {\n SubstringRange r = new SubstringRange();\n r.MasterString = this.MasterString;\n r.Start = start++;\n r.Length = substring.Length;\n if(!ContainsRange(r)) {\n this.Add(r);\n result = true;\n }\n }\n }\n }\n return result;\n }\n\n private static bool SubstringRangeMoreThanOneChar(SubstringRange range) {\n return range.Length > 1;\n }\n\n public float Weight {\n get {\n if(MasterString.Length == 0 || Count == 0)\n return 0;\n float numerator = 0;\n int denominator = 0;\n foreach(SubstringRange r in this.FindAll(SubstringRangeMoreThanOneChar)) {\n numerator += r.Length;\n denominator++;\n }\n if(denominator == 0)\n return 0;\n return numerator / denominator / MasterString.Length;\n }\n }\n\n public void RemoveOverlappingRanges() {\n SubstringRangeList l = new SubstringRangeList(this.MasterString);\n l.AddRange(this);//create a copy of this list\n foreach(SubstringRange r in l) {\n if(this.Contains(r) && this.ContainsRange(r)) {\n Remove(r);//try to remove the range\n if(!ContainsRange(r)) {//see if the list still contains \"superset\" of this range\n Add(r);//if not, add it back\n }\n }\n }\n }\n\n public void AddStringToCompare(string s) {\n for(int start = 0; start < s.Length; start++) {\n for(int len = 1; start + len <= s.Length; len++) {\n string part = s.Substring(start, len);\n if(!AddSubstring(part))\n break;\n }\n }\n RemoveOverlappingRanges();\n }\n}\n\npublic class PartialStringComparer {\n public float Compare(string s1, string s2) {\n SubstringRangeList srl1 = new SubstringRangeList(s1);\n srl1.AddStringToCompare(s2);\n SubstringRangeList srl2 = new SubstringRangeList(s2);\n srl2.AddStringToCompare(s1);\n return (srl1.Weight + srl2.Weight) / 2;\n }\n}\n\nLevenstein distance one is much simpler (adapted from http://www.merriampark.com/ld.htm):\npublic class Distance {\n /// <summary>\n /// Compute Levenshtein distance\n /// </summary>\n /// <param name=\"s\">String 1</param>\n /// <param name=\"t\">String 2</param>\n /// <returns>Distance between the two strings.\n /// The larger the number, the bigger the difference.\n /// </returns>\n public static int LD(string s, string t) {\n int n = s.Length; //length of s\n int m = t.Length; //length of t\n int[,] d = new int[n + 1, m + 1]; // matrix\n int cost; // cost\n // Step 1\n if(n == 0) return m;\n if(m == 0) return n;\n // Step 2\n for(int i = 0; i <= n; d[i, 0] = i++) ;\n for(int j = 0; j <= m; d[0, j] = j++) ;\n // Step 3\n for(int i = 1; i <= n; i++) {\n //Step 4\n for(int j = 1; j <= m; j++) {\n // Step 5\n cost = (t.Substring(j - 1, 1) == s.Substring(i - 1, 1) ? 0 : 1);\n // Step 6\n d[i, j] = System.Math.Min(System.Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1), d[i - 1, j - 1] + cost);\n }\n }\n // Step 7\n return d[n, m];\n }\n}\n\n", "I doubt there is, considering even the Customs Department doesn't seem to have a satisfactory answer...\n", "If there is a solution to this problem I seriously doubt it's a part of core C#. Off the top of my head, it would require a database of first, middle and last name frequencies, as well as account for initials, as in your example. This is fairly complex logic that relies on a database of information.\n", "Second to Levenshtein distance, what language do you want? 
I was able to find an implementation in C# on codeproject pretty easily.\n", "In an application I worked on, the Last name field was considered reliable.\nSo we presented all the records with the same last name to the user.\nThe user could sort by the other fields to look for similar names.\nThis solution was good enough to greatly reduce the issue of users creating duplicate records.\nBasically, it looks like the issue will require human judgement.\n" ]
[ 4, 3, 2, 0, 0, 0, 0 ]
[]
[]
[ "c#", "comparison", "string" ]
stackoverflow_0000068408_c#_comparison_string.txt
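The top answer in the "Comparing names" entry above calls Soundex trivial to implement. As an illustration, here is a minimal C# sketch of classic American Soundex; the class and method names are made up for the example, and the H/W separator rule of the full specification is deliberately omitted:

    using System.Text;

    static class Soundex
    {
        public static string Encode(string name)
        {
            if (string.IsNullOrEmpty(name)) return string.Empty;
            string s = name.ToUpperInvariant();
            var code = new StringBuilder();
            code.Append(s[0]);
            char previous = Digit(s[0]);
            for (int i = 1; i < s.Length && code.Length < 4; i++)
            {
                char d = Digit(s[i]);
                // Skip vowels and collapse runs of the same digit (e.g. "SS").
                if (d != '0' && d != previous) code.Append(d);
                previous = d;
            }
            return code.ToString().PadRight(4, '0'); // pad short codes with zeros
        }

        private static char Digit(char c)
        {
            switch (c)
            {
                case 'B': case 'F': case 'P': case 'V': return '1';
                case 'C': case 'G': case 'J': case 'K':
                case 'Q': case 'S': case 'X': case 'Z': return '2';
                case 'D': case 'T': return '3';
                case 'L': return '4';
                case 'M': case 'N': return '5';
                case 'R': return '6';
                default: return '0'; // vowels, H, W, Y and non-letters
            }
        }
    }

With this coding, 'Clark' and 'Clerk' from the question both encode to C462, so they would surface as a likely match.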
Q: What is the best way to build an index to get the fastest read response? I need to index up to 500,000 entries for fastest read. The index needs to be rebuilt periodically, on disk. I am trying to decide between a simple file like a hash on disk or a single table in an embedded database. I have no need for an RDBMS engine. A: I'm assuming you're referring to indexing tables on a relational DBMS (like MySQL, Oracle, or Postgres). Indexes are secondary data stores that keep a record of a subset of fields for a table in a specific order. If you create an index, any query that includes the subset of fields that are indexed in its WHERE clause will perform faster. However, adding indexes will reduce INSERT performance. In general, indexes don't need to be rebuilt unless they become corrupted. They should be maintained on the fly by your DBMS. A: Perhaps BDB? It is a high-performance database that doesn't use a DBMS. A: If you're storing state objects by key, how about Berkeley DB. A: cdb if the data does not change. /Allan A: PyTables Pro claims that "for situations that don't require fast updates or deletions, OPSI is probably one of the best indexing engines available". I've not personally used it, but the F/OSS version of PyTables already gives you good performance: http://www.pytables.org/moin/PyTablesPro A: This is what MapReduce was invented for. Hadoop is a cool Java implementation. A: If the data doesn't need to be completely up to date, you might also like to think about using a data warehousing tool for OLAP purposes (such as MSOLAP). They can perform lightning fast read-only queries based on pre-calculated data.
What is the best way to build an index to get the fastest read response?
I need to index up to 500,000 entries for fastest read. The index needs to be rebuilt periodically, on disk. I am trying to decide between a simple file like a hash on disk or a single table in an embedded database. I have no need for an RDBMS engine.
[ "I'm assuming you're referring to indexing tables on a relational DBMS (like mySql, Oracle, or Postgres).\nIndexes are secondary data stores that keep a record of a subset of fields for a table in a specific order.\nIf you create an index, any query that includes the subset of fields that are indexed in its WHERE clause will perform faster.\nHowever, adding indexes will reduce INSERT performance.\nIn general, indexes don't need to be rebuilt unless they become corrupted. They should be maintained on the fly by your DBMS.\n", "Perhaps BDB? It is a high perf. database that doesn't use a DBMS.\n", "If you've storing state objects by key, how about Berkeley DB.\n", "cdb if the data does not change.\n/Allan\n", "PyTables Pro claims that \"for situations that don't require fast updates or deletions, OPSI is probably one of the best indexing engines available\". However I've not personally used it, but the F/OSS version of PyTables gives already gives you good performance:\nhttp://www.pytables.org/moin/PyTablesPro\n", "This is what MapReduce was invented for. Hadoop is a cool java implementation. \n", "If the data doesn't need to be completely up to date, you might also like to think about using a data warehousing tool for OLAP purposes (such as MSOLAP). The can perform lightning fast read-only queries based on pre-calculated data.\n" ]
[ 1, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "indexing" ]
stackoverflow_0000068174_indexing.txt
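To make the "simple file like a hash on disk" option from the entry above concrete, here is a minimal C# sketch: the periodic rebuild writes the entries to disk sorted by key, and reads are answered by a binary search over the loaded arrays. This is an illustration under stated assumptions: keys are single-line, tab-free strings, and at 500,000 entries the whole index can simply be loaded into memory.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    public class SimpleIndex
    {
        private readonly string[] keys;
        private readonly string[] values;

        private SimpleIndex(string[] keys, string[] values)
        {
            this.keys = keys;
            this.values = values;
        }

        // Periodic rebuild: sort by key and overwrite the index file in one pass.
        public static void Rebuild(string path, IEnumerable<KeyValuePair<string, string>> entries)
        {
            using (var writer = new StreamWriter(path))
                foreach (var entry in entries.OrderBy(e => e.Key, StringComparer.Ordinal))
                    writer.WriteLine(entry.Key + "\t" + entry.Value);
        }

        // Load the sorted file; on-disk order is preserved in the arrays.
        public static SimpleIndex Load(string path)
        {
            var keys = new List<string>();
            var values = new List<string>();
            foreach (string line in File.ReadAllLines(path))
            {
                int tab = line.IndexOf('\t');
                keys.Add(line.Substring(0, tab));
                values.Add(line.Substring(tab + 1));
            }
            return new SimpleIndex(keys.ToArray(), values.ToArray());
        }

        // O(log n) read via binary search over the sorted keys.
        public string Find(string key)
        {
            int i = Array.BinarySearch(keys, key, StringComparer.Ordinal);
            return i >= 0 ? values[i] : null;
        }
    }

An embedded store like Berkeley DB, suggested in the answers, buys you incremental updates and crash safety that this flat-file sketch does not attempt.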
Q: Make the web browser scroll to the top? What is the JavaScript to scroll to the top when a button/link/etc. is clicked? A: <a href="javascript:scroll(0, 0)">Top</a> A: If you had an anchor link at the top of the page, you could do it with anchors too. <a name="top"></a> <a href="#top">top</a> It'll work in browsers with JavaScript disabled, but changes the URL. :( It also lets you jump to anywhere the name anchor is set. A: actually, this works by itself, no need to define it. <a href="#top">top</a> This is a "magic" hashname value that does not need to be defined in browsers. Just like this will "reload" the page. <a href="/">reload</a>
Make the web browser scroll to the top?
What is the JavaScript to scroll to the top when a button/link/etc. is clicked?
[ "<a href=\"javascript:scroll(0, 0)\">Top</a>\n\n", "If you had an anchor link at the top of the page, you could do it with anchors too.\n<a name=\"top\"></a>\n<a href='#top\">top</a>\n\nIt'll work in browser's with Javascript disabled, but changes the url. :( It also lets you jump to anywhere the name anchor is set.\n", "actually, this works by itself, no need to define it.\n<a href=\"#top\">top</a>\n\nThis is a \"magic\" hashname value that does not need to be defined in browsers.\nJust like this will \"reload\" the page.\n<a href=\"/\">reload</a>\n\n" ]
[ 5, 4, 0 ]
[]
[]
[ "javascript", "scroll" ]
stackoverflow_0000048919_javascript_scroll.txt
Q: Sharepoint 2007 with MS Office 2007 footers We had a need for a document management solution and were hoping SharePoint 2007 would satisfy our needs. We felt our needs were relatively simple. We needed to manage versioning, have searching capabilities, and have an approval workflow. SharePoint handled these three aspects great out of the box. However, we also require that the footer on the Office 2007 (Word, Excel, and PowerPoint) documents reflect the document version, last person to modify, and last modification date. These things can be done with Office automation, but we have yet to find a complete solution. We first tried to do it on the checking-in and checked-in events and followed this path for a while; however, the complication we ran into was that after we made the changes to the document we had no way of preventing the save from updating the version number. This resulted in something similar to this: Document checked-in – the document version should be v0.1, however it is v0.2 because we save the document after the footer is replaced. If we look in the document history, there are 2 separate versions: v0.1 does not have the footer; v0.2 has the footer but it says v0.1, as that is the version the document was at when it was replaced. This is an unacceptable solution for us as we want the process to be completely handled on the user side, so they would have full control to revert back to a version where the footer would be incorrect and not contain the correct data. When we attempted to create a custom approval/check-in workflow we found that the same problem was present. The footer is necessary so that hard-copies can be traced back to their electronic counterpart. Another solution that was proposed to us was to build plugins for Office that would handle the replacement of the footer. This is inadequate for our needs as it requires a client-side deployment of our plugins, which is undesirable to our clients. What we are looking for is a clean solution to this problem. A: Here is a blog post which seems to be exactly the solution to your problem. Basically they create a custom field in the document library and use event receivers to keep the current version of the document in this field. The "trick" is that on the client side this custom field shows up as a property of the document, the value of which you can easily embed into the document's contents. I'm not sure why changing the field won't increase the version of the document, but I guess it is because you're only changing metadata, not the actual document. They do use a little VBA script which runs on the client side, but it doesn't require any client-side deployment as it is downloaded with the document. However, I'm not sure if any security settings changes on the client side may be needed to allow the script to run. A: Does this information need to be in the footer? A lot of the information is available within the Office 2007 application. If you click on the round button in the upper left, and select "Server", you can view the version history; a lot of the other properties are available by clicking the round button and opening the "Prepare" menu, and selecting Properties. If this information must be displayed in the document footer I would investigate creating a custom Information Management Policy. This may be a good place to start.
Sharepoint 2007 with MS Office 2007 footers
We had a need for a document management solution and were hoping SharePoint 2007 would satisfy our needs. We felt our needs were relatively simple. We needed to manage versioning, have searching capabilities, and have an approval workflow. SharePoint handled these three aspects great out of the box. However, we also require that the footer on the Office 2007 (Word, Excel, and PowerPoint) documents reflect the document version, last person to modify, and last modification date. These things can be done with Office automation, but we have yet to find a complete solution. We first tried to do it on the checking-in and checked-in events and followed this path for a while; however, the complication we ran into was that after we made the changes to the document we had no way of preventing the save from updating the version number. This resulted in something similar to this: Document checked-in – the document version should be v0.1, however it is v0.2 because we save the document after the footer is replaced. If we look in the document history, there are 2 separate versions: v0.1 does not have the footer; v0.2 has the footer but it says v0.1, as that is the version the document was at when it was replaced. This is an unacceptable solution for us as we want the process to be completely handled on the user side, so they would have full control to revert back to a version where the footer would be incorrect and not contain the correct data. When we attempted to create a custom approval/check-in workflow we found that the same problem was present. The footer is necessary so that hard-copies can be traced back to their electronic counterpart. Another solution that was proposed to us was to build plugins for Office that would handle the replacement of the footer. This is inadequate for our needs as it requires a client-side deployment of our plugins, which is undesirable to our clients. What we are looking for is a clean solution to this problem.
[ "Here is a blog post which seem to be exactly the solution of your problem.\nBasically they create a custom field in the document library and use event receivers to keep the current version of the document in this field.\nThe \"trick\" is that on the client side this custom field shows up as a property of the document the value of which you can easily embed into the document's contents.\nI'm not sure why changing the field won't increase the version of the document, but I guess it is because you're only changing metadata, not the actual document. \nThey do use a little VBA script which runs on the client side, but it doesn't require any client side deployment as it is downloaded with the document. However I'm not sure if any security settings changes on the client side may be needed to allow the script to run.\n", "Does this information need to be in the footer? A lot of the information is available within the Office 2007 application. If you click on the round button in the upper left, and select \"Server\", you can view the version history, a lot of the other properties are available by clicking the round button and opening the \"Prepare\" menu, and selecting Properties.\nIf this information must be displayed in the document footer I would investigate creating a custom Information Management Policy. This may be a good place to start.\n" ]
[ 1, 0 ]
[]
[]
[ "ms_office", "sharepoint" ]
stackoverflow_0000059186_ms_office_sharepoint.txt
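The event-receiver half of the approach described in the first answer of the SharePoint entry above might look like this C# sketch against the WSS 3.0 object model. It is a sketch under stated assumptions: "Document Version" is an assumed custom column on the library, not a built-in field, and SystemUpdate(false) is used because it writes the field without creating another version.

    using Microsoft.SharePoint;

    public class VersionStampReceiver : SPItemEventReceiver
    {
        // Fires after a document is checked in.
        public override void ItemCheckedIn(SPItemEventProperties properties)
        {
            DisableEventFiring(); // avoid re-triggering this receiver
            try
            {
                SPListItem item = properties.ListItem;
                // Copy the current version label into the assumed custom column.
                item["Document Version"] = item.File.UIVersionLabel;
                // SystemUpdate(false) saves the field without bumping the version.
                item.SystemUpdate(false);
            }
            finally
            {
                EnableEventFiring();
            }
        }
    }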
Q: Suggestions for migrating ASP.net app from 1.1 forward I am recently in charge of an older app written in C# using asp.net 1.1. Are there any resources to guide me in converting the application to a newer version of the .NET Framework? My main hesitation is that there are tons of customized DataGrids in the app as it is written now, and since so much of the code needs to be rewritten to use GridViews ... is it worth trying to convert the grids in the application to use Silverlight in the attempt to move this code into the future? A: I had a similar experience, and the only thing that we had to replace was a third-party control that we were using in the 1.1 app, and the vendor had gone out of business and never released a version that worked with .NET 2.0. We ended up replacing it fairly easily with an AJAX Control Toolkit control. Other than that, the compiler does a pretty good job of telling you what to do with respect to deprecated method calls. I'd suggest making a copy of the code and upgrading the site in Visual Studio and see what happens. Just open the solution in Visual Studio 2005 or 2008, the IDE will walk you through the upgrade automatically. Get it to compile, then if you have any documented tests you should run through them. If not, you'll want to plan testing to make sure all your functionality still works like it did before the upgrade. Migrating to Silverlight sounds like fun, but if you can get it upgraded and working, I'd probably push that off until a later release -- my experience tells me that you might get into trouble if you bite off too much at once if there is no show-stopping technical reason. A: This MSDN document may be useful to you as you upgrade your application, it contains lists of breaking changes between 1.1 and 2.0, and workarounds for resolving them: Breaking Changes in .NET Framework 2.0 A: I would suggest that as part of the upgrade you opt to move to a Web Application Project rather than a Web Site Project, as the former is conceptually similar to the VS2003 web project model. Here's a nice short post summarising the differences: http://maordavid.blogspot.com/2007/06/aspnet-20-web-site-vs-web-application.html As others have said, don't worry too much about the DataGrids, the upgraded site should be backwards-compatible in this respect. A: Regarding DataGrids - I don't think you have too much to worry about, DataGrids still work in current versions. It's just that going forward, you should use GridViews. I am sure there are other things you may want to check into though, deeper framework issues. But I don't know enough about those things to speak to that particular point.
Suggestions for migrating ASP.net app from 1.1 forward
I am recently in charge of an older app written in C# using asp.net 1.1. Are there any resources to guide me in converting the application to a newer version of the .NET Framework? My main hesitation is that there are tons of customized DataGrids in the app as it is written now, and since so much of the code needs to be rewritten to use GridViews ... is it worth trying to convert the grids in the application to use Silverlight in the attempt to move this code into the future?
[ "I had a similar experience, and the only thing that we had to replace was a third-party control that we were using in the 1.1 app, and the vendor had gone out of business an never released a version that worked with .NET 2.0. We ended up replacing it fairly easily with an AJAX Control Toolkit control.\nOther than that, the compiler does a pretty good job of telling you what to do with respect to deprecated method calls.\nI'd suggest making a copy of the code and upgrading the site in Visual Studio and see what happens. Just open the solution in Visual Studio 2005 or 2008, the IDE will walk you through the upgrade automatically. Get it to compile, then if you have any documented tests you should run through them. If not, you'll want to plan testing to make sure all your functionality still works like it did before the upgrade.\nMigrating to Silverlight sounds like fun, but if you can get it upgraded and working, I'd probably push that off until a later release -- my experience tells me that you might get into trouble if you bite off too much at once if there is no show-stopping technical reason.\n", "This MSDN document may be useful to you as you upgrade your application, it contains lists of breaking changes between 1.1 and 2.0, and work arounds for resolving them:\nBreaking Changes in .NET Framework 2.0\n", "I would suggest that as part of the upgrade you opt to move to a Web Application Project rather than a Web Site Project, as the former is conceptually similar to the VS2003 web project model.\nHere's a nice short post summarising the differences:\nhttp://maordavid.blogspot.com/2007/06/aspnet-20-web-site-vs-web-application.html\nAs others have said, don't worry too much about the DataGrids, the upgraded site should be backwards-compatible in this respect.\n", "Regarding DataGrids - I don't think you have too much to worry about, DataGrids still work in current versions. It's just that going forward, you should use GridViews.\nI am sure there are other things you may want to check into though, deeper framework issues. But I don't know enough about those things to speak to that particular point.\n" ]
[ 2, 2, 1, 0 ]
[]
[]
[ "asp.net", "silverlight" ]
stackoverflow_0000042879_asp.net_silverlight.txt
Q: subversion tags and branches Has anybody come up with a better technique for managing tags and branches in subversion than what is generally recommended (the parallel directories called 'tags' and 'branches')? A: Using the repository namespace to convey information like branches / tags / etc is fundamentally the SVN model; if what you want is a different model, you probably really want something other than SVN. The lack of metadata like CVS-style labels in SVN is an intentional design decision. No matter what arrangement of branches/tags/projects you choose in your tree, it's all going to reduce to sets of parallel directories for each purpose. What's left is just choosing the right naming strategy for your branches and tags to make things more clear to you. One convention that I'm fond of is a separation between full heavyweight branches and lightweight "twigs". The convention in the group where I work is that long-lived development goes in a branch, and the release engineers must know of and be partially responsible for each branch, but that any engineer can create a short-lived twig to use as scratch space for a problem that's too large to fit into one checkin but not massive enough to require release engineering support. Twigs here live in a separate parallel 'twigs' directory similar to branches, and the naming convention often has the creator's user ID and the bug ID number for the issue the twig is intended to address in it. A: We use a "trunk, tag, branch and stream" strategy. The "trunk" is where the most current version of whatever is out in production is supposed to be placed. A "tag" is where a "copy to" occurs when a stream is complete and we need to store the status of the stream for archival purposes. It also allows development to continue from a specific point. A "branch" is when something completely different than mainstream development is to take place. Usually, branches are very rare. "Streams" are what we most use. A stream of development is a task-based focus, such as a stream for a particular fix or development effort (for instance, a change requests' completion). Streams are allowed to merge with each other, but different streams are ranked based on svn strategy. For instance, we had one stream for a cr release, and another stream for pushing out app support releases. Since the CR stream had to incorporate the app support fixes in addition to its own changes, it was ranked higher. Streams that are higher ranked have lower streams (as needed) merged into them. Finally, a stream becomes production-ready. It is tagged, and then "copied to" trunk, which is then used (normally, although sometimes tags are used) as the base for further streams. The best use of streams, however, are for short tasks that take less than two weeks to complete. These streams can be merged quickly into multiple, higher ranked streams, which then get merged later into any other higher ranked streams. For example, since app support was lower than cr, any app support quick fix could be copied to a stream and then worked on, merged to app support which would then be merged to the cr stream. A: The only other thing you can do besides having parallel directories when you have Branches is to do an SVN Switch between two branches whenever you want to work on one or the other. Perhaps you should clarify what you want to be "better" about this system and people could make suggestions.
subversion tags and branches
Has anybody come up with a better technique for managing tags and branches in subversion than what is generally recommended (the parallel directories called 'tags' and 'branches')?
[ "Using the repository namespace to convey information like branches / tags / etc is fundamentally the SVN model; if what you want is a different model, you probably really want something other than SVN.\nThe lack of metadata like CVS-style labels in SVN is an intentional design decision. No matter what arrangement of branches/tags/projects you choose in your tree, it's all going to reduce to sets of parallel directories for each purpose. What's left is just choosing the right naming strategy for your branches and tags to make things more clear to you.\nOne convention that I'm fond of is a separation between full heavyweight branches and lightweight \"twigs\". The convention in the group where I work is that long-lived development goes in a branch, and the release engineers must know of and be partially responsible for each branch, but that any engineer can create a short-lived twig to use as scratch space for a problem that's too large to fit into one checkin but not massive enough to require release engineering support. Twigs here live in a separate parallel 'twigs' directory similar to branches, and the naming convention often has the creator's user ID and the bug ID number for the issue the twig is intended to address in it.\n", "We use a \"trunk, tag, branch and stream\" strategy.\nThe \"trunk\" is where the most current version of whatever is out in production is supposed to be placed.\nA \"tag\" is where a \"copy to\" occurs when a stream is complete and we need to store the status of the stream for archival purposes. It also allows development to continue from a specific point.\nA \"branch\" is when something completely different than mainstream development is to take place. Usually, branches are very rare.\n\"Streams\" are what we most use. A stream of development is a task-based focus, such as a stream for a particular fix or development effort (for instance, a change requests' completion). Streams are allowed to merge with each other, but different streams are ranked based on svn strategy. For instance, we had one stream for a cr release, and another stream for pushing out app support releases. Since the CR stream had to incorporate the app support fixes in addition to its own changes, it was ranked higher. Streams that are higher ranked have lower streams (as needed) merged into them. Finally, a stream becomes production-ready. It is tagged, and then \"copied to\" trunk, which is then used (normally, although sometimes tags are used) as the base for further streams.\nThe best use of streams, however, are for short tasks that take less than two weeks to complete. These streams can be merged quickly into multiple, higher ranked streams, which then get merged later into any other higher ranked streams. For example, since app support was lower than cr, any app support quick fix could be copied to a stream and then worked on, merged to app support which would then be merged to the cr stream.\n", "The only other thing you can do besides having parallel directories when you have Branches is to do an SVN Switch between two branches whenever you want to work on one or the other. Perhaps you should clarify what you want to be \"better\" about this system and people could make suggestions.\n" ]
[ 3, 3, 0 ]
[]
[]
[ "branch", "svn", "tags" ]
stackoverflow_0000066633_branch_svn_tags.txt
Q: how to get through spam filters? I sent 3 emails last week as replies from our website. None of them were received! One went to Yahoo, one to Hotmail, and one to an overseas domain. I am wondering if it's not a good idea to open a Yahoo account with our domain name as the user just to reply to prospective buyers. A: Your mail server's IP may have been blacklisted. This is common on shared servers. http://www.mxtoolbox.com/blacklists.aspx A: First, check dnsbl.info to see if your mailserver's IP is blocked by any of the blacklists. If they are, contact the blacklist administrator to investigate removing the block. A: If your email is business critical, then you need to get a dedicated server with a white-hat hosting company, control over DNS to set up your SPF/SenderID record, and to register with the Hotmail, AOL and Yahoo postmasters for whitelisting and feedback loops. Most of these will only accept requests for dedicated servers, where you have 100% control over the email they send. If you are using an online contact form, make people double-enter their email address and check the entries match - otherwise you'll have no end of typos, which are naturally undeliverable and frustrating for both you and your customers. A: You could also try looking at Gmail for domains. It's what I use and so far I haven't had a problem with any spam filters. Also make sure that you are not writing the content of the message in a way where a spam filter could flag it as spam. There are some guides on the net somewhere. I found out that by removing the word "free" from the message the emails started going through (before I was on Gmail).
how to get through spam filters?
I sent 3 emails last week as replies from our website. None of them were received! One went to Yahoo, one to Hotmail, and one to an overseas domain. I am wondering if it's not a good idea to open a Yahoo account with our domain name as the user just to reply to prospective buyers.
[ "Your mail server's IP may have been black listed. This is common on shared servers. \nhttp://www.mxtoolbox.com/blacklists.aspx \n", "First, check dnsbl.info to see if your mailserver's IP is blocked by any of the blacklists. If they are, contact the blacklist administrator to investigate removing the block.\n", "If your email is business critical, then you need to get a dedicated server with a white-hat hosting company, control over DNS to set up your SPF/SenderID record, and to register with the Hotmail, AOL and Yahoo postmasters for whitelisting and feedback loops. Most of these will only accept requests for dedicated servers, where you have 100% control over the email they send.\nIf you are using an online contact form, make people double-enter their email address and check the entries match - otherwise you'll have no end of typos, which are naturally undeliverable and frustrating for both you and your customers.\n", "You could also try looking at gmail for domains. It's what I use and so far I haven't had a problem withany spam filters. Also make sure that you are not writing the content of the message to where a spam filter could flag it as spam. There's some guides on the net somewhere. I found out that by removing the word \"free\" from the message the emails started going though (before I was on gmail).\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "email", "email_bounces" ]
stackoverflow_0000068495_email_email_bounces.txt
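For the SPF/SenderID record mentioned in the third answer of the spam-filter entry above, the DNS side is a single TXT record on the sending domain. A minimal example, with a placeholder domain and address:

    example.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.5 -all"

This authorises the domain's A and MX hosts plus the listed IP to send its mail, and the trailing -all asks receivers to reject everything else.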
Q: Drawing a custom label on a pie chart in Yahoo's Flash Library ASTRA Has anyone looked at Yahoo's ASTRA? It's fairly nifty, but I had some issues creating a custom label for a pie chart. They have an example for a line chart, which overrides an axis's series's label renderer. My solution was to override the myPieChart.dataTipFunction. For data that looks like: myPieChart.dataProvider = [ { category: "Groceries", cost: 50 }, { category: "Transportation", cost: 175} ] myPieChart.dataField = "cost"; myPieChart.categoryField = "category"; I wrote a function like this: import com.yahoo.astra.fl.charts.series.* myPieChart.dataTipFunction = function (obj:Object, index:int, series:ISeries):String { return obj.category + "\n$" + obj.cost; }; There's ceil(2.718281828459045) problems with this: I'm directly calling the category and cost properties of the data provider. The names are actually configurable when setting up the chart, I'd like to maintain that flexibility. The default data tip would show the category, the cost (without a dollar sign), and the percentage it makes up in the pie chart. So here, I've lost the percentage. I just have no idea which property of what would hold that. It might be part of the series. I probably only need to override the dataItemRenderer for the cost part of the series, but I don't know how to access it. The documentation is a little ... lacking there. Normally I would just look at the default implementation of the dataTipFunction but it's all inside a compiled shm that's part of the components distributed from yahoo. Can anyone help me complete this overridden function with percentage information and the flexibility mentioned in point 1? A: Okay... so no-one's tried Astra, or people just avoid Flash questions. After a lot of guess work it turns out I needed to cast the series to a PieSeries and then work with those member functions, as the ISeries was useless on it's own. myPieChart.dataTipFunction = function (item:Object, index:int, series:ISeries):String { var oPieSeries:PieSeries = series as PieSeries; return oPieSeries.itemToCategory(item,index) + "\n$" + oPieSeries.itemToData(item) + "\n" + Number(oPieSeries.itemToPercentage(item)).toFixed(2) + "%"; }; A: The Astra components are distributed with the complete source code. Flash CS3 components use compiled shims because otherwise you'd need to manually add the raw source files to your classpath. As a bonus, they also improve compile times because they're already built for you. Look in the "Source" folder in the Astra zip file, and you'll find all the ActionScript classes for the Astra components.
Drawing a custom label on a pie chart in Yahoo's Flash Library ASTRA
Has anyone looked at Yahoo's ASTRA? It's fairly nifty, but I had some issues creating a custom label for a pie chart. They have an example for a line chart, which overrides an axis's series's label renderer. My solution was to override the myPieChart.dataTipFunction. For data that looks like: myPieChart.dataProvider = [ { category: "Groceries", cost: 50 }, { category: "Transportation", cost: 175} ] myPieChart.dataField = "cost"; myPieChart.categoryField = "category"; I wrote a function like this: import com.yahoo.astra.fl.charts.series.* myPieChart.dataTipFunction = function (obj:Object, index:int, series:ISeries):String { return obj.category + "\n$" + obj.cost; }; There's ceil(2.718281828459045) problems with this: I'm directly calling the category and cost properties of the data provider. The names are actually configurable when setting up the chart, I'd like to maintain that flexibility. The default data tip would show the category, the cost (without a dollar sign), and the percentage it makes up in the pie chart. So here, I've lost the percentage. I just have no idea which property of what would hold that. It might be part of the series. I probably only need to override the dataItemRenderer for the cost part of the series, but I don't know how to access it. The documentation is a little ... lacking there. Normally I would just look at the default implementation of the dataTipFunction but it's all inside a compiled shm that's part of the components distributed from yahoo. Can anyone help me complete this overridden function with percentage information and the flexibility mentioned in point 1?
[ "Okay... so no-one's tried Astra, or people just avoid Flash questions.\nAfter a lot of guess work it turns out I needed to cast the series to a PieSeries and then work with those member functions, as the ISeries was useless on it's own.\nmyPieChart.dataTipFunction = \n function (item:Object, index:int, series:ISeries):String {\n var oPieSeries:PieSeries = series as PieSeries;\n return oPieSeries.itemToCategory(item,index) + \"\\n$\" + \n oPieSeries.itemToData(item) + \"\\n\" + \n Number(oPieSeries.itemToPercentage(item)).toFixed(2) + \"%\";\n };\n\n", "The Astra components are distributed with the complete source code. Flash CS3 components use compiled shims because otherwise you'd need to manually add the raw source files to your classpath. As a bonus, they also improve compile times because they're already built for you. Look in the \"Source\" folder in the Astra zip file, and you'll find all the ActionScript classes for the Astra components.\n" ]
[ 2, 0 ]
[]
[]
[ "actionscript_3", "datatipfunction", "flash" ]
stackoverflow_0000033590_actionscript_3_datatipfunction_flash.txt
Q: How do I do a string replacement in a PowerShell function? How do I convert function input parameters to the right type? I want to return a string that has part of the URL passed into it removed. This works, but it uses a hard-coded string: function CleanUrl($input) { $x = "http://google.com".Replace("http://", "") return $x } $SiteName = CleanUrl($HostHeader) echo $SiteName This fails: function CleanUrl($input) { $x = $input.Replace("http://", "") return $x } Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'. At M:\PowerShell\test.ps1:13 char:21 + $x = $input.Replace( <<<< "http://", "") A: Steve's answer works. The problem with your attempt to reproduce ESV's script is that you're using $input, which is a reserved variable (it automatically collects multiple piped input into a single variable). You should, however, use .Replace() unless you need the extra feature(s) of -replace (it handles regular expressions, etc). function CleanUrl([string]$url) { $url.Replace("http://","") } That will work, but so would: function CleanUrl([string]$url) { $url -replace "http://","" } Also, when you invoke a PowerShell function, don't use parenthesis: $HostHeader = "http://google.com" $SiteName = CleanUrl $HostHeader Write-Host $SiteName Hope that helps. By the way, to demonstrate $input: function CleanUrls { $input -replace "http://","" } # Notice these are arrays ... $HostHeaders = @("http://google.com","http://stackoverflow.com") $SiteNames = $HostHeader | CleanUrls Write-Output $SiteNames A: The concept here is correct. The problem is with the variable name you have chosen. $input is a reserved variable used by PowerShell to represent an array of pipeline input. If you change your variable name, you should not have any problem. PowerShell does have a replace operator, so you could make your function into function CleanUrl($url) { return $url -replace 'http://' } A: function CleanUrl([string] $url) { return $url.Replace("http://", "") }
How do I do a string replacement in a PowerShell function?
How do I convert function input parameters to the right type? I want to return a string that has part of the URL passed into it removed. This works, but it uses a hard-coded string: function CleanUrl($input) { $x = "http://google.com".Replace("http://", "") return $x } $SiteName = CleanUrl($HostHeader) echo $SiteName This fails: function CleanUrl($input) { $x = $input.Replace("http://", "") return $x } Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'. At M:\PowerShell\test.ps1:13 char:21 + $x = $input.Replace( <<<< "http://", "")
[ "Steve's answer works. The problem with your attempt to reproduce ESV's script is that you're using $input, which is a reserved variable (it automatically collects multiple piped input into a single variable).\nYou should, however, use .Replace() unless you need the extra feature(s) of -replace (it handles regular expressions, etc).\nfunction CleanUrl([string]$url)\n{\n $url.Replace(\"http://\",\"\")\n}\n\nThat will work, but so would:\nfunction CleanUrl([string]$url)\n{\n $url -replace \"http://\",\"\"\n}\n\nAlso, when you invoke a PowerShell function, don't use parenthesis:\n$HostHeader = \"http://google.com\"\n$SiteName = CleanUrl $HostHeader\nWrite-Host $SiteName\n\nHope that helps. By the way, to demonstrate $input:\nfunction CleanUrls\n{\n $input -replace \"http://\",\"\"\n}\n\n# Notice these are arrays ...\n$HostHeaders = @(\"http://google.com\",\"http://stackoverflow.com\")\n$SiteNames = $HostHeader | CleanUrls\nWrite-Output $SiteNames\n\n", "The concept here is correct.\nThe problem is with the variable name you have chosen. $input is a reserved variable used by PowerShell to represent an array of pipeline input. If you change your variable name, you should not have any problem.\nPowerShell does have a replace operator, so you could make your function into\nfunction CleanUrl($url)\n{\n return $url -replace 'http://'\n}\n\n", "function CleanUrl([string] $url)\n{\n return $url.Replace(\"http://\", \"\")\n}\n\n" ]
[ 19, 16, 5 ]
[ "This worked for me:\nfunction CleanUrl($input)\n{\n return $input.Replace(\"http://\", \"\")\n}\n\n" ]
[ -4 ]
[ "function", "powershell", "replace", "string" ]
stackoverflow_0000015062_function_powershell_replace_string.txt
Q: Top tips for secure web applications I am looking for easy steps that are simple and effective in making a web application more secure. What are your top tips for secure web applications, and what kind of attack will they stop? A: Microsoft Technet has an excellent article: Ten Tips for Designing, Building, and Deploying More Secure Web Applications Here are the topics for the tips answered in that article: Never Directly Trust User Input Services Should Have Neither System nor Administrator Access Follow SQL Server Best Practices Protect the Assets Include Auditing, Logging, and Reporting Features Analyze the Source Code Deploy Components Using Defense in Depth Turn Off In-Depth Error Messages for End Users Know the 10 Laws of Security Administration Have a Security Incident Response Plan A: Do not trust user input. Validation of expected data types and formatting is essential to avoiding SQL injection and Cross-Site Scripting (XSS) attacks. A: Escape user-provided content to avoid XSS attacks. Using parameterised SQL or stored procedures to avoid SQL injection attacks. Running the webserver as an unprivileged account to minimise attacks on the OS. Setting the webserver directories to an unprivileged account, again, to minimise attacks on the OS. Setting up unprivileged accounts on the SQL server and using them for the application to minimise attacks on the DB. For more in-depth information, there is always the OWASP Guide to Building Secure Web Applications and Web Services A: Some of my favourites: Filter Input, Escape Output to help guard against XSS or SQL injection attacks Use prepared statements for database queries (SQL injection attacks) Disable unused user accounts on your server to prevent brute force password attacks Remove Apache version info from HTTP header (ServerSignature=Off, ServerTokens=ProductOnly) Run your web server in a chroot jail to limit damage if compromised A: OWASP is your friend. Their Top Ten List of web application security vulnerabilities includes a description of each problem and how to defend against it. The site is a good resource for learning more about web application security and is a wealth of tools and testing techniques as well. A: Set the secure flag on cookies for SSL applications. Otherwise there is always a hijacking attack that is much easier to conduct than breaking the crypto. This is the essence of CVE-2002-1152.
Top tips for secure web applications
I am looking for easy steps that are simple and effective in making a web application more secure. What are your top tips for secure web applications, and what kind of attack will they stop?
[ "Microsoft Technet has en excellent article:\nTen Tips for Designing, Building, and Deploying More Secure Web Applications\nHere are the topics for the tips answered in that article:\n\nNever Directly Trust User Input\nServices Should Have Neither System nor Administrator Access\nFollow SQL Server Best Practices\nProtect the Assets\nInclude Auditing, Logging, and Reporting Features\nAnalyze the Source Code\nDeploy Components Using Defense in Depth\nTurn Off In-Depth Error Messages for End Users\nKnow the 10 Laws of Security Administration\nHave a Security Incident Response Plan\n\n", "Do not trust user input.\nValidation of expected data types and formatting is essential to avaoiding SQL injection and Cross-Site Scripting (XSS) attacks.\n", "\nEscape user provided content to avoid XSS attacks.\nUsing paremeterised SQL or stored procedures to avoid SQL Injections attacks.\nRunning the webserver as an unprivileged account to minimise attacks on the OS.\nSetting the webserver directories to an unprivileged account, again, to minimise attacks on the OS.\nSetting up unprivileged accounts on the SQL server and using them for the application to minimise attacks on the DB.\n\nFor more in depth information, there is always the OWASP Guide to Building Secure Web Applications and Web Services\n", "Some of my favourites:\n\nFilter Input, Escape Output to help guard against XSS or SQL injection attacks\nUse prepared statements for database queries (SQL injection attacks)\nDisable unused user accounts on your server to prevent brute force password attacks\nRemove Apache version info from HTTP header (ServerSignature=Off, ServerTokens=ProductOnly)\nRun your web server in a chroot jail to limit damage if compromised\n\n", "OWASP is your friend. Their Top Ten List of web application security vulnerabilities includes a description of each problem and how to defend against it. The site is a good resource for learning more about web application security and is a wealth of tools and and testing techniques as well.\n", "Set the secure flag on cookies for SSL applications. Otherwise there is always a highjacking attack that is much easier to conduct than breaking the crypto. This is the essence of CVE-2002-1152.\n" ]
[ 10, 6, 4, 1, 1, 0 ]
[]
[]
[ "security", "web_applications" ]
stackoverflow_0000047323_security_web_applications.txt
Q: How To Generate A Javascript File From The Server I'm using BlogEngine.NET (a fine, fine tool) and I was playing with the TinyMCE editor and noticed that there's a place for me to create a list of external links, but it has to be a javascript file: external_link_list_url : "example_link_list.js" this is great, of course, but the list of links I want to use needs to be generated dynamically from the database. This means that I need to create this JS file from the server on page load. Does anyone know of a way to do this? Ideally, I'd like to just overwrite this file each time the editor is accessed. Thanks! A: I would create an HTTPHandler that responds with the desired data read from the db. Just associate the HTTPHandler with the particular filename 'example_link_list.js' in your web-config. Make sure you set context.Response.ContentType = "text/javascript"; then just context.Response.Write(); your list of external links A: if your 3rd party code doesn't require that the javascript file has the .js extension, then you can create your HTTPHandler and map it to either .axd or .ashx extension in web.config only - no need to change IIS settings as these extensions are automatically configured by IIS to be handled by asp.net. <system.web> <httpHandlers> <add verb="*" path="example_link_list.axd" type= "MyProject.MyTinyMCE, MyAssembly" /> </httpHandlers> </system.web> This instructs IIS to pass all requests for 'example_link_list.axd' (via POST and GET) to the ProcessRequest method of MyProject.MyTinyMCE class in MyAssembly assembly (the name of your .dll) You could alternatively use Visual Studio's 'Generic Handler' template instead - this will create an .ashx file and code-behind class for you. No need to edit web.config either. Using an HTTPHandler is preferable to using an .aspx page as .aspx requests have a lot more overheads associated (all of the page events etc.) A: If you can't change the file extension (and just return plain text, the caller shouldn't care about the file extension, js is plain text) then you can set up a handler on IIS (assuming it's IIS) to handle javascript files. See this link - http://msdn.microsoft.com/en-us/library/bb515343.aspx - for how to setup IIS 6 within windows to handle any file extension. Then setup a HttpHandler to receive requests for .js (Just google httphandler and see any number of good tutorials like this one: http://www.devx.com/dotnet/Article/6962/0/page/3 ) A: Just point it at an aspx file and have that file spit out whatever javascript you need. I did this recently with TinyMCE in PHP and it worked like a charm. external_link_list_url : "example_link_list.aspx" In your aspx file: <%@ Page Language="C#" AutoEventWireup="false" CodeFile="Default.aspx.cs" Inherits="Default" %> in your code-behind (C#): using System; public partial class Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("var tinyMCELinkList = new Array("); // put all of your links here in the right format.. Response.Write(string.Format("['{0}', '{1}']", "name", "url")); Response.Write(");"); } }
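The answers above are ASP.NET-specific, but the underlying pattern -- an endpoint that builds text/javascript from the database at request time instead of writing a static file -- is general. A rough sketch of the same idea in Python/Flask, where get_links() is a hypothetical stand-in for the real database query:

```python
import json
from flask import Flask, Response

app = Flask(__name__)

def get_links():
    # Hypothetical stand-in for the real database lookup.
    return [("Home", "/"), ("About", "/about")]

@app.route("/example_link_list.js")
def link_list():
    pairs = ", ".join(json.dumps([name, url]) for name, url in get_links())
    body = "var tinyMCELinkList = new Array(%s);" % pairs
    # The Content-Type header is what lets TinyMCE treat this as a script.
    return Response(body, mimetype="text/javascript")
```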
How To Generate A Javascript File From The Server
I'm using BlogEngine.NET (a fine, fine tool) and I was playing with the TinyMCE editor and noticed that there's a place for me to create a list of external links, but it has to be a javascript file: external_link_list_url : "example_link_list.js" this is great, of course, but the list of links I want to use needs to be generated dynamically from the database. This means that I need to create this JS file from the server on page load. Does anyone know of a way to do this? Ideally, I'd like to just overwrite this file each time the editor is accessed. Thanks!
[ "I would create an HTTPHandler that responds with the desired data read from the db. Just associate the HTTPHandler with the particular filename 'example_link_list.js' in your web-config. Make sure you set \ncontext.Response.ContentType = \"text/javascript\";\n\nthen just context.Response.Write(); your list of external links\n", "if your 3rd party code doesn't require that the javascript file has the .js extension, then you can create your HTTPHandler and map it to either .axd or .ashx extension in web.config only - no need to change IIS settings as these extensions are automatically configured by IIS to be handled by asp.net.\n<system.web>\n <httpHandlers>\n <add verb=\"*\" path=\"example_link_list.axd\" type= \"MyProject.MyTinyMCE, MyAssembly\" />\n </httpHandlers>\n</system.web>\n\nThis instructs IIS to pass all requests for 'example_link_list.axd' (via POST and GET) to the ProcessRequest method of MyProject.MyTinyMCE class in MyAssembly assembly (the name of your .dll)\nYou could alternatively use Visual Studio's 'Generic Handler' template instead - this will create an .ashx file and code-behind class for you. No need to edit web.config either.\nusing an HTTPHandler is preferrable to using an .aspx page as .aspx requests have a lot more overheads associated (all of the page events etc.)\n", "If you can't change the file extension (and just return plain text, the caller shouldn't care about the file extension, js is plain text) then you can set up a handler on IIS (assuming it's IIS) to handle javascript files.\nSee this link - http://msdn.microsoft.com/en-us/library/bb515343.aspx - for how to setup IIS 6 within windows to handle any file extension. Then setup a HttpHandler to receive requests for .js (Just google httphandler and see any number of good tutorials like this one: http://www.devx.com/dotnet/Article/6962/0/page/3 )\n", "Just point it at an aspx file and have that file spit out whatever javascript you need. I did this recently with TinyMCE in PHP and it worked like a charm.\nexternal_link_list_url : \"example_link_list.aspx\"\nIn your aspx file:\n<%@ Page Language=\"C#\" AutoEventWireup=\"false\" CodeFile=\"Default.aspx.cs\" Inherits=\"Default\" %>\nin your code-behind (C#):\n\nusing System;\n\npublic partial class Default : System.Web.UI.Page\n{\n protected void Page_Load(object sender, EventArgs e)\n {\n Response.Write(\"var tinyMCELinkList = new Array(\");\n // put all of your links here in the right format..\n Response.Write(string.Format(\"['{0}', '{1}']\", \"name\", \"url\"));\n Response.Write(\");\");\n }\n}\n\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "asp.net", "blogengine.net", "c#", "javascript", "tinymce" ]
stackoverflow_0000068067_asp.net_blogengine.net_c#_javascript_tinymce.txt
Q: How can you tell if viewstate in an ASP.Net application has been tampered with? During a discussion about security, a developer on my team asked if there was a way to tell if viewstate has been tampered with. I'm embarrassed to say that I didn't know the answer. I told him I would find out, but thought I would give someone on here a chance to answer first. I know there is some automatic validation, but is there a way to do it manually if event validation is not enabled? A: EnableViewStateMac page directive A: ViewState by default is MIME encoded and hashed with a MAC key (either from the machine or from the web.config file), which helps prevent tampering (i.e. decoding blows up). You can also encrypt and compress ViewState if you like for further protection and less overhead, respectively. See MS ViewState and CodeProject.com A: You might be able to do it manually, but you'd just be implementing the same algorithm that's already there for you. It's generally a bad idea to disable the ViewState validation on a page.
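The MAC check the answers refer to is ordinary keyed hashing: the framework appends an HMAC of the serialized state and recomputes it on postback, so any modification fails validation. A language-neutral sketch of the idea in Python -- this is not ASP.NET's actual wire format, and the key is a made-up placeholder for the machineKey:

```python
import base64
import hashlib
import hmac

SECRET = b"stand-in-for-machineKey"  # placeholder, not a real key

def protect(state: bytes) -> bytes:
    mac = hmac.new(SECRET, state, hashlib.sha256).digest()
    return base64.b64encode(state + mac)

def validate(blob: bytes) -> bytes:
    raw = base64.b64decode(blob)
    state, mac = raw[:-32], raw[-32:]  # sha256 digests are 32 bytes
    expected = hmac.new(SECRET, state, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("state failed MAC validation")
    return state

blob = protect(b"serialized page state")
validate(blob)  # round-trips fine

raw = base64.b64decode(blob)
tampered = base64.b64encode(raw[:-1] + bytes([raw[-1] ^ 1]))  # flip one bit
try:
    validate(tampered)
except ValueError as exc:
    print(exc)  # tampering detected
```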
How can you tell if viewstate in an ASP.Net application has been tampered with?
During a discussion about security, a developer on my team asked if there was a way to tell if viewstate has been tampered with. I'm embarrassed to say that I didn't know the answer. I told him I would find out, but thought I would give someone on here a chance to answer first. I know there is some automatic validation, but is there a way to do it manually if event validation is not enabled?
[ "EnableViewStateMac page directive\n", "ViewState by default is MIME encoded and hashed with a MAC key (either from the machine or from the web.config file), which helps prevent tampering (i.e. decoding blows up). You can also encrypt and compress ViewState if you like for further protection and less overhead, respectively. See MS ViewState and CodeProject.com\n", "You might be able to do it manually, but you'd just be implementing the same algorithm that's already there for you. It's generally a bad idea to disable the ViewState validation on a page.\n" ]
[ 6, 2, 0 ]
[]
[]
[ "asp.net", "viewstate" ]
stackoverflow_0000068764_asp.net_viewstate.txt
Q: DefaultButton in ASP.NET forms What is the best solution for defaultButton and "Enter key pressed" in ASP.NET 2.0-3.5 forms? A: Just add the "defaultbutton" attribute to the form and set it to the ID of the button you want to be the default. <form defaultbutton="button1" runat="server"> <asp:textbox id="textbox1" runat="server"/> <asp:button id="button1" text="Button1" runat="server"/> </form> NOTE: This only works in ASP.NET 2.0+ A: Since form submission on hitting the enter key is a part of life with HTML, you'll have to trap the Enter key using javascript and only allow it to go through when it's valid (such as within textareas). Check out http://brennan.offwhite.net/blog/2004/08/04/the-single-form-problem-with-aspnet/ for a good explanation.
DefaultButton in ASP.NET forms
What is the best solution for defaultButton and "Enter key pressed" in ASP.NET 2.0-3.5 forms?
[ "Just add the \"defaultbutton\" attribute to the form and set it to the ID of the button you want to be the default. \n<form defaultbutton=\"button1\" runat=\"server\">\n <asp:textbox id=\"textbox1\" runat=\"server\"/>\n <asp:button id=\"button1\" text=\"Button1\" runat=\"server\"/>\n</form> \nNOTE: This only works in ASP.NET 2.0+\n", "Since form submission on hitting the enter key is a part of life with HTML, you'll have to trap the Enter key using javascript and only allow it to go through when it's valid (such as within textareas). Check out http://brennan.offwhite.net/blog/2004/08/04/the-single-form-problem-with-aspnet/ for a good explanation.\n" ]
[ 4, 2 ]
[]
[]
[ "asp.net", "defaultbutton", "enter", "pressed" ]
stackoverflow_0000068688_asp.net_defaultbutton_enter_pressed.txt
Q: What is the best approach to moving a preexisting project from Flash 7/AS2 to Flex/AS3? I have a large codebase that targeted Flash 7, with a lot of AS2 classes. I'm hoping that I'll be able to use Flex for any new projects, but a lot of new stuff in our roadmap is additions to the old code. The syntax for AS2 and AS3 is generally the same, so I'm starting to wonder how hard it would be to port the current codebase to Flex/AS3. I know all the UI-related stuff would be iffy (currently the UI is generated at runtime with a lot of createEmptyMovieClip() and attachMovie() stuff), but the UI and controller/model stuff is mostly separated. Has anyone tried porting a large codebase of AS2 code to AS3? How difficult is it? What kinds of pitfalls did you run into? Any recommendations for approaches to doing this kind of project? A: Some notable problems I saw when attempting to convert a large number of AS2 classes to AS3: Package naming class your.package.YourClass { } becomes package your.package { class YourClass { } } Imports are required You must explicitly import any outside classes used -- referring to them by their fully qualified name is no longer enough. Interface methods can't be labelled 'public' This makes total sense, but AS2 will let you do it so if you have any they'll need to be removed. Explicit 'override' keyword Any functions that override a parent class function must be declared with the override keyword, much like C#. Along the same lines, if you have interfaces that extend other interfaces and redeclare functions, those overrides must be removed (again, as with public, this notation didn't make sense anyway but AS2 let you do it). All the Flash builtin stuff changed You alluded to this above, but it's now flash.display.MovieClip instead of just MovieClip, for example. There are a lot of specifics in this category, and I didn't get far enough to find them all, but there's going to be a lot of annoyance here. Conclusion I didn't get to work on this conversion to the point of success, but I was able in a matter of hours to write a quick C# tool that handled every aspect of this except the override keyword. Automating the imports can be tricky -- in my case the packages we use all start with a few root-level packages so they're easy to detect. A: First off, I hope you're not using eval() in your projects, since there is no equivalent in AS3. One of the things I would do is go through Adobe's migration guide (which is basically just an itemized list of what has changed) item by item and try to figure out if each item can be changed via a simple search and replace operation (possibly using a regex) or whether it's easier to just manually edit the occurrences to correspond to AS3. Probably in a lot of cases (especially if, as you said, the amount of code to be migrated is quite high) you'll be best off scripting the changes (i.e. using regex search & replace) and manually fixing any border cases where the automated changes have failed. Be prepared to set some time aside for a bit of debugging and running through some test cases as well. Also, as others have already mentioned, trying to combine AS2 SWFs with AS3 SWFs is not a good idea and doesn't really even work, so you'll definitely have to migrate all of the code in one project at once.
A: Here are some additional references for moving from AS2 to AS3: Grant Skinner's Introductory AS3 Workshop slidedeck http://gskinner.com/talks/as3workshop/ Lee Brimelow : 6 Reasons to learn ActionScript 3 http://www.adobe.com/devnet/actionscript/articles/six_reasons_as3.html Colin Moock : Essential ActionScript 3 (considered the "bible" for ActionScript developers): http://www.amazon.com/Essential-ActionScript-3-0/dp/0596526946 mike chambers [email protected] A: My experience has been that the best way to migrate to AS3 is in two phases - first structurally, and second syntactically. First, do rounds of refactoring where you stay in AS2, but get as close to AS3 architecture as you can. Naturally this includes moving all your frame scripts and #include scripts into packages and classes, but you can do more subtle things like changing all your event listeners and dispatchers to follow the AS3 flow (using static class properties for event types, and registering by method rather than by object). You'll also want to get rid of all your "built-in" events (such as onEnterFrame), and you'll want to take a close look at nontrivial mouse interaction (such as dragging) and keyboard interaction (such as detecting whether a key is pressed). This phase can be done incrementally. The second phase is to convert from AS2 to AS3 - changing "_x" to "x", and all the APIs, and so on. This can't be done incrementally, you have to just do as much as you can in one fell swoop and then start fixing all the compile errors. For this reason, the more you can do in the first phase, the more pain you avoid in the second phase. This process has worked for me on a reasonably large project, but I should note that the first phase requires a solid understanding of how AS3 is structured. If you're new to AS3, then you'll probably need to try building some of the functionality you'll need to be porting. For example, if your legacy code uses dragging and drop targets, you'll want to try implementing that in AS3 to understand how your code will have to change structurally. If you then refactor your AS2 with that in mind, the final syntax changes should go smoothly. The biggest pitfalls for me were the parts that involved a lot of attaching, duplicating and moving MovieClips, changing their depths, and so on. All that stuff can't really be rearchitected to look like AS3; you have to just mash it all into the newer way of thinking and then start fixing the bugs. One final note - I really wouldn't worry about stuff like import and override statements, at least not to the point of automating it. If you miss any, it will be caught by the compiler. But if you miss structural problems, you'll have a lot more pain. A: Migrating a bigger project like this from as2 will be more than a simple search and replace. The new syntax is fairly similar and simple to adapt (as lilserf mentioned) but if nothing else the fact that as3 is more strict and the new event model will most likely cause a lot of problems. You'll probably be better off by more or less rewriting almost everything from scratch, possibly using the old code as a guide. Migrating from as2 -> as3 in terms of knowledge is fairly simple though. If you know object oriented as2, moving on to as3 won't be a problem at all. You still don't have to use mxml for your UI unless you specifically want to.
Mxml just provides a quick way to build the UI (etc) but if you want to do it yourself with actionscript there's nothing stopping you (this would also probably be easier if you already have that UI in as2 code). Flex (Builder) is just a quick way to do stuff you may not want to do yourself, such as building the UI and binding data but essentially it's just creating a part of the .swf for you -- there's no magic to it ;)
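Several answers suggest scripting the mechanical part of the port with regex search-and-replace. A minimal sketch of such a tool in Python -- the rules shown are only the trivial property renames (e.g. _x to x), the list is nowhere near complete, and the structural changes still have to be made by hand:

```python
import re
from pathlib import Path

# A few of the purely mechanical AS2 -> AS3 renames; far from exhaustive.
RULES = [
    (re.compile(r"\b_x\b"), "x"),
    (re.compile(r"\b_y\b"), "y"),
    (re.compile(r"\b_alpha\b"), "alpha"),
    (re.compile(r"\b_visible\b"), "visible"),
]

def migrate(path: Path) -> None:
    source = path.read_text()
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    # Write alongside the original so nothing is destroyed.
    path.with_suffix(".as3.as").write_text(source)

for script in Path("src").rglob("*.as"):
    migrate(script)
```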
What is the best approach to moving a preexisting project from Flash 7/AS2 to Flex/AS3?
I have a large codebase that targeted Flash 7, with a lot of AS2 classes. I'm hoping that I'll be able to use Flex for any new projects, but a lot of new stuff in our roadmap is additions to the old code. The syntax for AS2 and AS3 is generally the same, so I'm starting to wonder how hard it would be to port the current codebase to Flex/AS3. I know all the UI-related stuff would be iffy (currently the UI is generated at runtime with a lot of createEmptyMovieClip() and attachMovie() stuff), but the UI and controller/model stuff is mostly separated. Has anyone tried porting a large codebase of AS2 code to AS3? How difficult is it? What kinds of pitfalls did you run into? Any recommendations for approaches to doing this kind of project?
[ "Some notable problems I saw when attempting to convert a large number of AS2 classes to AS3:\nPackage naming\nclass your.package.YourClass\n{\n}\n\nbecomes\npackage your.package\n{\n class YourClass\n {\n }\n}\n\nImports are required\nYou must explicitly import any outside classes used -- referring to them by their fully qualified name is no longer enough.\nInterface methods can't be labelled 'public'\nThis makes total sense, but AS2 will let you do it so if you have any they'll need to be removed.\nExplicit 'override' keyword\nAny functions that override a parent class function must be declared with the override keyword, much like C#. Along the same lines, if you have interfaces that extend other interfaces and redeclare functions, those overrides must be removed (again, as with public, this notation didn't make sense anyway but AS2 let you do it).\nAll the Flash builtin stuff changed\nYou alluded to this above, but it's now flash.display.MovieClip instead of just MovieClip, for example. There are a lot of specifics in this category, and I didn't get far enough to find them all, but there's going to be a lot of annoyance here.\nConclusion\nI didn't get to work on this conversion to the point of success, but I was able in a matter of hours to write a quick C# tool that handled every aspect of this except the override keyword. Automating the imports can be tricky -- in my case the packages we use all start with a few root-level packages so they're easy to detect.\n", "First off, I hope you're not using eval() in your projects, since there is no equivalent in AS3.\nOne of the things I would do is go through Adobe's migration guide (which is basically just an itemized list of what has changed) item by item and try to figure out if each item can be changed via a simple search and replace operation (possibly using a regex) or whether it's easier to just manually edit the occurrences to correspond to AS3. Probably in a lot of cases (especially if, as you said, the amount of code to be migrated is quite high) you'll be best off scripting the changes (i.e. using regex search & replace) and manually fixing any border cases where the automated changes have failed.\nBe prepared to set some time aside for a bit of debugging and running through some test cases as well.\nAlso, as others have already mentioned, trying to combine AS2 SWFs with AS3 SWFs is not a good idea and doesn't really even work, so you'll definitely have to migrate all of the code in one project at once.\n", "Here are some additional references for moving from AS2 to AS3:\nGrant Skinners Introductory AS3 Workshop slidedeck\nhttp://gskinner.com/talks/as3workshop/\nLee Brimelow : 6 Reasons to learn ActionScript 3\nhttp://www.adobe.com/devnet/actionscript/articles/six_reasons_as3.html\nColin Moock : Essential ActionScript 3 (considered the \"bible\" for ActionScript developers):\nhttp://www.amazon.com/Essential-ActionScript-3-0/dp/0596526946\nmike chambers\[email protected]\n", "My experience has been that the best way to migrate to AS3 is in two phases - first structurally, and second syntactically.\nFirst, do rounds of refactoring where you stay in AS2, but get as close to AS3 architecture as you can. Naturally this includes moving all your frame scripts and #include scripts into packages and classes, but you can do more subtle things like changing all your event listeners and dispatchers to follow the AS3 flow (using static class properties for event types, and registering by method rather than by object). 
You'll also want to get rid of all your \"built-in\" events (such as onEnterFrame), and you'll want to take a close look at nontrivial mouse interaction (such as dragging) and keyboard interaction (such as detecting whether a key is pressed). This phase can be done incrementally.\nThe second phase is to convert from AS2 to AS3 - changing \"_x\" to \"x\", and all the APIs, and so on. This can't be done incrementally, you have to just do as much as you can in one fell swoop and then start fixing all the compile errors. For this reason, the more you can do in the first phase, the more pain you avoid in the second phase.\nThis process has worked for me on a reasonably large project, but I should note that the first phase requires a solid understanding of how AS3 is structured. If you're new to AS3, then you'll probably need to try building some of the functionality you'll need to be porting. For example, if your legacy code uses dragging and drop targets, you'll want to try implementing that in AS3 to understand how your code will have to change structurally. If you then refactor your AS2 with that in mind, the final syntax changes should go smoothly.\nThe biggest pitfalls for me were the parts that involved a lot of attaching, duplicating and moving MovieClips, changing their depths, and so on. All that stuff can't really be rearchitected to look like AS3; you have to just mash it all into the newer way of thinking and then start fixing the bugs.\nOne final note - I really wouldn't worry about stuff like import and override statements, at least not to the point of automating it. If you miss any, it will be caught by the compiler. But if you miss structural problems, you'll have a lot more pain.\n", "Migrating a bigger project like this from as2 will be more than a simple search and replace. The new syntax is fairly similar and simple to adapt (as lilserf mentioned) but if nothing else the fact that as3 is more strict and the new event model will most likely cause a lot of problems. You'll probably be better off by more or less rewriting almost everything from scratch, possibly using the old code as a guide.\nMigrating from as2 -> as3 in terms of knowledge is fairly simple though. If you know object oriented as2, moving on to as3 won't be a problem at all.\nYou still don't have to use mxml for your UI unless you specifically want to. Mxml just provides a quick way to build the UI (etc) but if you want to do it yourself with actionscript there's nothing stopping you (this would also probably be easier if you already have that UI in as2 code). Flex (Builder) is just a quick way to do stuff you may not want to do yourself, such as building the UI and binding data but essentially it's just creating a part of the .swf for you -- there's no magic to it ;)\n" ]
[ 6, 1, 1, 1, 0 ]
[]
[]
[ "actionscript_3", "apache_flex", "flash", "porting" ]
stackoverflow_0000046136_actionscript_3_apache_flex_flash_porting.txt
Q: How do you build a multi-language web site? A friend of mine is now building a web application with J2EE and Struts, and it's going to be prepared to display pages in several languages. I was told that the best way to support a multi-language site is to use a properties file where you store all the strings of your pages, something like: welcome.english = "Welcome!" welcome.spanish = "¡Bienvenido!" ... This solution is ok, but what happens if your site displays news or something like that (a blog)? I mean, content that is not static, that is updated often... The people that keep the site have to write every new entry in each supported language, and store each version of the entry in the database. The application loads only the entries in the user's chosen language. How do you design the database to support this kind of implementation? Thanks. A: Warning: I'm not a java hacker, so YMMV but... The problem with using a list of "properties" is that you need a lot of discipline. Every time you add a string that should be output to the user you will need to open your properties file, look to see if that string (or something roughly equivalent to it) is already in the file, and then go and add the new property if it isn't. On top of this, you'd have to hope the properties file was fairly human readable / editable if you wanted to give it to an external translation team to deal with. The database based approach is useful for all your database based content. Ideally you want to make it easy to tie pieces of content together with their translations. It only really falls down for all the places you may want to output something that isn't out of a database (error messages etc.). One fairly old technology which we find still works really well is to use gettext. Gettext or some variant seems to be available for most languages and platforms. The basic premise is that you wrap your output in a special function call like so: echo _("Please do not press this button again"); Then running the gettext tools over your source code will extract all the instances wrapped like that into a "po" file. This will contain entries such as: #: myfolder/my.source:239 msgid "Please do not press this button again" msgstr "" And you can add your translation to the appropriate place: #: myfolder/my.source:239 msgid "Please do not press this button again" msgstr "s’il vous plaît ne pas appuyer sur le bouton ci-dessous à nouveau" Subsequent runs of the gettext tools simply update your po files. You don't even need to extract the po file from your source. If you know you may want to translate your site down the line, then you can just use the format shown above (the underscored function) with all your output. If you don't provide a po file it will just return whatever you put in the quotes. gettext is designed to work with locales so the user's locale is used to retrieve the appropriate po file. This makes it really easy to add new translations. Gettext Pros Doesn't get in your way while coding Very easy to add translations PO files can be compiled down for speed There are libraries available for most languages / platforms There are good cross platform tools for dealing with translations.
It is actually possible to get your translation team set up with a tool such as poEdit to make it very easy for them to manage translation projects Gettext Cons Solves your site "furniture" needs, but you would usually still want a database based approach for your database driven content For more info on gettext see this wikipedia page A: The way I have designed the database before is to have a News-table containing basic info like NewsID (int), NewsPubDate (datetime), NewsAuthor (varchar/int) and then have a linked table NewsText that has these columns: NewsID(int), NewsText(text), NewsLanguageID(int). And at last you have a Language-table that has LanguageID(int) and LanguageName(varchar). Then, when you want to show your users the news-page you do: SELECT NewsText FROM News INNER JOIN NewsText ON News.NewsID = NewsText.NewsID WHERE NewsText.NewsLanguageID = <<Session["UserLanguageID"]>> That Session-bit is a local variable where you store the user's language when they log in or enter the site for the first time. A: Java web applications support internationalization using the java standard tag library. You've really got 2 problems. Static content and dynamic content. For static content you can use jstl. It uses java ResourceBundles to accomplish this. I managed to get a database-backed bundle working with the help of this site. The second problem is dynamic content. To solve this problem you'll need to store the data so that you can retrieve different translations based on the user's Locale. (Locale includes Country and Language). It's not trivial, but it is something you can do with a little planning up front. A: @Auron that's what we apply it to. Our apps are all PHP, but gettext has a long heritage. Looks like there is a good Java implementation A: Tag libraries are fine if you're using JSP, but you can also achieve I18N using a template-based technology such as FreeMarker.
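Since the first answer recommends gettext and Python ships it in the standard library, here is a minimal sketch of the runtime side. The locales directory layout is the conventional one, and the compiled .mo catalogue would come from the translators via msgfmt or poEdit:

```python
import gettext

# Expects locales/fr/LC_MESSAGES/myapp.mo; fallback=True returns the
# original English strings when no catalogue is installed.
translation = gettext.translation(
    "myapp", localedir="locales", languages=["fr"], fallback=True
)
_ = translation.gettext

print(_("Please do not press this button again"))
```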
How do you build a multi-language web site?
A friend of mine is now building a web application with J2EE and Struts, and it's going to be prepared to display pages in several languages. I was told that the best way to support a multi-language site is to use a properties file where you store all the strings of your pages, something like: welcome.english = "Welcome!" welcome.spanish = "¡Bienvenido!" ... This solution is ok, but what happens if your site displays news or something like that (a blog)? I mean, content that is not static, that is updated often... The people that keep the site have to write every new entry in each supported language, and store each version of the entry in the database. The application loads only the entries in the user's chosen language. How do you design the database to support this kind of implementation? Thanks.
[ "Warning: I'm not a java hacker, so YMMV but...\nThe problem with using a list of \"properties\" is that you need a lot of discipline. Every time you add a string that should be output to the user you will need to open your properties file, look to see if that string (or something roughly equivalent to it) is already in the file, and then go and add the new property if it isn't. On top of this, you'd have to hope the properties file was fairly human readable / editable if you wanted to give it to an external translation team to deal with. \nThe database based approach is useful for all your database based content. Ideally you want to make it easy to tie pieces of content together with their translations. It only really falls down for all the places you may want to output something that isn't out of a database (error messages etc.). \nOne fairly old technology which we find still works really well, is to use gettext. Gettext or some variant seems to be available for most languages and platforms. The basic premise is that you wrap your output in a special function call like so: \necho _(\"Please do not press this button again\");\n\nThen running the gettext tools over your source code will extract all the instances wrapped like that into a \"po\" file. This will contain entries such as:\n#: myfolder/my.source:239\nmsgid \"Please do not press this button again\"\nmsgstr \"\"\n\nAnd you can add your translation to the appropriate place:\n#: myfolder/my.source:239\nmsgid \"Please do not press this button again\"\nmsgstr \"s’il vous plaît ne pas appuyer sur le bouton ci-dessous à nouveau\"\n\nSubsequent runs of the gettext tools simply update your po files. You don't even need to extract the po file from your source. If you know you may want to translate your site down the line, then you can just use the format shown above (the underscored function) with all your output. If you don't provide a po file it will just return whatever you put in the quotes. gettext is designed to work with locales so the users locale is used to retrieve the appropriate po file. This makes it really easy to add new translations.\nGettext Pros\n\nDoesn't get in your way while coding\nVery easy to add translations\nPO files can be compiled down for speed\nThere are libraries available for most languages / platforms\nThere are good cross platform tools for dealing with translations. It is actually possible to get your translation team set up with a tool such as poEdit to make it very easy for them to manage translation projects\n\nGettext Cons\n\nSolves your site \"furniture\" needs, but you would usually still want a database based approach for your database driven content\n\nFor more info on gettext see this wikipedia page\n", "They way I have designed the database before is to have an News-table containing basic info like NewsID (int), NewsPubDate (datetime), NewsAuthor (varchar/int) and then have a linked table NewsText that has these columns: NewsID(int), NewsText(text), NewsLanguageID(int). 
And at last you have a Language-table that has LanguageID(int) and LanguageName(varchar).\nThen, when you want to show your users the news-page you do:\nSELECT NewsText FROM News INNER JOIN NewsText ON News.NewsID = NewsText.NewsID\nWHERE NewsText.NewsLanguageID = <<Session[\"UserLanguageID\"]>>\n\nThat Session-bit is a local variable where you store the user's language when they log in or enter the site for the first time.\n", "Java web applications support internationalization using the java standard tag library.\nYou've really got 2 problems. Static content and dynamic content.\nFor static content you can use jstl. It uses java ResourceBundles to accomplish this. I managed to get a database-backed bundle working with the help of this site.\nThe second problem is dynamic content.\nTo solve this problem you'll need to store the data so that you can retrieve different translations based on the user's Locale. (Locale includes Country and Language).\nIt's not trivial, but it is something you can do with a little planning up front.\n", "@Auron\nthat's what we apply it to. Our apps are all PHP, but gettext has a long heritage. \nLooks like there is a good Java implementation\n", "Tag libraries are fine if you're using JSP, but you can also achieve I18N using a template-based technology such as FreeMarker.\n" ]
[ 13, 6, 2, 1, 1 ]
[]
[]
[ "database", "jakarta_ee", "multilingual", "web_applications" ]
stackoverflow_0000039562_database_jakarta_ee_multilingual_web_applications.txt
Q: Prevent Visual Studio from crashing (sometimes) How can I stop Visual Studio (both 2005 and 2008) from crashing (sometimes) when I select the "Close All But This" option? This does not happen all the time either. A: First, check Windows Update and make sure both VS environments are up to date. If that doesn't help, uninstall them both completely, reinstall only 2005, update and test it. If 2005 doesn't crash, install 2008, update and test them both. Don't install any add-ons you may have been using until you've reinstalled and tested both editions of VS. If one or the other does crash, you should try filing a bug against Visual Studio. If they didn't crash, install any add-ons that you use one at a time and continue to test both editions after each one. (This will take ages, but that's how it has to be) When they start crashing, remove the offending add-on, and file a bug with the add-on developer. (be sure to tell them what other add-ons you're using, in case it only happens when 2 conflicting add-ons are installed.) A: I would highly consider uninstalling and then installing Visual Studio again. Afterwards make sure you have installed available service packs for your VS version. A: Does it happen on all projects or a specific one? Does it only occur when a specific file is open? Try re-installing visual studio and any/all service packs. A: Try to reset the Visual Studio settings (Tools->Import and Export Settings->Reset All Settings). A: Maybe you can try to reproduce this using a specific solution and csproject file and report it to Microsoft? That's the best shot you can ever have.
Prevent Visual Studio from crashing (sometimes)
How can I stop Visual Studio (both 2005 and 2008) from crashing (sometimes) when I select the "Close All But This" option? This does not happen all the time either.
[ "First, check Windows Update and make sure both VS environments are up to date.\nIf that doesn't help, uninstall them both completely, reinstall only 2005, update and test it. If 2005 doesn't crash, install 2008, update and test them both. Don't install any add-ons you may have been using until you've reinstalled and tested both editions of VS. \nIf one or the other does crash, you should try filing a bug against Visual Studio.\nIf they didn't crash, install any add-ons that you use one at a time and continue to test both editions after each one. (This will take ages, but that's how it has to be) When they start crashing, remove the offending add-on, and file a bug with the add-on developer. (be sure to tell them what other add-ons you're using, in case it only happens when 2 conflicting add-ons are installed.)\n", "I would highly consider uninstalling and then installing Visual Studio again. Afterwards make sure you have installed available service packs for your VS version. \n", "\nDoes it happen on all projects or a specific one?\nDoes it only occurs when a specific file is open?\nTry re-installing visual studio and any/all service packs.\n\n", "Try to reset the Visual Studio settings (Tools->Import and Export Settings->Reset All Settings).\n", "Maybe you can try to reproduce this using a specific solution and csproject file and report it to Microsoft?\nThat's the best shot you can ever have.\n" ]
[ 1, 0, 0, 0, 0 ]
[ "Another alternative:\n\nStudy for 10 years to become a really good programmer\nApply for (and get) a job at Microsoft in the Visual Studio team\nFix the bug\n\n" ]
[ -1 ]
[ "visual_studio" ]
stackoverflow_0000068832_visual_studio.txt
Q: Best way to open a socket in Python I want to open a TCP client socket in Python. Do I have to go through all the low-level BSD create-socket-handle / connect-socket stuff or is there a simpler one-line way? A: Opening sockets in python is pretty simple. You really just need something like this: import socket sock = socket.socket() sock.connect((address, port)) and then you can send() and recv() like any other socket A: OK, this code worked s = socket.socket() s.connect((ip,port)) s.send("my request\r") print s.recv(256) s.close() It was quite difficult to work that out from the Python socket module documentation. So I'll accept The.Anti.9's answer. A: For developing portable network programs of any sort in Python, Twisted is quite useful. One of its benefits is providing a convenient layer above low-level socket APIs.
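The snippets above are Python 2 (print statement, str payloads). For reference, the equivalent client in modern Python 3, where the context manager closes the socket for you and the payload must be bytes -- the host, port and request line are placeholders:

```python
import socket

HOST, PORT = "example.com", 80  # placeholders

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(b"my request\r")  # bytes, not str, in Python 3
    reply = sock.recv(256)

print(reply)
```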
Best way to open a socket in Python
I want to open a TCP client socket in Python. Do I have to go through all the low-level BSD create-socket-handle / connect-socket stuff or is there a simpler one-line way?
[ "Opening sockets in python is pretty simple. You really just need something like this:\nimport socket\nsock = socket.socket()\nsock.connect((address, port))\n\nand then you can send() and recv() like any other socket\n", "OK, this code worked\ns = socket.socket()\ns.connect((ip,port))\ns.send(\"my request\\r\")\nprint s.recv(256)\ns.close()\n\nIt was quite difficult to work that out from the Python socket module documentation. So I'll accept The.Anti.9's answer.\n", "For developing portable network programs of any sort in Python, Twisted is quite useful. One of its benefits is providing a convenient layer above low-level socket APIs.\n" ]
[ 81, 21, 10 ]
[]
[]
[ "networking", "python", "tcp" ]
stackoverflow_0000068774_networking_python_tcp.txt
Q: Automatic Internationalization Testing For Web Does there exist a website service or set of scripts that will tell you whether your web page is badly configured if your goal is to be internationally friendly? To be more precise, I'm wondering if something like this exists: Checking URL: http://www.example.com GET / HTTP/1.0 Accept-Charset: utf8 ... HTTP/1.0 200 OK Charset: iso-8859-1 ..<?xml version="1.0" charset="utf8" ?> WARNING: Header document conflict, your server claims to return iso-8859-1, but includes octet values outside the legal range. This can happen when your documents are saved with a different character set than your web server is configured to serve. From my understanding it's unlikely that this will help me make a website that will allow people to post in Japanese or Hebrew, but it might be able to help my English websites reach a larger international audience. A: I believe the W3C validator does it, but maybe not to the extent you are looking for...
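Beyond the W3C validator, the header-versus-body comparison sketched in the question is easy to script yourself. A rough Python sketch using only the standard library -- deliberately crude, it only tests whether the body actually decodes under the charset the server declares:

```python
from urllib.request import urlopen

def check(url: str) -> None:
    with urlopen(url) as resp:
        # Fall back to HTTP's historical default when no charset is declared.
        declared = resp.headers.get_content_charset() or "iso-8859-1"
        body = resp.read()
    try:
        body.decode(declared)
    except (UnicodeDecodeError, LookupError):
        print(f"WARNING: {url} declares {declared!r}, but the body "
              "contains octets that are not valid in that encoding")
    else:
        print(f"{url}: body decodes cleanly as {declared!r}")

check("http://www.example.com")
```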
Automatic Internationalization Testing For Web
Does there exist a website service or set of scripts that will tell you whether your web page is badly configured if your goal is to be internationally friendly? To be more precise, I'm wondering if something like this exists: Checking URL: http://www.example.com GET / HTTP/1.0 Accept-Charset: utf8 ... HTTP/1.0 200 OK Charset: iso-8859-1 ..<?xml version="1.0" charset="utf8" ?> WARNING: Header document conflict, your server claims to return iso-8859-1, but includes octet values outside the legal range. This can happen when your documents are saved with a different character set than your web server is configured to serve. From my understanding it's unlikely that this will help me make a website that will allow people to post in Japanese or Hebrew, but it might be able to help my English websites reach a larger international audience.
[ "I believe the W3C validator does it, but maybe not to the extent you are looking for...\n" ]
[ 0 ]
[]
[]
[ "internationalization", "testing" ]
stackoverflow_0000068898_internationalization_testing.txt