Dataset columns:
- content: string, 86 to 88.9k characters
- title: string, 0 to 150 characters
- question: string, 1 to 35.8k characters
- answers: list
- answers_scores: list
- non_answers: list
- non_answers_scores: list
- tags: list
- name: string, 30 to 130 characters
Q: Running multiple sites from a single Python web framework I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location and I've also seen some brutish if and elif statements for every site as shown in the following code, which I would like to avoid. if site == 'site1': ... elif site == 'site2: ... What are some good and clever ways of running multiple sites from a single, common Python web framework (i.e., Pylons, TurboGears, etc)? A: Django has this built in. See the sites framework. As a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the Host HTTP header in the query when you are retrieving data. A: Using Django on apache with mod_python, I host multiple (unrelated) django sites simply with the following apache config: <VirtualHost 1.2.3.4> DocumentRoot /www/site1 ServerName site1.com <Location /> SetHandler python-program SetEnv DJANGO_SETTINGS_MODULE site1.settings PythonPath "['/www'] + sys.path" PythonDebug On PythonInterpreter site1 </Location> </VirtualHost> <VirtualHost 1.2.3.4> DocumentRoot /www/site2 ServerName site2.com <Location /> SetHandler python-program SetEnv DJANGO_SETTINGS_MODULE site2.settings PythonPath "['/www'] + sys.path" PythonDebug On PythonInterpreter site2 </Location> </VirtualHost> No need for multiple apache instances or proxy servers. Using a different PythonInterpreter directive for each site (the name you enter is arbitrary) keeps the namespaces separate. A: I use CherryPy as my web server (which comes bundled with Turbogears), and I simply run multiple instances of the CherryPy web server on different ports bound to localhost. Then I configure Apache with mod_proxy and mod_rewrite to transparently forward requests to the proper port based on the HTTP request. A: Using multiple server instances on local ports is a good idea, but you don't need a full featured web server to redirect HTTP requests. I would use pound as a reverse proxy to do the job. It is small, fast, simple and does exactly what we need here. WHAT POUND IS: a reverse-proxy: it passes requests from client browsers to one or more back-end servers. a load balancer: it will distribute the requests from the client browsers among several back-end servers, while keeping session information. an SSL wrapper: Pound will decrypt HTTPS requests from client browsers and pass them as plain HTTP to the back-end servers. an HTTP/HTTPS sanitizer: Pound will verify requests for correctness and accept only well-formed ones. a fail over-server: should a back-end server fail, Pound will take note of the fact and stop passing requests to it until it recovers. a request redirector: requests may be distributed among servers according to the requested URL.
Running multiple sites from a single Python web framework
I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location, and I've also seen some brutish if and elif statements for every site, as shown in the following code, which I would like to avoid. if site == 'site1': ... elif site == 'site2': ... What are some good and clever ways of running multiple sites from a single, common Python web framework (e.g., Pylons, TurboGears, etc.)?
[ "Django has this built in. See the sites framework.\nAs a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the Host HTTP header in the query when you are retrieving data.\n", "Using Django on apache with mod_python, I host multiple (unrelated) django sites simply with the following apache config:\n<VirtualHost 1.2.3.4>\n DocumentRoot /www/site1\n ServerName site1.com\n <Location />\n SetHandler python-program\n SetEnv DJANGO_SETTINGS_MODULE site1.settings\n PythonPath \"['/www'] + sys.path\"\n PythonDebug On\n PythonInterpreter site1\n </Location>\n</VirtualHost>\n\n<VirtualHost 1.2.3.4>\n DocumentRoot /www/site2\n ServerName site2.com\n <Location />\n SetHandler python-program\n SetEnv DJANGO_SETTINGS_MODULE site2.settings\n PythonPath \"['/www'] + sys.path\"\n PythonDebug On\n PythonInterpreter site2\n </Location>\n</VirtualHost>\n\nNo need for multiple apache instances or proxy servers. Using a different PythonInterpreter directive for each site (the name you enter is arbitrary) keeps the namespaces separate.\n", "I use CherryPy as my web server (which comes bundled with Turbogears), and I simply run multiple instances of the CherryPy web server on different ports bound to localhost. Then I configure Apache with mod_proxy and mod_rewrite to transparently forward requests to the proper port based on the HTTP request.\n", "Using multiple server instances on local ports is a good idea, but you don't need a full featured web server to redirect HTTP requests. \nI would use pound as a reverse proxy to do the job. It is small, fast, simple and does exactly what we need here.\n\nWHAT POUND IS:\n\na reverse-proxy: it passes requests from client browsers to one or more back-end servers.\na load balancer: it will distribute the requests from the client browsers among several back-end servers, while keeping session information.\nan SSL wrapper: Pound will decrypt HTTPS requests from client browsers and pass them as plain HTTP to the back-end servers.\nan HTTP/HTTPS sanitizer: Pound will verify requests for correctness and accept only well-formed ones.\na fail over-server: should a back-end server fail, Pound will take note of the fact and stop passing requests to it until it recovers.\na request redirector: requests may be distributed among servers according to the requested URL.\n\n\n" ]
[ 11, 7, 3, 3 ]
[]
[]
[ "frameworks", "python" ]
stackoverflow_0000085119_frameworks_python.txt
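To make the host-based dispatch idea from the question and answers above concrete, here is a minimal, framework-agnostic sketch that routes requests to per-site WSGI applications by their Host header. The hostnames, port, and make_site_app() helper are illustrative assumptions rather than anything taken from the answers.

```python
# Minimal sketch: dispatch to per-site WSGI apps by Host header.
# Hostnames, the port, and make_site_app() are illustrative assumptions.
from wsgiref.simple_server import make_server


def make_site_app(label):
    """Stand-in for a real per-site application (Pylons, TurboGears, ...)."""
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('Hello from %s\n' % label).encode('utf-8')]
    return app


class HostDispatcher:
    """Route each request to the WSGI app registered for its Host header."""

    def __init__(self, site_map, default_app):
        self.site_map = site_map
        self.default_app = default_app

    def __call__(self, environ, start_response):
        host = environ.get('HTTP_HOST', '').split(':')[0].lower()
        app = self.site_map.get(host, self.default_app)
        return app(environ, start_response)


dispatcher = HostDispatcher(
    {'site1.example.com': make_site_app('site1'),
     'site2.example.com': make_site_app('site2')},
    make_site_app('default'),
)

if __name__ == '__main__':
    make_server('127.0.0.1', 8000, dispatcher).serve_forever()
```

In practice each entry in the map would be a full framework application (a Pylons or TurboGears WSGI app, for example), which keeps per-site behaviour out of if/elif chains.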
Q: Multiple return values to indicate success/failure. I'm kind of interested in getting some feedback about this technique I picked up from somewhere. I use this when a function can either succeed or fail, but you'd like to get more information about why it failed. A standard way to do this same thing would be with exception handling, but I often find it a bit over the top for this sort of thing, plus PHP4 does not offer this. Basically the technique involves returning true for success, and something which equates to false for failure. Here's an example to show what I mean: define ('DUPLICATE_USERNAME', false); define ('DATABASE_ERROR', 0); define ('INSUFFICIENT_DETAILS', 0.0); define ('OK', true); function createUser($username) { // create the user and return the appropriate constant from the above } The beauty of this is that in your calling code, if you don't care WHY the user creation failed, you can write simple and readable code: if (createUser('fred')) { // yay, it worked! } else { // aww, it didn't work. } If you particularly want to check why it didn't work (for logging, display to the user, or do whatever), use identity comparison with === $status = createUser('fred'); if ($status) { // yay, it worked! } else if ($status === DUPLICATE_USERNAME) { // tell the user about it and get them to try again. } else { // aww, it didn't work. log it and show a generic error message? whatever. } The way I see it, the benefits of this are that it is a normal expectation that a successful execution of a function like that would return true, and failure return false. The downside is that you can only have 7 "error" return values: false, 0, 0.0, "0", null, "", and (object) null. If you forget to use identity checking you could get your program flow all wrong. Someone else has told me that using constants like an enum where they all equate to false is "ick". So, to restate the question: how acceptable is a practise like this? Would you recommend a different way to achieve the same thing? A: I agree with the others who have stated that this is a little on the WTFy side. If it's clearly documented functionality, then it's less of an issue, but I think it'd be safer to take an alternate route of returning 0 for success and integers for error codes. If you don't like that idea or the idea of a global last error variable, consider redefining your function as: function createUser($username, &$error) Then you can use: if (createUser('fred', $error)) { echo 'success'; } else { echo $error; } Inside createUser, just populate $error with any error you encounter and it'll be accessible outside of the function scope due to the reference. A: As long as it's documented and contracted, and not too WTFy, then there shouldn't be a problem. Then again, I would recommend using exceptions for something like this. It makes more sense. If you can use PHP5, then that would be the way to go. Otherwise you don't have much choice. A: A more common approach I have seen when exceptions aren't available is to store the error type in a 'last_error' variable somewhere and then when a failure happens (ie it returns false) look up the error. Another approach is to use the venerable unix tool approach numbered error codes - return 0 for success and any integer (that maps to some error) for the various error conditions. Most of these suffer in comparison to exceptions when I've seen them used however. 
Just to respond to Andrew's comment - I agree that the last_error should not be a global and perhaps the 'somewhere' in my answer was a little vague - other people have suggested better places already so I won't bother to repeat them A: Often you will return 0 to indicate success, and 1, 2, 3, etc. to indicate different failures. Your way of doing it is kind of hackish, because you can only have so many errors, and this kind of coding will bite you sooner or later. I like defining a struct/object that includes a Boolean to indicate success, and an error message or other value indicate what kind of error occurred. You can also include other fields to indicate what kind of action was executed. This makes logging very easy, since you can then just pass the status-struct into the logger, and it will then insert the appropriate log entry. A: how acceptable is a practice like this? I'd say it's unacceptable. Requires the === operator, which is very dangerous. If the user used ==, it leads to a very hard to find bug. Using "0" and "" to denote false may change in future PHP versions. Plus in a lot of other languages "0" and "" does not evaluate to false which leads to great confusion Using getLastError() type of global function is probably the best practice in PHP because it ties in well with the language, since PHP is still mostly a procedural langauge. I think another problem with the approach you just gave is that very few other systems work like that. The programmer has to learn this way of error checking which is the source of errors. It's best to make things work like how most people expect. if ( makeClient() ) { // happy scenario goes here } else { // error handling all goes inside this block switch ( getMakeClientError() ) { case: // .. } } A: When exceptions aren't available, I'd use the PEAR model and provide isError() functionality in all your classes. A: Reinventing the wheel here. Using squares. OK, you don't have exceptions in PHP 4. Welcome in the year 1982, take a look at C. You can have error codes. Consider negative values, they seem more intuitive, so you would just have to check if (createUser() > 0). You can have an error log if you want, with error messages (or just arbitrary error codes) pushed onto an array, dealt with elegance afterwards. But PHP is a loosely typed language for a reason, and throwing error codes that have different types but evaluate to the same "false" is something that shouldn't be done. What happens when you run out of built-in types? What happens when you get a new coder and have to explain how this thing works? Say, in 6 months, you won't remember. Is PHP === operator fast enough to get through it? Is it faster than error codes? or any other method? Just drop it. A: Ick. In Unix pre-exception this is done with errno. You return 0 for success or -1 for failure, then you have a value you can retrieve with an integer error code to get the actual error. This works in all cases, because you don't have a (realistic) limit to the number of error codes. INT_MAX is certainly more than 7, and you don't have to worry about the type (errno). I vote against the solution proposed in the question. A: If you really want to do this kind of thing, you should have different values for each error, and check for success. 
Something like define ('OK', 0); define ('DUPLICATE_USERNAME', 1); define ('DATABASE_ERROR', 2); define ('INSUFFICIENT_DETAILS', 3); And check: if (createUser('fred') == OK) { //OK } else { //Fail } A: It does make sense that a successful execution returns true. Handling generic errors will be much easier: if (!createUser($username)) { // the dingo ate my user. // deal with it. } But it doesn't make sense at all to associate meaning with different types of false. False should mean one thing and one thing only, regardless of the type or how the programming language treats it. If you're going to define error status constants anyway, better stick with switch/case define(DUPLICATE_USERNAME, 4) define(USERNAME_NOT_ALPHANUM, 8) switch ($status) { case DUPLICATE_USERNAME: // sorry hun, there's someone else break; case USERNAME_NOT_ALPHANUM: break; default: // yay, it worked } Also with this technique, you'll be able to bitwise AND and OR status messages, so you can return status messages that carry more than one meaning like DUPLICATE_USERNAME & USERNAME_NOT_ALPHANUM and treat it appropriately. This isn't always a good idea, it depends on how you use it. A: I like the way COM can handle both exception and non-exception capable callers. The example below show how a HRESULT is tested and an exception is thrown in case of failure. (usually autogenerated in tli files) inline _bstr_t IMyClass::GetName ( ) { BSTR _result; HRESULT _hr = get_name(&_result); if (FAILED(_hr)) _com_issue_errorex(_hr, this, __uuidof(this)); return _bstr_t(_result, false); } Using return values will affect readability by having error handling scattered and worst case, the return values are never checked by the code. That's why I prefer exception when a contract is breached. A: Other ways include exceptions: throw new Validation_Exception_SQLDuplicate("There's someone else, hun");), returning structures, return new Result($status, $stuff); if ($result->status == 0) { $stuff = $result->data; } else { die('Oh hell'); } I would hate to be the person who came after you for using the code pattern you suggested originally. And I mean "Came after you" as in "followed you in employment and had to maintain the code" rather than "came after you" "with a wedgiematic", though both are options.
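Several answers above recommend plain exceptions once PHP 5 is available. As a rough sketch of that alternative, with invented exception classes and a hard-coded duplicate-username check standing in for real database work:

```php
<?php
// Sketch of the exception-based alternative (PHP 5+). Exception classes and
// the duplicate-username check are illustrative.
class DuplicateUsernameException extends Exception {}
class DatabaseErrorException extends Exception {}

function createUser($username) {
    if ($username === 'fred') {
        throw new DuplicateUsernameException("Username '$username' already exists.");
    }
    // ... insert the user, throwing DatabaseErrorException on failure ...
    return true;
}

try {
    createUser('fred');
    echo "yay, it worked!\n";
} catch (DuplicateUsernameException $e) {
    echo 'Try another name: ' . $e->getMessage() . "\n";
} catch (Exception $e) {
    error_log($e->getMessage());
    echo "Something went wrong.\n";
}
?>
```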
Multiple return values to indicate success/failure.
I'm kind of interested in getting some feedback about this technique I picked up from somewhere. I use this when a function can either succeed or fail, but you'd like to get more information about why it failed. A standard way to do this same thing would be with exception handling, but I often find it a bit over the top for this sort of thing, plus PHP4 does not offer this. Basically the technique involves returning true for success, and something which equates to false for failure. Here's an example to show what I mean: define ('DUPLICATE_USERNAME', false); define ('DATABASE_ERROR', 0); define ('INSUFFICIENT_DETAILS', 0.0); define ('OK', true); function createUser($username) { // create the user and return the appropriate constant from the above } The beauty of this is that in your calling code, if you don't care WHY the user creation failed, you can write simple and readable code: if (createUser('fred')) { // yay, it worked! } else { // aww, it didn't work. } If you particularly want to check why it didn't work (for logging, display to the user, or do whatever), use identity comparison with === $status = createUser('fred'); if ($status) { // yay, it worked! } else if ($status === DUPLICATE_USERNAME) { // tell the user about it and get them to try again. } else { // aww, it didn't work. log it and show a generic error message? whatever. } The way I see it, the benefits of this are that it is a normal expectation that a successful execution of a function like that would return true, and failure return false. The downside is that you can only have 7 "error" return values: false, 0, 0.0, "0", null, "", and (object) null. If you forget to use identity checking you could get your program flow all wrong. Someone else has told me that using constants like an enum where they all equate to false is "ick". So, to restate the question: how acceptable is a practise like this? Would you recommend a different way to achieve the same thing?
[ "I agree with the others who have stated that this is a little on the WTFy side. If it's clearly documented functionality, then it's less of an issue, but I think it'd be safer to take an alternate route of returning 0 for success and integers for error codes. If you don't like that idea or the idea of a global last error variable, consider redefining your function as:\nfunction createUser($username, &$error)\n\nThen you can use:\nif (createUser('fred', $error)) {\n echo 'success';\n}\nelse {\n echo $error;\n}\n\nInside createUser, just populate $error with any error you encounter and it'll be accessible outside of the function scope due to the reference.\n", "As long as it's documented and contracted, and not too WTFy, then there shouldn't be a problem.\nThen again, I would recommend using exceptions for something like this. It makes more sense. If you can use PHP5, then that would be the way to go. Otherwise you don't have much choice.\n", "A more common approach I have seen when exceptions aren't available is to store the error type in a 'last_error' variable somewhere and then when a failure happens (ie it returns false) look up the error. \nAnother approach is to use the venerable unix tool approach numbered error codes - return 0 for success and any integer (that maps to some error) for the various error conditions. \nMost of these suffer in comparison to exceptions when I've seen them used however.\nJust to respond to Andrew's comment - \nI agree that the last_error should not be a global and perhaps the 'somewhere' in my answer was a little vague - other people have suggested better places already so I won't bother to repeat them\n", "Often you will return 0 to indicate success, and 1, 2, 3, etc. to indicate different failures. Your way of doing it is kind of hackish, because you can only have so many errors, and this kind of coding will bite you sooner or later.\nI like defining a struct/object that includes a Boolean to indicate success, and an error message or other value indicate what kind of error occurred. You can also include other fields to indicate what kind of action was executed. \nThis makes logging very easy, since you can then just pass the status-struct into the logger, and it will then insert the appropriate log entry.\n", "\nhow acceptable is a practice like this?\n\nI'd say it's unacceptable.\n\nRequires the === operator, which is very dangerous. If the user used ==, it leads to a very hard to find bug.\nUsing \"0\" and \"\" to denote false may change in future PHP versions. Plus in a lot of other languages \"0\" and \"\" does not evaluate to false which leads to great confusion\n\nUsing getLastError() type of global function is probably the best practice in PHP because it ties in well with the language, since PHP is still mostly a procedural langauge. I think another problem with the approach you just gave is that very few other systems work like that. The programmer has to learn this way of error checking which is the source of errors. It's best to make things work like how most people expect.\nif ( makeClient() )\n{ // happy scenario goes here }\n\nelse\n{\n // error handling all goes inside this block\n switch ( getMakeClientError() )\n { case: // .. }\n}\n\n", "When exceptions aren't available, I'd use the PEAR model and provide isError() functionality in all your classes.\n", "Reinventing the wheel here. Using squares.\nOK, you don't have exceptions in PHP 4. Welcome in the year 1982, take a look at C.\nYou can have error codes. 
Consider negative values, they seem more intuitive, so you would just have to check if (createUser() > 0).\nYou can have an error log if you want, with error messages (or just arbitrary error codes) pushed onto an array, dealt with elegance afterwards.\nBut PHP is a loosely typed language for a reason, and throwing error codes that have different types but evaluate to the same \"false\" is something that shouldn't be done. \nWhat happens when you run out of built-in types?\nWhat happens when you get a new coder and have to explain how this thing works? Say, in 6 months, you won't remember.\nIs PHP === operator fast enough to get through it? Is it faster than error codes? or any other method?\nJust drop it.\n", "Ick.\nIn Unix pre-exception this is done with errno. You return 0 for success or -1 for failure, then you have a value you can retrieve with an integer error code to get the actual error. This works in all cases, because you don't have a (realistic) limit to the number of error codes. INT_MAX is certainly more than 7, and you don't have to worry about the type (errno).\nI vote against the solution proposed in the question.\n", "If you really want to do this kind of thing, you should have different values for each error, and check for success. Something like\ndefine ('OK', 0);\ndefine ('DUPLICATE_USERNAME', 1);\ndefine ('DATABASE_ERROR', 2);\ndefine ('INSUFFICIENT_DETAILS', 3);\n\nAnd check:\nif (createUser('fred') == OK) {\n //OK\n\n}\nelse {\n //Fail\n}\n\n", "It does make sense that a successful execution returns true. Handling generic errors will be much easier:\nif (!createUser($username)) {\n// the dingo ate my user.\n// deal with it.\n}\n\nBut it doesn't make sense at all to associate meaning with different types of false. False should mean one thing and one thing only, regardless of the type or how the programming language treats it. If you're going to define error status constants anyway, better stick with switch/case\ndefine(DUPLICATE_USERNAME, 4)\ndefine(USERNAME_NOT_ALPHANUM, 8)\n\nswitch ($status) {\ncase DUPLICATE_USERNAME:\n // sorry hun, there's someone else\n break;\ncase USERNAME_NOT_ALPHANUM:\n break;\ndefault:\n // yay, it worked\n}\n\nAlso with this technique, you'll be able to bitwise AND and OR status messages, so you can return status messages that carry more than one meaning like DUPLICATE_USERNAME & USERNAME_NOT_ALPHANUM and treat it appropriately. This isn't always a good idea, it depends on how you use it.\n", "I like the way COM can handle both exception and non-exception capable callers. The example below show how a HRESULT is tested and an exception is thrown in case of failure. (usually autogenerated in tli files)\ninline _bstr_t IMyClass::GetName ( ) {\n BSTR _result;\n HRESULT _hr = get_name(&_result);\n if (FAILED(_hr)) _com_issue_errorex(_hr, this, __uuidof(this));\n return _bstr_t(_result, false);\n}\n\nUsing return values will affect readability by having error handling scattered and worst case, the return values are never checked by the code. 
That's why I prefer exception when a contract is breached.\n", "Other ways include exceptions:\nthrow new Validation_Exception_SQLDuplicate(\"There's someone else, hun\");),\n\nreturning structures,\nreturn new Result($status, $stuff);\nif ($result->status == 0) {\n $stuff = $result->data;\n}\nelse {\n die('Oh hell');\n}\n\nI would hate to be the person who came after you for using the code pattern you suggested originally.\nAnd I mean \"Came after you\" as in \"followed you in employment and had to maintain the code\" rather than \"came after you\" \"with a wedgiematic\", though both are options.\n" ]
[ 13, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0 ]
[ "In my opinion, you should use this technique only if failure is a \"normal part of operation\" of your method / function. For example, it's as probable that a call suceeds as that it fails. If failure is a exceptional event, then you should use exception handling so your program can terminate as early and gracefully as possible.\nAs for your use of different \"false\" values, I'd better return an instance of a custom \"Result\"-class with an proper error code. Something like:\nclass Result\n{\n var $_result;\n var $_errormsg;\n\n function Result($res, $error)\n {\n $this->_result = $res;\n $ths->_errorMsg = $error\n }\n\n function getResult()\n {\n return $this->_result;\n }\n\n function isError()\n {\n return ! ((boolean) $this->_result);\n }\n\n function getErrorMessage()\n {\n return $this->_errorMsg;\n }\n\n", "Look at COM HRESULT for a correct way to do it.\nBut exceptions are generally better.\nUpdate: the correct way is: define as many error values as you want, not only \"false\" ones. Use function succeeded() to check if function succeeded.\nif (succeeded(result = MyFunction()))\n ...\nelse\n ...\n\n" ]
[ -1, -2 ]
[ "php" ]
stackoverflow_0000072564_php.txt
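For the PHP 4 case, where exceptions are not available, the "return a structure" suggestion from the answers might be sketched like this; the Result class, error constants, and the duplicate check are illustrative assumptions.

```php
<?php
// Sketch of returning a result object instead of type-punned false values
// (PHP 4 style, no exceptions). Class name, constants, and the duplicate
// check are illustrative.
define('ERR_NONE', 0);
define('ERR_DUPLICATE_USERNAME', 1);
define('ERR_DATABASE', 2);

class Result {
    var $code;
    var $message;

    function Result($code, $message = '') {
        $this->code = $code;
        $this->message = $message;
    }

    function isOk() {
        return $this->code == ERR_NONE;
    }
}

function createUser($username) {
    if ($username == 'fred') {
        return new Result(ERR_DUPLICATE_USERNAME, "Username '$username' is taken.");
    }
    // ... insert the user into the database here ...
    return new Result(ERR_NONE);
}

$result = createUser('fred');
if ($result->isOk()) {
    echo "success\n";
} else {
    echo "failed: {$result->message} (code {$result->code})\n";
}
?>
```

The happy path still reads as a simple boolean-style test, but the error detail travels with the object instead of being encoded in the type of a false-ish value.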
Q: IndexOutOfRangeException in the Ajax.Net extensions framework For some reason when I attempt to make a request to an Ajax.net web service with the ScriptService attribute set, an exception occurs deep inside the protocol class which I have no control over. Anyone seen this before? Here is the exact msg: System.IndexOutOfRangeException: Index was outside the bounds of the array. at System.Web.Services.Protocols.HttpServerType..ctor(Type type) at System.Web.Services.Protocols.HttpServerProtocol.Initialize() at System.Web.Services.Protocols.ServerProtocol.SetContext(Type type, HttpContext ontext, HttpRequest request, HttpResponse response) at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing) thx Trev A: This is usually an exception while reading parameters into the web service method...are you sure you're passing the number/type of parameters the method is expecting? A: Also make sure your web.config is setup properly for asp.net ajax: http://www.asp.net/AJAX/Documentation/Live/ConfiguringASPNETAJAX.aspx
IndexOutOfRangeException in the Ajax.Net extensions framework
For some reason when I attempt to make a request to an Ajax.net web service with the ScriptService attribute set, an exception occurs deep inside the protocol class which I have no control over. Anyone seen this before? Here is the exact msg: System.IndexOutOfRangeException: Index was outside the bounds of the array. at System.Web.Services.Protocols.HttpServerType..ctor(Type type) at System.Web.Services.Protocols.HttpServerProtocol.Initialize() at System.Web.Services.Protocols.ServerProtocol.SetContext(Type type, HttpContext ontext, HttpRequest request, HttpResponse response) at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing) thx Trev
[ "This is usually an exception while reading parameters into the web service method...are you sure you're passing the number/type of parameters the method is expecting?\n", "Also make sure your web.config is setup properly for asp.net ajax:\nhttp://www.asp.net/AJAX/Documentation/Live/ConfiguringASPNETAJAX.aspx\n" ]
[ 3, 0 ]
[]
[]
[ "ajax.net", "c#" ]
stackoverflow_0000086549_ajax.net_c#.txt
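The accepted diagnosis above is a mismatch between the parameters the client sends and the web method's signature. For reference, a minimal script-enabled service looks roughly like the sketch below; the class, namespace, and method names are illustrative, and the point is that the generated JavaScript proxy must pass exactly the declared parameters.

```csharp
// Minimal sketch of an ASP.NET AJAX script service (names are illustrative).
using System.Web.Services;
using System.Web.Script.Services;

[WebService(Namespace = "http://example.com/")]
[ScriptService]
public class CustomerService : WebService
{
    // The generated JavaScript proxy must pass exactly one string argument;
    // a mismatch in parameter count or types surfaces as an exception inside
    // the protocol classes rather than in your own code.
    [WebMethod]
    public string GetCustomerName(string customerId)
    {
        return "Customer " + customerId;
    }
}
```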
Q: Worse sin: side effects or passing massive objects? I have a function inside a loop inside a function. The inner function acquires and stores a large vector of data in memory (as a global variable... I'm using "R" which is like "S-Plus"). The loop loops through a long list of data to be acquired. The outer function starts the process and passes in the list of datasets to be acquired. for (dataset in list_of_datasets) { for (datachunk in dataset) { <process datachunk> <store result? as vector? where?> } } I programmed the inner function to store each dataset before moving to the next, so all the work of the outer function occurs as side effects on global variables... a big no-no. Is this better or worse than collecting and returning a giant, memory-hogging vector of vectors? Is there a superior third approach? Would the answer change if I were storing the data vectors in a database rather than in memory? Ideally, I'd like to be able to terminate the function (or have it fail due to network timeouts) without losing all the information processed prior to termination. A: use variables in the outer function instead of global variables. This gets you the best of both approaches: you're not mutating global state, and you're not copying a big wad of data. If you have to exit early, just return the partial results. (See the "Scope" section in the R manual: http://cran.r-project.org/doc/manuals/R-intro.html#Scope) A: Remember your Knuth. "Premature optimization is the root of all programming evil." Try the side effect free version. See if it meets your performance goals. If it does, great, you don't have a problem in the first place; if it doesn't, then use the side effects, and make a note for the next programmer that your hand was forced. A: It's not going to make much difference to memory use, so you might as well make the code clean. Since R has copy-on-modify for variables, modifying the global object will have the same memory implications as passing something up in return values. If you store the outputs in a database (or even in a file) you won't have the memory use issues, and the data will be incrementally available as it is created, rather than just at the end. Whether it's faster with the database depends primarily on how much memory you are using: is the reduction is garbage collection going to pay for the cost of writing to disk. There are both time and memory profilers in R, so you can see empirically what the impacts are. A: I'm not sure I understand the question, but I have a couple of solutions. Inside the function, create a list of the vectors and return that. Inside the function, create an environment and store all the vectors inside of that. Just make sure that you return the environment in case of errors. in R: help(environment) # You might do something like this: outer <- function(datasets) { # create the return environment ret.env <- new.env() for(set in dataset) { tmp <- inner(set) # check for errors however you like here. 
You might have inner return a list, and # have the list contain an error component assign(set, tmp, envir=ret.env) } return(ret.env) } #The inner function might be defined like this inner <- function(dataset) { # I don't know what you are doing here, but lets pretend you are reading a data file # that is named by dataset filedata <- read.table(dataset, header=T) return(filedata) } leif A: FYI, here's a full sample toy solution that avoids side effects: outerfunc <- function(names) { templist <- list() for (aname in names) { templist[[aname]] <- innerfunc(aname) } templist } innerfunc <- function(aname) { retval <- NULL if ("one" %in% aname) retval <- c(1) if ("two" %in% aname) retval <- c(1,2) if ("three" %in% aname) retval <- c(1,2,3) retval } names <- c("one","two","three") name_vals <- outerfunc(names) for (name in names) assign(name, name_vals[[name]]) A: Third approach: inner function returns a reference to the large array, which the next statement inside the loop then dereferences and stores wherever it's needed (ideally with a single pointer store and not by having to memcopy the entire array). This gets rid of both the side effect and the passing of large datastructures.
Worse sin: side effects or passing massive objects?
I have a function inside a loop inside a function. The inner function acquires and stores a large vector of data in memory (as a global variable... I'm using "R" which is like "S-Plus"). The loop loops through a long list of data to be acquired. The outer function starts the process and passes in the list of datasets to be acquired. for (dataset in list_of_datasets) { for (datachunk in dataset) { <process datachunk> <store result? as vector? where?> } } I programmed the inner function to store each dataset before moving to the next, so all the work of the outer function occurs as side effects on global variables... a big no-no. Is this better or worse than collecting and returning a giant, memory-hogging vector of vectors? Is there a superior third approach? Would the answer change if I were storing the data vectors in a database rather than in memory? Ideally, I'd like to be able to terminate the function (or have it fail due to network timeouts) without losing all the information processed prior to termination.
[ "use variables in the outer function instead of global variables. This gets you the best of both approaches: you're not mutating global state, and you're not copying a big wad of data. If you have to exit early, just return the partial results.\n(See the \"Scope\" section in the R manual: http://cran.r-project.org/doc/manuals/R-intro.html#Scope)\n", "Remember your Knuth. \"Premature optimization is the root of all programming evil.\"\nTry the side effect free version. See if it meets your performance goals. If it does, great, you don't have a problem in the first place; if it doesn't, then use the side effects, and make a note for the next programmer that your hand was forced.\n", "It's not going to make much difference to memory use, so you might as well make the code clean.\nSince R has copy-on-modify for variables, modifying the global object will have the same memory implications as passing something up in return values.\nIf you store the outputs in a database (or even in a file) you won't have the memory use issues, and the data will be incrementally available as it is created, rather than just at the end. Whether it's faster with the database depends primarily on how much memory you are using: is the reduction is garbage collection going to pay for the cost of writing to disk.\nThere are both time and memory profilers in R, so you can see empirically what the impacts are.\n", "I'm not sure I understand the question, but I have a couple of solutions.\n\nInside the function, create a list of the vectors and return that.\nInside the function, create an environment and store all the vectors inside of that. Just make sure that you return the environment in case of errors.\n\nin R:\nhelp(environment)\n\n# You might do something like this:\n\nouter <- function(datasets) {\n # create the return environment\n ret.env <- new.env()\n for(set in dataset) {\n tmp <- inner(set)\n # check for errors however you like here. You might have inner return a list, and\n # have the list contain an error component\n assign(set, tmp, envir=ret.env)\n }\n return(ret.env)\n}\n\n#The inner function might be defined like this\n\ninner <- function(dataset) {\n # I don't know what you are doing here, but lets pretend you are reading a data file\n # that is named by dataset\n filedata <- read.table(dataset, header=T)\n return(filedata)\n}\n\nleif\n", "FYI, here's a full sample toy solution that avoids side effects:\nouterfunc <- function(names) {\n templist <- list()\n for (aname in names) {\n templist[[aname]] <- innerfunc(aname)\n }\n templist\n}\n\ninnerfunc <- function(aname) {\n retval <- NULL\n if (\"one\" %in% aname) retval <- c(1)\n if (\"two\" %in% aname) retval <- c(1,2)\n if (\"three\" %in% aname) retval <- c(1,2,3)\n retval\n}\n\nnames <- c(\"one\",\"two\",\"three\")\n\nname_vals <- outerfunc(names)\n\nfor (name in names) assign(name, name_vals[[name]])\n\n", "Third approach: inner function returns a reference to the large array, which the next statement inside the loop then dereferences and stores wherever it's needed (ideally with a single pointer store and not by having to memcopy the entire array).\nThis gets rid of both the side effect and the passing of large datastructures.\n" ]
[ 9, 6, 4, 1, 1, 0 ]
[ "It's tough to say definitively without knowing the language/compiler used. However, if you can simply pass a pointer/reference to the object that you're creating, then the size of the object itself has nothing to do with the speed of the function calls. Manipulating this data down the road could be a different story.\n" ]
[ -1 ]
[ "function", "global_variables", "memory", "r", "side_effects" ]
stackoverflow_0000079709_function_global_variables_memory_r_side_effects.txt
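One goal stated in the question above, keeping the results already processed when the run is terminated or a network timeout hits, is not shown in any of the answers' code. A rough sketch without global side effects follows; inner(), the dataset names, and the output directory are placeholders.

```r
# Sketch: collect results in the outer function and persist each one as it is
# produced, so an interrupted run keeps everything finished so far.
# inner(), the dataset names, and out_dir are illustrative placeholders.
process_datasets <- function(list_of_datasets, out_dir = "results") {
  dir.create(out_dir, showWarnings = FALSE)
  results <- list()
  for (dataset in list_of_datasets) {
    chunk <- tryCatch(inner(dataset), error = function(e) NULL)
    if (is.null(chunk)) next                    # skip failures, keep the rest
    results[[dataset]] <- chunk
    saveRDS(chunk, file.path(out_dir, paste0(dataset, ".rds")))
  }
  results                                       # no globals were touched
}

inner <- function(dataset) {
  rnorm(10)                                     # stand-in for the real acquisition
}

partial_or_full <- process_datasets(c("a", "b", "c"))
```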
Q: How can I monitor trace output of a .Net app? I'm working on some code that uses the System.Diagnostics.Trace class and I'm wondering how to monitor what is written via calls to Trace.WriteLine() both when running in debug mode in Visual Studio and when running outside the debugger. A: Try Debug View. It works quite nicely. A: I use a simple little program called 'BareTail' which displays plain text files, updating its display as the file gets written to and follows (or wraps) to the bottom of the file. When running outside the debugger you'll need to attach a file-writer to write out the trace information, which you can do by adding a few lines to the .exe.config file. Hope that helps ;o) A: Have a look at DevTracer. It also allows monitoring a .NET application remotely. DISCLAIMER: I am the developer of DevTracer and therefore my opinion may not be neutral.
How can I monitor trace output of a .Net app?
I'm working on some code that uses the System.Diagnostics.Trace class and I'm wondering how to monitor what is written via calls to Trace.WriteLine() both when running in debug mode in Visual Studio and when running outside the debugger.
[ "Try Debug View. It works quite nicely.\n", "I use a simple little program called 'BareTail' which displays plain text files, updating it's display as the file gets written to and follows (or wraps) to the bottom of the file.\nWhen running outside the debugger you'll need to attach a file-writer to write out the trace information, which you can do by adding a few lines to the .exe.config file\nHope that Helps ;o)\n", "Have a look at DevTracer. It also allows monitoring a .NET application remotely.\nDISCLAIMER: I am the developer of DevTracer and therfore my opinion may not be neutral.\n" ]
[ 11, 1, 1 ]
[]
[]
[ ".net", "visual_studio", "windows" ]
stackoverflow_0000054836_.net_visual_studio_windows.txt
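The BareTail answer above mentions attaching a file writer "by adding a few lines to the .exe.config file" without showing them. A typical fragment is sketched below; the listener name and log file path are illustrative, and the built-in Default listener is left in place so the Visual Studio output window and DebugView keep seeing the same output.

```xml
<configuration>
  <system.diagnostics>
    <trace autoflush="true" indentsize="4">
      <listeners>
        <!-- Writes every Trace.WriteLine() call to trace.log next to the exe. -->
        <add name="fileListener"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="trace.log" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```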
Q: Pass by reference not returning in RMI for ArrayList I've got an RMI call defined as: public void remoteGetCustomerNameNumbers(ArrayList<String> customerNumberList, ArrayList<String> customerNameList) throws java.rmi.RemoteException; The function does a database lookup and populates the two ArrayLists. The calling function gets nothing. I believe this works with Vector types. Do I need to use the Vector, or is there a way to get this to work without making two calls. I've got some other ideas that I'd probably use, like returning a key/value pair, but I'd like to know if I can get this to work. Update: I would accept all of the answers given so far if I could. I hadn't known the network cost, so It makes sense to rework the function to return a LinkedHashMap instead of the two ArrayLists. A: Arguments in RMI calls a serialised. Deserialisation on the server creates a copy of the lists. If the lists remained on the client side, then the number of network calls would be quite high. You can pass remote objects, but beware of the performance implications. A: You lose your references when you make the remote call. You'll need to return the lists rather than expect them to be populated by the remote call. A: As Tom mentions, you can pass remote objects. You'd have to create a class to hold your list that implements Remote. Anytime you pass something that implements Remote as an argument, whenever the receiving side uses it, it turns around and makes a remote call back to the caller to work with that object. A: As others have already mentioned, when passing objects as parameters to an RMI method, the object will get serialized, then deserialized on the other end inside the target object containing the RMI method. This breaks the reference from the original objects passed in, as you now have two distinct objects: one in the client code calling the method, and one on the remote side. In this specific example, a better approach would be to break up your method calls (since you appear to be doing two things in one method: getting customer names and getting customer numbers) and instead have your results returned to the caller rather than passing in a collection...like this: public ArrayList<String> getCustomerNames() throws java.rmi.RemoteException; public ArrayList<String> getCustomerNumbers() throws java.rmi.RemoteException; Since both ArrayList and String implement Serializable, the results in the collection will be serialized and sent over the wire to the client code calling the method, at which point you can work with the data however you need. If instead you need to use a custom object in the collection, as long as your class implements the java.io.Serializable interface, and follows the specification for that interface you should have no problems. This would result in two separate calls over the wire, but is a much cleaner and simpler interaction, and avoids the reference breaking problem in your original example.
Pass by reference not returning in RMI for ArrayList
I've got an RMI call defined as: public void remoteGetCustomerNameNumbers(ArrayList<String> customerNumberList, ArrayList<String> customerNameList) throws java.rmi.RemoteException; The function does a database lookup and populates the two ArrayLists. The calling function gets nothing. I believe this works with Vector types. Do I need to use the Vector, or is there a way to get this to work without making two calls. I've got some other ideas that I'd probably use, like returning a key/value pair, but I'd like to know if I can get this to work. Update: I would accept all of the answers given so far if I could. I hadn't known the network cost, so It makes sense to rework the function to return a LinkedHashMap instead of the two ArrayLists.
[ "Arguments in RMI calls a serialised. Deserialisation on the server creates a copy of the lists. If the lists remained on the client side, then the number of network calls would be quite high. You can pass remote objects, but beware of the performance implications.\n", "You lose your references when you make the remote call. You'll need to return the lists rather than expect them to be populated by the remote call.\n", "As Tom mentions, you can pass remote objects. You'd have to create a class to hold your list that implements Remote. Anytime you pass something that implements Remote as an argument, whenever the receiving side uses it, it turns around and makes a remote call back to the caller to work with that object. \n", "As others have already mentioned, when passing objects as parameters to an RMI method, the object will get serialized, then deserialized on the other end inside the target object containing the RMI method. This breaks the reference from the original objects passed in, as you now have two distinct objects: one in the client code calling the method, and one on the remote side.\nIn this specific example, a better approach would be to break up your method calls (since you appear to be doing two things in one method: getting customer names and getting customer numbers) and instead have your results returned to the caller rather than passing in a collection...like this:\npublic ArrayList<String> getCustomerNames() throws java.rmi.RemoteException;\n\npublic ArrayList<String> getCustomerNumbers() throws java.rmi.RemoteException;\n\nSince both ArrayList and String implement Serializable, the results in the collection will be serialized and sent over the wire to the client code calling the method, at which point you can work with the data however you need. If instead you need to use a custom object in the collection, as long as your class implements the java.io.Serializable interface, and follows the specification for that interface you should have no problems.\nThis would result in two separate calls over the wire, but is a much cleaner and simpler interaction, and avoids the reference breaking problem in your original example.\n" ]
[ 2, 1, 1, 1 ]
[]
[]
[ "arraylist", "java", "rmi" ]
stackoverflow_0000085085_arraylist_java_rmi.txt
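Following the update at the end of the question above, a reworked remote interface that returns the data instead of trying to mutate caller-supplied lists might look like this sketch; the interface and method names are illustrative.

```java
// Sketch of returning the data rather than filling caller-supplied lists.
// Interface and method names are illustrative.
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.LinkedHashMap;

public interface CustomerDirectory extends Remote {

    // LinkedHashMap is Serializable, so the whole map (customer number -> name,
    // in insertion order) is copied back to the client in one remote call.
    LinkedHashMap<String, String> getCustomerNameNumbers() throws RemoteException;
}
```

The implementation class would be exported as usual (for example by extending UnicastRemoteObject) and simply build and return the map from the database lookup.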
Q: Is the syntax for the Wordpress style.css template element available anywhere? I've recently embarked upon the grand voyage of Wordpress theming and I've been reading through the Wordpress documentation for how to write a theme. One thing I came across here was that the style.css file must contain a specific header in order to be used by the Wordpress engine. They give a brief example but I haven't been able to turn up any formal description of what must be in the style.css header portion. Does this exist on the Wordpress site? If it doesn't could we perhaps describe it here? A: Based on http://codex.wordpress.org/Theme_Development: The following is an example of the first few lines of the stylesheet, called the style sheet header, for the Theme "Rose": /* Theme Name: Rose Theme URI: the-theme's-homepage Description: a-brief-description Author: your-name Author URI: your-URI Template: use-this-to-define-a-parent-theme--optional Version: a-number--optional Tags: a-comma-delimited-list--optional . General comments/License Statement if any. . */ The simplest Theme includes only a style.css file, plus images if any. To create such a Theme, you must specify a set of templates to inherit for use with the Theme by editing the Template: line in the style.css header comments. For example, if you wanted the Theme "Rose" to inherit the templates from another Theme called "test", you would include Template: test in the comments at the beginning of Rose's style.css. Now "test" is the parent Theme for "Rose", which still consists only of a style.css file and the concomitant images, all located in the directory wp-content/themes/Rose. (Note that specifying a parent Theme will inherit all of the template files from that Theme — meaning that any template files in the child Theme's directory will be ignored.) The comment header lines in style.css are required for WordPress to be able to identify a Theme and display it in the Administration Panel under Design > Themes as an available Theme option along with any other installed Themes. The Theme Name, Version, Author, and Author URI fields are parsed by WordPress and used to display that data in the Current Theme area on the top line of the current theme information, where the Author's Name is hyperlinked to the Author URI. The Description and Tag fields are parsed and displayed in the body of the theme's information, and if the theme has a parent theme, that information is placed in the information body as well. In the Available Themes section, only the Theme Name, Description, and Tags fields are used. None of these fields have any restrictions - all are parsed as strings. In addition, none of them are required in the code, though in practice the fields not marked as optional in the list above are all used to provide contextual information to the WordPress administrator and should be included for all themes. A: You are probably thinking about this: /* THEME NAME: Parallax THEME URI: http://parallaxdenigrate.net VERSION: .1 AUTHOR: Martin Jacobsen AUTHOR URI: http://martinjacobsen.no */ If I'm not way off, Wordpress uses this info to display in the "Activate Design" dialog in the admin backend.
Is the syntax for the Wordpress style.css template element available anywhere?
I've recently embarked upon the grand voyage of Wordpress theming and I've been reading through the Wordpress documentation for how to write a theme. One thing I came across here was that the style.css file must contain a specific header in order to be used by the Wordpress engine. They give a brief example but I haven't been able to turn up any formal description of what must be in the style.css header portion. Does this exist on the Wordpress site? If it doesn't could we perhaps describe it here?
[ "Based on http://codex.wordpress.org/Theme_Development:\nThe following is an example of the first few lines of the stylesheet, called the style sheet header, for the Theme \"Rose\":\n/* \nTheme Name: Rose\nTheme URI: the-theme's-homepage\nDescription: a-brief-description\nAuthor: your-name\nAuthor URI: your-URI\nTemplate: use-this-to-define-a-parent-theme--optional\nVersion: a-number--optional\nTags: a-comma-delimited-list--optional\n.\nGeneral comments/License Statement if any.\n.\n*/\n\nThe simplest Theme includes only a style.css file, plus images if any. To create such a Theme, you must specify a set of templates to inherit for use with the Theme by editing the Template: line in the style.css header comments. For example, if you wanted the Theme \"Rose\" to inherit the templates from another Theme called \"test\", you would include Template: test in the comments at the beginning of Rose's style.css. Now \"test\" is the parent Theme for \"Rose\", which still consists only of a style.css file and the concomitant images, all located in the directory wp-content/themes/Rose. (Note that specifying a parent Theme will inherit all of the template files from that Theme — meaning that any template files in the child Theme's directory will be ignored.)\nThe comment header lines in style.css are required for WordPress to be able to identify a Theme and display it in the Administration Panel under Design > Themes as an available Theme option along with any other installed Themes. \nThe Theme Name, Version, Author, and Author URI fields are parsed by WordPress and used to display that data in the Current Theme area on the top line of the current theme information, where the Author's Name is hyperlinked to the Author URI. The Description and Tag fields are parsed and displayed in the body of the theme's information, and if the theme has a parent theme, that information is placed in the information body as well. In the Available Themes section, only the Theme Name, Description, and Tags fields are used.\nNone of these fields have any restrictions - all are parsed as strings. In addition, none of them are required in the code, though in practice the fields not marked as optional in the list above are all used to provide contextual information to the WordPress administrator and should be included for all themes.\n", "You are probably thinking about this:\n/*\nTHEME NAME: Parallax\nTHEME URI: http://parallaxdenigrate.net\nVERSION: .1\nAUTHOR: Martin Jacobsen\nAUTHOR URI: http://martinjacobsen.no\n*/\n\nIf I'm not way off, Wordpress uses this info to display in the \"Activate Design\" dialog in the admin backend.\n" ]
[ 7, 1 ]
[]
[]
[ "css", "wordpress", "wordpress_theming" ]
stackoverflow_0000086800_css_wordpress_wordpress_theming.txt
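The first answer above explains the Template: line for inheriting a parent theme but only shows the generic header. A concrete child-theme header, with illustrative names and URIs, might look like this:

```css
/*
Theme Name: Rose Child
Theme URI: http://example.com/rose-child
Description: A child of the "test" theme; inherits all of its template files.
Author: Example Author
Author URI: http://example.com
Template: test
Version: 0.1
Tags: example, child-theme
*/
```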
Q: Interpreted languages - leveraging the compiled language behind the interpreter If there are any language designers out there (or people simply in the know), I'm curious about the methodology behind creating standard libraries for interpreted languages. Specifically, what seems to be the best approach? Defining standard functions/methods in the interpreted language, or performing the processing of those calls in the compiled language in which the interpreter is written? What got me to thinking about this was the SO question about a stripslashes()-like function in Python. My first thought was "why not define your own and just call it when you need it", but it raised the question: is it preferable, for such a function, to let the interpreted language handle that overhead, or would it be better to write an extension and leverage the compiled language behind the interpreter? A: The line between "interpreted" and "compiled" languages is really fuzzy these days. For example, the first thing Python does when it sees source code is compile it into a bytecode representation, essentially the same as what Java does when compiling class files. This is what *.pyc files contain. Then, the python runtime executes the bytecode without referring to the original source. Traditionally, a purely interpreted language would refer to the source code continuously when executing the program. When building a language, it is a good approach to build a solid foundation on which you can implement the higher level functions. If you've got a solid, fast string handling system, then the language designer can (and should) implement something like stripslashes() outside the base runtime. This is done for at least a few reasons: The language designer can show that the language is flexible enough to handle that kind of task. The language designer actually writes real code in the language, which has tests and therefore shows that the foundation is solid. Other people can more easily read, borrow, and even change the higher level function without having to be able to build or even understand the language core. Just because a language like Python compiles to bytecode and executes that doesn't mean it is slow. There's no reason why somebody couldn't write a Just-In-Time (JIT) compiler for Python, along the lines of what Java and .NET already do, to further increase the performance. In fact, IronPython compiles Python directly to .NET bytecode, which is then run using the .NET system including the JIT. To answer your question directly, the only time a language designer would implement a function in the language behind the runtime (eg. C in the case of Python) would be to maximise the performance of that function. This is why modules such as the regular expression parser are written in C rather than native Python. On the other hand, a module like getopt.py is implemented in pure Python because it can all be done there and there's no benefit to using the corresponding C library. A: There's also an increasing trend of reimplementing languages that are traditionally considered "interpreted" onto a platform like the JVM or CLR -- and then allowing easy access to "native" code for interoperability. So from Jython and JRuby, you can easily access Java code, and from IronPython and IronRuby, you can easily access .NET code. In cases like these, the ability to "leverage the compiled language behind the interpreter" could be described as the primary motivator for the new implementation. A: See the 'Papers' section at www.lua.org. 
Especially The Implementation of Lua 5.0 Lua defines all standard functions in the underlying (ANSI C) code. I believe this is mostly for performance reasons. Recently, i.e. the 'string.*' functions got an alternative implementation in pure Lua, which may prove vital for subprojects where Lua is run on top of .NET or Java runtime (where C code cannot be used). A: As long as you are using a portable API for the compiled code base like the ANSI C standard library or STL in C++, then taking advantage of those functions would keep you from reinventing the wheel and likely provide a smaller, faster interpreter. Lua takes this approach and it is definitely small and fast as compared to many others.
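Since the question above uses a stripslashes()-like helper as its running example, this is roughly what the "define it in the interpreted language" option looks like in pure Python; the exact unescaping rules are illustrative rather than a faithful port of PHP's stripslashes().

```python
# Sketch: a stripslashes()-like helper in pure Python. The escaping rules
# implemented here are illustrative.
def strip_slashes(s):
    """Remove backslash escapes: \\' -> ', \\\\ -> \\, and so on."""
    out = []
    it = iter(s)
    for ch in it:
        if ch == '\\':
            nxt = next(it, '')   # drop the backslash, keep the escaped char
            if nxt:
                out.append(nxt)
        else:
            out.append(ch)
    return ''.join(out)

assert strip_slashes(r"Don\'t \\ panic") == "Don't \\ panic"
```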
Interpreted languages - leveraging the compiled language behind the interpreter
If there are any language designers out there (or people simply in the know), I'm curious about the methodology behind creating standard libraries for interpreted languages. Specifically, what seems to be the best approach? Defining standard functions/methods in the interpreted language, or performing the processing of those calls in the compiled language in which the interpreter is written? What got me to thinking about this was the SO question about a stripslashes()-like function in Python. My first thought was "why not define your own and just call it when you need it", but it raised the question: is it preferable, for such a function, to let the interpreted language handle that overhead, or would it be better to write an extension and leverage the compiled language behind the interpreter?
[ "The line between \"interpreted\" and \"compiled\" languages is really fuzzy these days. For example, the first thing Python does when it sees source code is compile it into a bytecode representation, essentially the same as what Java does when compiling class files. This is what *.pyc files contain. Then, the python runtime executes the bytecode without referring to the original source. Traditionally, a purely interpreted language would refer to the source code continuously when executing the program.\nWhen building a language, it is a good approach to build a solid foundation on which you can implement the higher level functions. If you've got a solid, fast string handling system, then the language designer can (and should) implement something like stripslashes() outside the base runtime. This is done for at least a few reasons:\n\nThe language designer can show that the language is flexible enough to handle that kind of task.\nThe language designer actually writes real code in the language, which has tests and therefore shows that the foundation is solid.\nOther people can more easily read, borrow, and even change the higher level function without having to be able to build or even understand the language core.\n\nJust because a language like Python compiles to bytecode and executes that doesn't mean it is slow. There's no reason why somebody couldn't write a Just-In-Time (JIT) compiler for Python, along the lines of what Java and .NET already do, to further increase the performance. In fact, IronPython compiles Python directly to .NET bytecode, which is then run using the .NET system including the JIT.\nTo answer your question directly, the only time a language designer would implement a function in the language behind the runtime (eg. C in the case of Python) would be to maximise the performance of that function. This is why modules such as the regular expression parser are written in C rather than native Python. On the other hand, a module like getopt.py is implemented in pure Python because it can all be done there and there's no benefit to using the corresponding C library.\n", "There's also an increasing trend of reimplementing languages that are traditionally considered \"interpreted\" onto a platform like the JVM or CLR -- and then allowing easy access to \"native\" code for interoperability. So from Jython and JRuby, you can easily access Java code, and from IronPython and IronRuby, you can easily access .NET code.\nIn cases like these, the ability to \"leverage the compiled language behind the interpreter\" could be described as the primary motivator for the new implementation.\n", "See the 'Papers' section at www.lua.org.\nEspecially The Implementation of Lua 5.0\nLua defines all standard functions in the underlying (ANSI C) code. I believe this is mostly for performance reasons. Recently, i.e. the 'string.*' functions got an alternative implementation in pure Lua, which may prove vital for subprojects where Lua is run on top of .NET or Java runtime (where C code cannot be used).\n", "As long as you are using a portable API for the compiled code base like the ANSI C standard library or STL in C++, then taking advantage of those functions would keep you from reinventing the wheel and likely provide a smaller, faster interpreter. Lua takes this approach and it is definitely small and fast as compared to many others.\n" ]
[ 6, 3, 2, 1 ]
[]
[]
[ "interpreted_language", "language_agnostic", "language_features", "performance" ]
stackoverflow_0000013586_interpreted_language_language_agnostic_language_features_performance.txt
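As a rough illustration of the point made in the first answer above — that a helper like stripslashes() can comfortably live in the interpreted language itself, layered on the fast C-implemented string core — here is a pure-Python sketch. It is not from the original thread, and the exact PHP escaping rules are simplified; the function name and behaviour are assumptions for the example.

def stripslashes(s):
    """Roughly mimic PHP's stripslashes(): drop the backslash, keep what it escaped."""
    out = []
    chars = iter(s)
    for ch in chars:
        if ch == "\\":
            out.append(next(chars, ""))  # an escaped backslash stays as a single backslash
        else:
            out.append(ch)
    return "".join(out)

print(stripslashes(r"O\'Reilly \\ Co."))  # prints: O'Reilly \ Co.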
Q: Converting std::vector<>::iterator to .NET interface in C++/CLI I am wrapping a native C++ class, which has the following methods: class Native { public: class Local { std::string m_Str; int m_Int; }; typedef std::vector<Local> LocalVec; typedef LocalVec::iterator LocalIter; LocalIter BeginLocals(); LocalIter EndLocals(); private: LocalVec m_Locals; }; 1) What is the ".NET way" of representing this same kind of interface? A single method returning an array<>? Does the array<> generic have iterators, so that I could implement BeginLocals() and EndLocals()? 2) Should Local be declared as a value struct in the .NET wrapper? I'd really like to represent the wrapped class with a .NET flavor, but I'm very new to the managed world - and this type of information is frustrating to google for... A: Iterators aren't exactly translatable to "the .net way", but they are roughly replaced by IEnumerable < T > and IEnumerator < T >. Rather than vector<int> a_vector; vector<int>::iterator a_iterator; for(int i= 0; i < 100; i++) { a_vector.push_back(i); } int total = 0; a_iterator = a_vector.begin(); while( a_iterator != a_vector.end() ) { total += *a_iterator; a_iterator++; } you would see (in c#) List<int> a_list = new List<int>(); for(int i=0; i < 100; i++) { a_list.Add(i); } int total = 0; foreach( int item in a_list) { total += item; } Or more explicitly (without hiding the IEnumerator behind the foreach syntax sugar): List<int> a_list = new List<int>(); for (int i = 0; i < 100; i++) { a_list.Add(i); } int total = 0; IEnumerator<int> a_enumerator = a_list.GetEnumerator(); while (a_enumerator.MoveNext()) { total += a_enumerator.Current; } As you can see, foreach just hides the .net enumerator for you. So really, the ".net way" would be to simply allow people to create List< Local > items for themselves. If you do want to control iteration or make the collection a bit more custom, have your collection implement the IEnumerable< T > and/or ICollection< T > interfaces as well. A near direct translation to c# would be pretty much what you assumed: public class Native { public class Local { public string m_str; public int m_int; } private List<Local> m_Locals = new List<Local>(); public List<Local> Locals { get{ return m_Locals;} } } Then a user would be able to foreach( Local item in someNative.Locals) { ... } A: @Phillip - Thanks, your answer really got me started in the right direction. After seeing your code, and doing a little more reading in Nish's book C++/CLI in Action, I think using an indexed property that returns a const tracking handle to a Local instance on the managed heap is probably the best approach. I ended up implementing something similar to the following: public ref class Managed { public: ref class Local { String^ m_Str; int m_Int; }; property const Local^ Locals[int] { const Local^ get(int Index) { // error checking here... return m_Locals[Index]; } }; private: List<Local^> m_Locals; };
Converting std::vector<>::iterator to .NET interface in C++/CLI
I am wrapping a native C++ class, which has the following methods: class Native { public: class Local { std::string m_Str; int m_Int; }; typedef std::vector<Local> LocalVec; typedef LocalVec::iterator LocalIter; LocalIter BeginLocals(); LocalIter EndLocals(); private: LocalVec m_Locals; }; 1) What is the ".NET way" of representing this same kind of interface? A single method returning an array<>? Does the array<> generic have iterators, so that I could implement BeginLocals() and EndLocals()? 2) Should Local be declared as a value struct in the .NET wrapper? I'd really like to represent the wrapped class with a .NET flavor, but I'm very new to the managed world - and this type of information is frustrating to google for...
[ "Iterators aren't exactly translatable to \"the .net way\", but they are roughly replaced by IEnumerable < T > and IEnumerator < T >. \nRather than \n vector<int> a_vector;\n vector<int>::iterator a_iterator;\n for(int i= 0; i < 100; i++)\n {\n a_vector.push_back(i);\n }\n\n int total = 0;\n a_iterator = a_vector.begin();\n while( a_iterator != a_vector.end() ) {\n total += *a_iterator;\n a_iterator++;\n }\n\nyou would see (in c#)\nList<int> a_list = new List<int>();\nfor(int i=0; i < 100; i++)\n{\n a_list.Add(i);\n}\nint total = 0;\nforeach( int item in a_list)\n{\n total += item;\n}\n\nOr more explicitly (without hiding the IEnumerator behind the foreach syntax sugar):\nList<int> a_list = new List<int>();\nfor (int i = 0; i < 100; i++)\n{\n a_list.Add(i);\n}\nint total = 0;\nIEnumerator<int> a_enumerator = a_list.GetEnumerator();\nwhile (a_enumerator.MoveNext())\n{\n total += a_enumerator.Current;\n}\n\nAs you can see, foreach just hides the .net enumerator for you.\nSo really, the \".net way\" would be to simply allow people to create List< Local > items for themselves. If you do want to control iteration or make the collection a bit more custom, have your collection implement the IEnumerable< T > and/or ICollection< T > interfaces as well.\nA near direct translation to c# would be pretty much what you assumed:\npublic class Native\n{\n public class Local\n { \n public string m_str;\n public int m_int;\n }\n\n private List<Local> m_Locals = new List<Local>();\n\n public List<Local> Locals\n {\n get{ return m_Locals;}\n }\n}\n\nThen a user would be able to \nforeach( Local item in someNative.Locals) \n{\n ... \n}\n\n", "@Phillip - Thanks, your answer really got me started in the right direction. \nAfter seeing your code, and doing a little more reading in Nish's book C++/CLI in Action, I think using an indexed property that returns a const tracking handle to a Local instance on the managed heap is probably the best approach. I ended up implementing something similar to the following:\npublic ref class Managed\n{\n public:\n ref class Local\n {\n String^ m_Str;\n int m_Int;\n };\n\n property const Local^ Locals[int]\n {\n const Local^ get(int Index)\n {\n // error checking here...\n return m_Locals[Index];\n }\n };\n\n private:\n List<Local^> m_Locals;\n};\n\n" ]
[ 5, 0 ]
[]
[]
[ ".net", "arrays", "c++_cli", "marshalling", "vector" ]
stackoverflow_0000085033_.net_arrays_c++_cli_marshalling_vector.txt
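Following on from the indexed-property version above, a natural next step is letting C# callers foreach over the wrapper. The sketch below is illustrative only — the element type is simplified to String^ rather than the nested Local class, the class name is reused from the answer, and the wrapper simply forwards enumeration to its internal List<T>.

using namespace System;
using namespace System::Collections::Generic;

public ref class Managed : IEnumerable<String^>
{
public:
    Managed() : m_Locals(gcnew List<String^>()) { }

    void Add(String^ item) { m_Locals->Add(item); }

    // Generic enumerator: just hand back the list's own enumerator.
    virtual IEnumerator<String^>^ GetEnumerator()
    {
        return m_Locals->GetEnumerator();
    }

    // IEnumerable<T> also drags in the non-generic overload; implement it by name.
    virtual System::Collections::IEnumerator^ GetEnumeratorNonGeneric() sealed
        = System::Collections::IEnumerable::GetEnumerator
    {
        return GetEnumerator();
    }

private:
    List<String^>^ m_Locals;
};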
Q: Is there a specific name for the node that coresponds to a subtree? I'm designing a web site navigation hierarchy. It's a tree of nodes. Nodes represent web pages. Some nodes on the tree are special. I need a name for them. There are multiple such nodes. Each is the "root" of a sub-tree with pages that have a distinct logo, style sheet, or layout. Think of different departments. site map with color-coded sub-trees http://img518.imageshack.us/img518/153/subtreesfe1.gif What should I name this type of node? A: How about Root (node with children, but no parent), Node (node with children and parent) and Leaf (node with no children and parent)? You can then distinguish by name and position within the tree structure (E.g. DepartmentRoot, DepartmentNode, DepartmentLeaf) if need be.. Update Following Comment from OP Looking at your question, you said that "some" are special, and in your diagram, you have different nodes looking differently at different levels. The nodes may be different in their design, you can build a tree structure many ways. For example, a single abstract class that can have child nodes, if no children, its a leaf, if no parent, its a root but this can change in its lifetime. Or, a fixed class structure in which leafs are a specific class type that cannot have children added to them in any way. IF your design does not need you to distinguish nodes differently depending on their position (relative to the root) it suggests that you have an abstract class used for them all. In which case, it raises the question, how is it different? If it is simply the same as the standard node everywhere else, but with a bit of styling, how about StyledNode? Do you even need it to be seperate (no style == no big deal, it doesn't render). Since I don't know the mechanics of how the tree is architected, there could possibly be several factors to consider when naming. A: The word you are looking for is "Section". It's part of a whole and has the same stuff inside. So, you have Nodes, which have children and a parent, and you have SectionNodes which are the roots of these special subtrees. A: How about PageTemplate to embody the fact that its children have their own layout, CSS etc? A: AreaNode A: So, it sounds like you are gathering categories. The nodes are the entry points of this categories. How about "TopCategoryNode", "CategoryEntry" then for som,ething that is below them. Or, if you want to divide more, something like "CategoryCSS", "CategoryLayout" etc? This is kind of generic, but you make clear that there are "categories", and that these do consist of more than one subnode, or subtheme. A: Branch ? Keeps the tree analogy and also hints in this case at departments etc Thinking about class heirarchies, Root is probably a special case of Branch, which is a special case of Node, special case of Leaf. The Branch/Node distinction is one you get to make for your special situation.
Is there a specific name for the node that corresponds to a subtree?
I'm designing a web site navigation hierarchy. It's a tree of nodes. Nodes represent web pages. Some nodes on the tree are special. I need a name for them. There are multiple such nodes. Each is the "root" of a sub-tree with pages that have a distinct logo, style sheet, or layout. Think of different departments. site map with color-coded sub-trees http://img518.imageshack.us/img518/153/subtreesfe1.gif What should I name this type of node?
[ "How about Root (node with children, but no parent), Node (node with children and parent) and Leaf (node with no children and parent)?\nYou can then distinguish by name and position within the tree structure (E.g. DepartmentRoot, DepartmentNode, DepartmentLeaf) if need be..\nUpdate Following Comment from OP\nLooking at your question, you said that \"some\" are special, and in your diagram, you have different nodes looking differently at different levels. The nodes may be different in their design, you can build a tree structure many ways. For example, a single abstract class that can have child nodes, if no children, its a leaf, if no parent, its a root but this can change in its lifetime. Or, a fixed class structure in which leafs are a specific class type that cannot have children added to them in any way.\nIF your design does not need you to distinguish nodes differently depending on their position (relative to the root) it suggests that you have an abstract class used for them all.\nIn which case, it raises the question, how is it different?\nIf it is simply the same as the standard node everywhere else, but with a bit of styling, how about StyledNode? Do you even need it to be seperate (no style == no big deal, it doesn't render).\nSince I don't know the mechanics of how the tree is architected, there could possibly be several factors to consider when naming.\n", "The word you are looking for is \"Section\". It's part of a whole and has the same stuff inside.\nSo, you have Nodes, which have children and a parent, and you have SectionNodes which are the roots of these special subtrees.\n", "How about PageTemplate to embody the fact that its children have their own layout, CSS etc?\n", "AreaNode\n", "So, it sounds like you are gathering categories. The nodes are the entry points of this categories. How about \"TopCategoryNode\", \"CategoryEntry\" then for som,ething that is below them. Or, if you want to divide more, something like \"CategoryCSS\", \"CategoryLayout\" etc?\nThis is kind of generic, but you make clear that there are \"categories\", and that these do consist of more than one subnode, or subtheme.\n", "Branch ?\nKeeps the tree analogy and also hints in this case at departments etc\nThinking about class heirarchies, Root is probably a special case of Branch, which is a special case of Node, special case of Leaf. The Branch/Node distinction is one you get to make for your special situation.\n" ]
[ 6, 2, 1, 1, 1, 1 ]
[]
[]
[ "class_design", "data_structures", "naming", "naming_conventions", "tree" ]
stackoverflow_0000086790_class_design_data_structures_naming_naming_conventions_tree.txt
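Whatever name wins (SectionNode, Branch, Section, ...), the shape most of these answers imply is an ordinary tree node plus a subtype that carries the per-department branding. A minimal, purely illustrative C# sketch — the property names are invented:

using System.Collections.Generic;

public class PageNode
{
    public string Title { get; set; }
    public PageNode Parent { get; set; }
    public readonly List<PageNode> Children = new List<PageNode>();
}

// The "special" node: a normal page that is also the root of a styled sub-tree.
public class SectionNode : PageNode
{
    public string LogoUrl { get; set; }
    public string StyleSheet { get; set; }
    public string Layout { get; set; }
}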
Q: Inheritance in database? Is there any way to use inheritance in database (Specifically in SQL Server 2005)? Suppose I have few field like CreatedOn, CreatedBy which I want to add on all of my entities. I looking for an alternative way instead of adding these fields to every table. A: There is no such thing as inheritance between tables in SQL Server 2005, and as noted by the others, you can get as far as getting help adding the necessary columns to the tables when you create them, but it won't be inheritance as you know it. Think of it more like a template for your source code files. As GateKiller mentions, you can create a table containing the shared data and reference it with a foreign key, but you'll either have to have audit hooks, triggers, or do the update manually. Bottom line: Manual work. A: PostgreSQL has this feature. Just add this to the end of your table definition: INHERITS FROM (tablename[, othertable...]) The child table will have all the columns of its parent, and changes to the parent table will change the child. Also, everything in the child table will come up in queries to the parent table (by default). Unfortunately indices don't cross the parent/child border, which also means you can't make sure that certain columns are unique across both the parent and child. As far as I know, it's not a feature used very often. A: You could create a template in the template pane in Management Studio. And then use that template every time you want to create a new table. Failing that, you could store the CreatedOn and CreatedBy fields in an Audit trail table referencing the original table and id. Failing that, do it manually. A: You could use a data modeling tool such as ER/Studio or ERWin. Both tools have domain columns where you can define a column template that you can apply to any table. When the domain changes so do the associated columns. ER/Studio also has trigger templates that you can build and apply to any table. This is how we update our LastUpdatedBy and LastUpdatedDate columns without having to build and maintain hundreds of trigger scripts. If you do create an audit table you would have one row for every row in every table that uses the audit table. That could get messy. In my opinion, you're better off putting the audit columns in every table. You also may want to put a timestamp column in all of your tables. You never know when concurrency becomes a problem. Our DB audit columns that we put in every table are: CreatedDt, LastUpdatedBy, LastUpdatedDt and Timestamp. Hope this helps. A: We have a SProc that adds audit columns to a given table, and (optionally) creates a history table and associated triggers to track changes to a value. Unfortunately, company policy means I can't share, but it really isn't difficult to achieve. A: If you are using GUIDs you could create a CreateHistory table with columns GUID, CreatedOn, CreatedBy. For populating the table you would still have to create a trigger for every table or handle it in the application logic. A: You do NOT want to use inheritance to do this! When table B, C and D inherits from table A, that means that querying table A will give you records from B, C and D. Now consider... DELETE FROM a; Instead of inheritance, use LIKE instead... CREATE TABLE blah ( blah_id serial PRIMARY KEY , something text NOT NULL , LIKE template_table INCLUDING DEFALUTS ); A: Ramesh - I would implement this using supertype and subtype relationships in my E-R model. 
There are a few different physical options you have of implementing the relationships as well. A: in O-R mapping, inheritance maps to a parent table where the parent and child tables use the same identifier for example create table Object ( Id int NOT NULL --primary key, auto-increment Name varchar(32) ) create table SubObject ( Id int NOT NULL --primary key and also foreign key to Object Description varchar(32) ) SubObject has a foreign-key relationship to Object. when you create a SubObject row, you must first create an Object row and use the Id in both rows
Inheritance in database?
Is there any way to use inheritance in a database (specifically in SQL Server 2005)? Suppose I have a few fields like CreatedOn, CreatedBy which I want to add to all of my entities. I'm looking for an alternative way instead of adding these fields to every table.
[ "There is no such thing as inheritance between tables in SQL Server 2005, and as noted by the others, you can get as far as getting help adding the necessary columns to the tables when you create them, but it won't be inheritance as you know it.\nThink of it more like a template for your source code files.\nAs GateKiller mentions, you can create a table containing the shared data and reference it with a foreign key, but you'll either have to have audit hooks, triggers, or do the update manually.\nBottom line: Manual work.\n", "PostgreSQL has this feature. Just add this to the end of your table definition:\nINHERITS FROM (tablename[, othertable...])\n\nThe child table will have all the columns of its parent, and changes to the parent table will change the child. Also, everything in the child table will come up in queries to the parent table (by default). Unfortunately indices don't cross the parent/child border, which also means you can't make sure that certain columns are unique across both the parent and child.\nAs far as I know, it's not a feature used very often.\n", "You could create a template in the template pane in Management Studio. And then use that template every time you want to create a new table.\nFailing that, you could store the CreatedOn and CreatedBy fields in an Audit trail table referencing the original table and id.\nFailing that, do it manually.\n", "You could use a data modeling tool such as ER/Studio or ERWin. Both tools have domain columns where you can define a column template that you can apply to any table. When the domain changes so do the associated columns. ER/Studio also has trigger templates that you can build and apply to any table. This is how we update our LastUpdatedBy and LastUpdatedDate columns without having to build and maintain hundreds of trigger scripts. \nIf you do create an audit table you would have one row for every row in every table that uses the audit table. That could get messy. In my opinion, you're better off putting the audit columns in every table. You also may want to put a timestamp column in all of your tables. You never know when concurrency becomes a problem. Our DB audit columns that we put in every table are: CreatedDt, LastUpdatedBy, LastUpdatedDt and Timestamp.\nHope this helps.\n", "We have a SProc that adds audit columns to a given table, and (optionally) creates a history table and associated triggers to track changes to a value. Unfortunately, company policy means I can't share, but it really isn't difficult to achieve.\n", "If you are using GUIDs you could create a CreateHistory table with columns GUID, CreatedOn, CreatedBy. For populating the table you would still have to create a trigger for every table or handle it in the application logic.\n", "You do NOT want to use inheritance to do this! When table B, C and D inherits from table A, that means that querying table A will give you records from B, C and D. Now consider...\nDELETE FROM a;\nInstead of inheritance, use LIKE instead...\nCREATE TABLE blah (\n blah_id serial PRIMARY KEY\n , something text NOT NULL\n , LIKE template_table INCLUDING DEFALUTS\n);\n\n", "Ramesh - I would implement this using supertype and subtype relationships in my E-R model. 
There are a few different physical options you have of implementing the relationships as well.\n", "in O-R mapping, inheritance maps to a parent table where the parent and child tables use the same identifier\nfor example\ncreate table Object (\n Id int NOT NULL --primary key, auto-increment\n Name varchar(32)\n)\ncreate table SubObject (\n Id int NOT NULL --primary key and also foreign key to Object\n Description varchar(32)\n)\n\nSubObject has a foreign-key relationship to Object. when you create a SubObject row, you must first create an Object row and use the Id in both rows\n" ]
[ 3, 2, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "database", "inheritance", "sql", "sql_server_2005" ]
stackoverflow_0000005802_database_inheritance_sql_sql_server_2005.txt
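To make the supertype/subtype and O-R mapping suggestions concrete, here is one common T-SQL layout; the table and column names are invented for the example, and the audit columns live once on the supertype instead of on every table.

CREATE TABLE dbo.Entity (
    EntityId  INT IDENTITY(1,1) PRIMARY KEY,
    CreatedOn DATETIME NOT NULL DEFAULT GETDATE(),
    CreatedBy SYSNAME  NOT NULL DEFAULT SUSER_SNAME()
);

CREATE TABLE dbo.Customer (
    EntityId INT NOT NULL PRIMARY KEY
        REFERENCES dbo.Entity (EntityId),
    Name     VARCHAR(100) NOT NULL
);

-- Inserting a Customer means inserting its Entity row first,
-- exactly as the O-R mapping answer describes.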
Q: TimeStamp in Control File I have a script that takes a table name and generates a control file by querying all the columns/rows the table. This works fine for numeric and character data but fails on timestamp data so I need to adjust the script to output the timestamp data into the control in such a way that it can be read in properly. So essentially, my question is how to format TimeStamp data in a control file so that it can be inputed into a TimeStamp column. A: You need to use to_date in your column listing as demonstrated here. Something like: LOAD DATA INFILE * INTO TABLE some_table FIELDS TERMINATED BY "," ( col1 col2 "to_date(:col2, 'YYYY-MM-DD HH24:MI:SS')" ) BEGINDATA foo,2008-09-17 13:00:00 bar,2008-09-17 13:30:05
TimeStamp in Control File
I have a script that takes a table name and generates a control file by querying all the columns/rows of the table. This works fine for numeric and character data but fails on timestamp data, so I need to adjust the script to output the timestamp data into the control file in such a way that it can be read in properly. So essentially, my question is how to format TimeStamp data in a control file so that it can be loaded into a TimeStamp column.
[ "You need to use to_date in your column listing as demonstrated here. Something like:\n\n\nLOAD DATA\nINFILE *\nINTO TABLE some_table\nFIELDS TERMINATED BY \",\"\n( col1\n col2 \"to_date(:col2, 'YYYY-MM-DD HH24:MI:SS')\"\n)\nBEGINDATA\nfoo,2008-09-17 13:00:00\nbar,2008-09-17 13:30:05\n\n\n" ]
[ 0 ]
[]
[]
[ "controlfile", "oracle", "sql_loader" ]
stackoverflow_0000086684_controlfile_oracle_sql_loader.txt
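For a column that is genuinely a TIMESTAMP (with fractional seconds) rather than a DATE, the same control-file trick works with to_timestamp; the format mask and sample data below are illustrative.

LOAD DATA
INFILE *
INTO TABLE some_table
FIELDS TERMINATED BY ","
( col1
  col2 "to_timestamp(:col2, 'YYYY-MM-DD HH24:MI:SS.FF3')"
)
BEGINDATA
foo,2008-09-17 13:00:00.123
bar,2008-09-17 13:30:05.456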
Q: Change report data visibility based on rendering format in Reporting Services Is it possible to hide or exclude certain data from a report if it's being rendered in a particular format (csv, xml, excel, pdf, html). The problem is that I want hyperlinks to other reports to not be rendered when the report is generated in Excel format - but they should be there when the report is rendered in HTML format. A: The way I did this w/SSRS 2005 for a web app using the ReportViewer control is I had a hidden boolean report parameter which was used in the report decide if to render text as hyperlinks or not. Then the trick was how to send that parameter value depending on the rendering format. The way I did that was by disabling the ReportViewer export controls (by setting its ShowExportControls property to false) and making my own ASP.NET buttons for each format I wanted to be exportable. The code for those buttons first set the hidden boolean parameter and refreshed the report: ReportViewer1.ServerReport.SetParameters(New ReportParameter() {New ReportParameter("ExportView", "True")}) ReportViewer1.ServerReport.Refresh() Then you need to programmatically export the report. See this page for an example of how to do that (ignore the first few lines of code that create and initialize a ReportViewer). A: I don't think this is possible in the 2000 version, but might be in later versions. If I remember right, we ended up making two versions of the report.
Change report data visibility based on rendering format in Reporting Services
Is it possible to hide or exclude certain data from a report if it's being rendered in a particular format (csv, xml, excel, pdf, html). The problem is that I want hyperlinks to other reports to not be rendered when the report is generated in Excel format - but they should be there when the report is rendered in HTML format.
[ "The way I did this w/SSRS 2005 for a web app using the ReportViewer control is I had a hidden boolean report parameter which was used in the report decide if to render text as hyperlinks or not.\nThen the trick was how to send that parameter value depending on the rendering format. The way I did that was by disabling the ReportViewer export controls (by setting its ShowExportControls property to false) and making my own ASP.NET buttons for each format I wanted to be exportable. The code for those buttons first set the hidden boolean parameter and refreshed the report:\nReportViewer1.ServerReport.SetParameters(New ReportParameter() {New ReportParameter(\"ExportView\", \"True\")})\nReportViewer1.ServerReport.Refresh()\n\nThen you need to programmatically export the report. See this page for an example of how to do that (ignore the first few lines of code that create and initialize a ReportViewer).\n", "I don't think this is possible in the 2000 version, but might be in later versions.\nIf I remember right, we ended up making two versions of the report.\n" ]
[ 3, 0 ]
[]
[]
[ "report", "reporting_services", "sql_server" ]
stackoverflow_0000082417_report_reporting_services_sql_server.txt
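On the report side, the hidden ExportView parameter from the first answer usually ends up in the textbox's hyperlink/navigation expression, something along these lines — the parameter, report path and field names here are assumptions, and returning Nothing is what suppresses the link when exporting:

=IIF(Parameters!ExportView.Value,
     Nothing,
     "http://myserver/ReportServer?/Reports/AccountDetail&AccountId=" & Fields!AccountId.Value)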
Q: Where does CGI.pm normally create temporary files? On all my Windows servers, except for one machine, when I execute the following code to allocate a temporary files folder: use CGI; my $tmpfile = new CGITempFile(1); print "tmpfile='", $tmpfile->as_string(), "'\n"; The variable $tmpfile is assigned the value '.\CGItemp1' and this is what I want. But on one of my servers it's incorrectly set to C:\temp\CGItemp1. All the servers are running Windows 2003 Standard Edition, IIS6 and ActivePerl 5.8.8.822 (upgrading to later version of Perl not an option). The result is always the same when running a script from the command line or in IIS as a CGI script (where scriptmap .pl = c:\perl\bin\perl.exe "%s" %s). How I can fix this Perl installation and force it to return '.\CGItemp1' by default? I've even copied the whole Perl folder from one of the working servers to this machine but no joy. @Hometoast: I checked the 'TMP' and 'TEMP' environment variables and also $ENV{TMP} and $ENV{TEMP} and they're identical. From command line they point to the user profile directory, for example: C:\DOCUME~1\[USERNAME]\LOCALS~1\Temp\1 When run under IIS as a CGI script they both point to: c:\windows\temp In registry key HKEY_USERS/.DEFAULT/Environment, both servers have: %USERPROFILE%\Local Settings\Temp The ActiveState implementation of CGITempFile() is clearly using an alternative mechanism to determine how it should generate the temporary folder. @Ranguard: The real problem is with the CGI.pm module and attachment handling. Whenever a file is uploaded to the site CGI.pm needs to store it somewhere temporary. To do this CGITempFile() is called within CGI.pm to allocate a temporary folder. So unfortunately I can't use File::Temp. Thanks anyway. @Chris: That helped a bunch. I did have a quick scan through the CGI.pm source earlier but your suggestion made me go back and look at it more studiously to understand the underlying algorithm. I got things working, but the oddest thing is that there was originally no c:\temp folder on the server. To obtain a temporary fix I created a c:\temp folder and set the relevant permissions for the website's anonymous user account. But because this is a shared box I couldn't leave things that way, even though the temp files were being deleted. To cut a long story short, I renamed the c:\temp folder to something different and magically the correct '.\' folder path was being returned. I also noticed that the customer had enabled FrontPage extensions on the site, which removes write access for the anonymous user account on the website folders, so this permission needed re-applying. I'm still at a loss as to why at the start of this issue CGITempFile() was returning c:\temp, even though that folder didn't exist, and why it magically started working again. A: The name of the temporary directory is held in $CGITempFile::TMPDIRECTORY and initialised in the find_tempdir function in CGI.pm. The algorithm for choosing the temporary directory is described in the CGI.pm documentation (search for -private_tempfiles). IIUC, if a C:\Temp folder exists on the server, CGI.pm will use it. If none of the directories checked in find_tempdir exist, then the current directory "." is used. I hope this helps. A: Not the direct answer to your question, but have you tried using File::Temp? It is specifically designed to work on any OS. A: If you're running this script as you, check the %TEMP% environment variable to see if if it differs. 
If IIS is executing, check the values in registry for TMP and TEMP under HKEY_USERS/.DEFAULT/Environment
Where does CGI.pm normally create temporary files?
On all my Windows servers, except for one machine, when I execute the following code to allocate a temporary files folder: use CGI; my $tmpfile = new CGITempFile(1); print "tmpfile='", $tmpfile->as_string(), "'\n"; The variable $tmpfile is assigned the value '.\CGItemp1' and this is what I want. But on one of my servers it's incorrectly set to C:\temp\CGItemp1. All the servers are running Windows 2003 Standard Edition, IIS6 and ActivePerl 5.8.8.822 (upgrading to later version of Perl not an option). The result is always the same when running a script from the command line or in IIS as a CGI script (where scriptmap .pl = c:\perl\bin\perl.exe "%s" %s). How I can fix this Perl installation and force it to return '.\CGItemp1' by default? I've even copied the whole Perl folder from one of the working servers to this machine but no joy. @Hometoast: I checked the 'TMP' and 'TEMP' environment variables and also $ENV{TMP} and $ENV{TEMP} and they're identical. From command line they point to the user profile directory, for example: C:\DOCUME~1\[USERNAME]\LOCALS~1\Temp\1 When run under IIS as a CGI script they both point to: c:\windows\temp In registry key HKEY_USERS/.DEFAULT/Environment, both servers have: %USERPROFILE%\Local Settings\Temp The ActiveState implementation of CGITempFile() is clearly using an alternative mechanism to determine how it should generate the temporary folder. @Ranguard: The real problem is with the CGI.pm module and attachment handling. Whenever a file is uploaded to the site CGI.pm needs to store it somewhere temporary. To do this CGITempFile() is called within CGI.pm to allocate a temporary folder. So unfortunately I can't use File::Temp. Thanks anyway. @Chris: That helped a bunch. I did have a quick scan through the CGI.pm source earlier but your suggestion made me go back and look at it more studiously to understand the underlying algorithm. I got things working, but the oddest thing is that there was originally no c:\temp folder on the server. To obtain a temporary fix I created a c:\temp folder and set the relevant permissions for the website's anonymous user account. But because this is a shared box I couldn't leave things that way, even though the temp files were being deleted. To cut a long story short, I renamed the c:\temp folder to something different and magically the correct '.\' folder path was being returned. I also noticed that the customer had enabled FrontPage extensions on the site, which removes write access for the anonymous user account on the website folders, so this permission needed re-applying. I'm still at a loss as to why at the start of this issue CGITempFile() was returning c:\temp, even though that folder didn't exist, and why it magically started working again.
[ "The name of the temporary directory is held in $CGITempFile::TMPDIRECTORY and initialised in the find_tempdir function in CGI.pm.\nThe algorithm for choosing the temporary directory is described in the CGI.pm documentation (search for -private_tempfiles).\nIIUC, if a C:\\Temp folder exists on the server, CGI.pm will use it. If none of the directories checked in find_tempdir exist, then the current directory \".\" is used.\nI hope this helps.\n", "Not the direct answer to your question, but have you tried using File::Temp? \nIt is specifically designed to work on any OS.\n", "If you're running this script as you, check the %TEMP% environment variable to see if if it differs.\nIf IIS is executing, check the values in registry for TMP and TEMP under\nHKEY_USERS/.DEFAULT/Environment\n" ]
[ 8, 2, 1 ]
[]
[]
[ "activestate", "cgi", "perl" ]
stackoverflow_0000086175_activestate_cgi_perl.txt
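A quick way to see what CGI.pm has decided on a given box is to print the package variable mentioned in the first answer after forcing a temp file to be allocated. This is a diagnostic sketch only; whether the variable can be overridden reliably depends on module load order, so treat anything beyond printing it as an assumption.

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

# Force CGI.pm to pick its temporary directory, then show what it chose.
my $tmpfile = new CGITempFile(1);
print "Chosen temp file: ", $tmpfile->as_string(), "\n";
print "CGITempFile::TMPDIRECTORY = $CGITempFile::TMPDIRECTORY\n";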
Q: What is the inversion of the Shunting Yard algorithm? Dijkstra's Shunting Yard algorithm is used to parse an infix notation and generate RPN output. I am looking for the opposite, a way to turn RPN into highschool-math-class style infix notation, in order to represent RPN expressions from a database to lay users in an understandable way. Please save your time and don't cook up the algorithm yourselves, just point me to textbook examples that I can't seem to find. Working backwards from the Shunting Yard algorithm and using my knowledge about the notations I'll probably be able to work up a solution. I'm just looking for a quick shortcut, so I don't have to reinvent the wheel. Oh, and please don't tag this as "homework", I swear I'm out of school already! ;-) A: Since RPN is also known as postfix notation, I tried googling convert "postfix to infix" and got quite a few results. The first several have code examples, but I found the RubyQuiz entry particularly enlightening. A: If you're not worried about removing redundant parentheses, then the following Lisp code will work: (defun rpn-to-inf (pre) (if (atom pre) pre (cond ((eq (car (last pre)) 'setf) (list (rpn-to-inf (first pre)) '= (rpn-to-inf (second pre)))) ((eq (car (last pre)) 'expt) (list (rpn-to-inf (first pre)) '^ (rpn-to-inf (second pre)))) (t (list (rpn-to-inf (first pre)) (car (last pre)) (rpn-to-inf (second pre)))))))
What is the inversion of the Shunting Yard algorithm?
Dijkstra's Shunting Yard algorithm is used to parse an infix notation and generate RPN output. I am looking for the opposite, a way to turn RPN into highschool-math-class style infix notation, in order to represent RPN expressions from a database to lay users in an understandable way. Please save your time and don't cook up the algorithm yourselves, just point me to textbook examples that I can't seem to find. Working backwards from the Shunting Yard algorithm and using my knowledge about the notations I'll probably be able to work up a solution. I'm just looking for a quick shortcut, so I don't have to reinvent the wheel. Oh, and please don't tag this as "homework", I swear I'm out of school already! ;-)
[ "Since RPN is also known as postfix notation, I tried googling convert \"postfix to infix\" and got quite a few results. The first several have code examples, but I found the RubyQuiz entry particularly enlightening.\n", "If you're not worried about removing redundant parentheses, then the following Lisp code will work:\n(defun rpn-to-inf (pre)\n (if (atom pre)\n pre\n (cond ((eq (car (last pre)) 'setf)\n (list (rpn-to-inf (first pre)) '= (rpn-to-inf (second pre))))\n ((eq (car (last pre)) 'expt)\n (list (rpn-to-inf (first pre)) '^ (rpn-to-inf (second pre))))\n (t (list (rpn-to-inf (first pre)) \n (car (last pre)) \n (rpn-to-inf (second pre)))))))\n\n" ]
[ 8, 7 ]
[]
[]
[ "algorithm", "parsing" ]
stackoverflow_0000086669_algorithm_parsing.txt
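For completeness, the usual textbook approach the answers allude to is a single stack pass over the postfix tokens. This sketch emits fully parenthesised infix and makes no attempt to drop redundant parentheses.

def rpn_to_infix(tokens, operators=("+", "-", "*", "/", "^")):
    stack = []
    for tok in tokens:
        if tok in operators:
            right = stack.pop()
            left = stack.pop()
            stack.append("(%s %s %s)" % (left, tok, right))
        else:
            stack.append(tok)
    if len(stack) != 1:
        raise ValueError("malformed RPN expression")
    return stack[0]

print(rpn_to_infix("3 4 2 * +".split()))   # prints (3 + (4 * 2))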
Q: Some sort of creational pattern needed in C# I have the following type : // incomplete class definition public class Person { private string name; public string Name { get { return this.name; } } } I want this type to be created and updated with some sort of dedicated controller/builder, but I want it to remain read-only for other types. This object also needs to fire an event every time it is updated by its controller/builder. To summary, according to the previous type definition skeleton : The Person could only be instantiated by a specific controller This controller could update the state of the Person (name field) at any time The Person need to send a notification to the rest of the world when it occurs All other types should only be able to read Person attributes How should I implement this ? I'm talking about a controller/builder here, but all others solutions are welcome. Note : I would be able to rely on the internal modifier, but ideally all my stuff should be in the same assembly. A: Create an interface IReadOnlyPerson which exposes only get accessors. Have Person implement IReadOnlyPerson. Store the reference to Person in your controller. Give other clients only the read only version. This will protect against mistakes, but not fraud, as with most OO features. Clients can runtime cast to Person if they happen to know (or suspect) IReadOnlyPerson is implemented by Person. Update, per the comment: The Read Only interface may also expose an event delegate, just like any other object. The idiom generally used in C# doesn't prevent clients from messing with the list of listeners, but convention is only to add listeners, so that should be adequate. Inside any set accessor or function with state-changing side effects, just call the event delegate with a guard for the null (no listeners) case. A: I like to have a read-only interface. Then the builder/controller/whatever can reference the object directly, but when you expose this object to the outside you show only the interface. A: Use an interface IPerson and a nested class: public class Creator { private class Person : IPerson { public string Name { get; set; } } public IPerson Create(...) ... public void Modify(IPerson person, ...) { Person dude = person as Person; if (dude == null) // wasn't created by this class. else // update the data. } } A: I think internal is the least complex and best approach (this of course involves multiple assemblies). Short of doing some overhead intensive stack walking to determine the caller in the property setter you could try: interface IPerson { Name { get; set; } } and implement this interface explicitly: class Person : IPerson { Name { get; private set; } string IPerson.Name { get { return Name; } set { Name = value; } } } then perform explicit interface casts in your builder for setting properties. This still doesn't protect your implementation and isn't a good solution though it does go some way to emphasize your intention. In your property setters you'll have to implement an event notification. Approaching this problem myself I would not create separate events and event handlers for each property but instead create a single PropertyChanged event and fire it in each property when a change occurs (where the event arguments would include the property name, old value, and new value). A: Seems odd that, though I cannot change the name of the Person object, I can simply grab its controller and change it there. That's not a good way to secure your object's data. 
But, notwithstanding, here's a way to do it: /// <summary> /// A controlled person. Not production worthy code. /// </summary> public class Person { private string _name; public string Name { get { return _name; } private set { _name = value; OnNameChanged(); } } /// <summary> /// This person's controller /// </summary> public PersonController Controller { get { return _controller ?? (_controller = new PersonController(this)); } } private PersonController _controller; /// <summary> /// Fires when <seealso cref="Name"/> changes. Go get the new name yourself. /// </summary> public event EventHandler NameChanged; private void OnNameChanged() { if (NameChanged != null) NameChanged(this, EventArgs.Empty); } /// <summary> /// A Person controller. /// </summary> public class PersonController { Person _slave; public PersonController(Person slave) { _slave = slave; } /// <summary> /// Sets the name on the controlled person. /// </summary> /// <param name="name">The name to set.</param> public void SetName(string name) { _slave.Name = name; } } } A: Maybe something like that ? public class Person { public class Editor { private readonly Person person; public Editor(Person p) { person = p; } public void SetName(string name) { person.name = name; } public static Person Create(string name) { return new Person(name); } } protected string name; public string Name { get { return this.name; } } protected Person(string name) { this.name = name; } } Person p = Person.Editor.Create("John"); Person.Editor e = new Person.Editor(p); e.SetName("Jane"); Not pretty, but I think it works. Alternatively you can use properties instead of SetX methods on the editor.
Some sort of creational pattern needed in C#
I have the following type : // incomplete class definition public class Person { private string name; public string Name { get { return this.name; } } } I want this type to be created and updated with some sort of dedicated controller/builder, but I want it to remain read-only for other types. This object also needs to fire an event every time it is updated by its controller/builder. To summary, according to the previous type definition skeleton : The Person could only be instantiated by a specific controller This controller could update the state of the Person (name field) at any time The Person need to send a notification to the rest of the world when it occurs All other types should only be able to read Person attributes How should I implement this ? I'm talking about a controller/builder here, but all others solutions are welcome. Note : I would be able to rely on the internal modifier, but ideally all my stuff should be in the same assembly.
[ "Create an interface IReadOnlyPerson which exposes only get accessors. Have Person implement IReadOnlyPerson. Store the reference to Person in your controller. Give other clients only the read only version.\nThis will protect against mistakes, but not fraud, as with most OO features. Clients can runtime cast to Person if they happen to know (or suspect) IReadOnlyPerson is implemented by Person.\nUpdate, per the comment:\nThe Read Only interface may also expose an event delegate, just like any other object. The idiom generally used in C# doesn't prevent clients from messing with the list of listeners, but convention is only to add listeners, so that should be adequate. Inside any set accessor or function with state-changing side effects, just call the event delegate with a guard for the null (no listeners) case.\n", "I like to have a read-only interface. Then the builder/controller/whatever can reference the object directly, but when you expose this object to the outside you show only the interface.\n", "Use an interface IPerson and a nested class:\npublic class Creator\n{\n private class Person : IPerson\n {\n public string Name { get; set; }\n }\n\n public IPerson Create(...) ...\n\n\n public void Modify(IPerson person, ...)\n {\n Person dude = person as Person;\n if (dude == null)\n // wasn't created by this class.\n else\n // update the data.\n }\n}\n\n", "I think internal is the least complex and best approach (this of course involves multiple assemblies). Short of doing some overhead intensive stack walking to determine the caller in the property setter you could try:\ninterface IPerson \n{\n Name { get; set; } \n}\n\nand implement this interface explicitly:\nclass Person : IPerson \n{\n Name { get; private set; }\n string IPerson.Name { get { return Name; } set { Name = value; } } \n}\n\nthen perform explicit interface casts in your builder for setting properties. This still doesn't protect your implementation and isn't a good solution though it does go some way to emphasize your intention.\nIn your property setters you'll have to implement an event notification. Approaching this problem myself I would not create separate events and event handlers for each property but instead create a single PropertyChanged event and fire it in each property when a change occurs (where the event arguments would include the property name, old value, and new value). \n", "Seems odd that, though I cannot change the name of the Person object, I can simply grab its controller and change it there. That's not a good way to secure your object's data.\nBut, notwithstanding, here's a way to do it:\n /// <summary>\n /// A controlled person. Not production worthy code.\n /// </summary>\n public class Person\n {\n private string _name;\n public string Name\n {\n get { return _name; }\n private set\n {\n _name = value;\n OnNameChanged();\n }\n }\n /// <summary>\n /// This person's controller\n /// </summary>\n public PersonController Controller\n {\n get { return _controller ?? (_controller = new PersonController(this)); }\n }\n private PersonController _controller;\n\n /// <summary>\n /// Fires when <seealso cref=\"Name\"/> changes. 
Go get the new name yourself.\n /// </summary>\n public event EventHandler NameChanged;\n\n private void OnNameChanged()\n {\n if (NameChanged != null)\n NameChanged(this, EventArgs.Empty);\n }\n\n /// <summary>\n /// A Person controller.\n /// </summary>\n public class PersonController\n {\n Person _slave;\n public PersonController(Person slave)\n {\n _slave = slave;\n }\n /// <summary>\n /// Sets the name on the controlled person.\n /// </summary>\n /// <param name=\"name\">The name to set.</param>\n public void SetName(string name) { _slave.Name = name; }\n }\n }\n\n", "Maybe something like that ?\npublic class Person\n{\n public class Editor\n {\n private readonly Person person;\n\n public Editor(Person p)\n {\n person = p;\n }\n\n public void SetName(string name)\n {\n person.name = name;\n }\n\n public static Person Create(string name)\n {\n return new Person(name);\n }\n }\n\n protected string name;\n\n public string Name\n {\n get { return this.name; }\n }\n\n protected Person(string name)\n {\n this.name = name;\n }\n}\n\nPerson p = Person.Editor.Create(\"John\");\nPerson.Editor e = new Person.Editor(p);\ne.SetName(\"Jane\");\n\nNot pretty, but I think it works. Alternatively you can use properties instead of SetX methods on the editor.\n" ]
[ 5, 1, 1, 1, 1, 0 ]
[]
[]
[ ".net", "c#", "design_patterns" ]
stackoverflow_0000086726_.net_c#_design_patterns.txt
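Pulling the top-voted suggestion together in one place: a read-only interface that also carries the change event, with the setter kept on the concrete class that only the controller holds. Names are illustrative, and as that answer notes this guards against mistakes, not against a determined cast back to Person.

using System;

public interface IReadOnlyPerson
{
    string Name { get; }
    event EventHandler NameChanged;
}

public class Person : IReadOnlyPerson
{
    public string Name { get; private set; }
    public event EventHandler NameChanged;

    // Only the controller/builder is handed the concrete Person, so only it calls this.
    public void SetName(string name)
    {
        Name = name;
        EventHandler handler = NameChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class PersonController
{
    private readonly Person person = new Person();

    // Everyone else only ever sees the read-only view.
    public IReadOnlyPerson ReadOnlyView { get { return person; } }

    public void Rename(string name) { person.SetName(name); }
}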
Q: FindControl() method throws ArithmeticException? I have a line of C# in my ASP.NET code behind that looks like this: DropDownList ddlStates = (DropDownList)fvAccountSummary.FindControl("ddlStates"); The DropDownList control is explicitly declared in the markup on the page, not dynamically created. It is inside of a FormView control. When my code hits this line, I am getting an ArithmeticException with the message "Value was either too large or too small for an Int32." This code has worked previously, and is in production right now. I fired up VS2008 to make some changes to the site, but before I changed anything, I got this exception from the page. Anyone seen this one before? A: If that's the stacktrace, its comingn from databinding, not from the line you posted. Is it possible that you have some really large data set? I've seen a 6000-page GridView overflow an Int16, although it seems pretty unlikely you'd actually overflow an Int32... Check to make sure you're passing in sane data into, say, the startpageIndex or pageSize of your datasource, for example. A: Are you 100% sure that's the line of code that's throwing the exception? I'm pretty certain that the FindControl method is not capable of throwing an ArithmeticException. Of course, I've been known to be wrong before... :) A: I have seen ArithmeticException being thrown in weird places before in C#/.NET and it was when I was working with p/invoke to an unmanaged .dll talking to an USB device. The crash was consistent, and always at the same place. Of course, the place was totally unrelated to the crash (i think it was a basic value assignment, like int i = 4 or something similarly silly) I'd like to have a happy ending to tell you, but I never managed to fully track down the problem. I strongly believe that the cause was in the unmanaged code, and that it somehow corrupted the memory or maybe even free'd managed memory. (Removing the calls to unmanaged code made the problem go away) The message I'm sending is: are you doing any calls to unmanaged code? If so, my suggestion is you focus your debugging skills there :)
FindControl() method throws ArithmeticException?
I have a line of C# in my ASP.NET code behind that looks like this: DropDownList ddlStates = (DropDownList)fvAccountSummary.FindControl("ddlStates"); The DropDownList control is explicitly declared in the markup on the page, not dynamically created. It is inside of a FormView control. When my code hits this line, I am getting an ArithmeticException with the message "Value was either too large or too small for an Int32." This code has worked previously, and is in production right now. I fired up VS2008 to make some changes to the site, but before I changed anything, I got this exception from the page. Anyone seen this one before?
[ "If that's the stacktrace, its comingn from databinding, not from the line you posted. Is it possible that you have some really large data set? I've seen a 6000-page GridView overflow an Int16, although it seems pretty unlikely you'd actually overflow an Int32...\nCheck to make sure you're passing in sane data into, say, the startpageIndex or pageSize of your datasource, for example.\n", "Are you 100% sure that's the line of code that's throwing the exception? I'm pretty certain that the FindControl method is not capable of throwing an ArithmeticException. Of course, I've been known to be wrong before... :)\n", "I have seen ArithmeticException being thrown in weird places before in C#/.NET and it was when I was working with p/invoke to an unmanaged .dll talking to an USB device.\nThe crash was consistent, and always at the same place. Of course, the place was totally unrelated to the crash (i think it was a basic value assignment, like int i = 4 or something similarly silly) \nI'd like to have a happy ending to tell you, but I never managed to fully track down the problem. I strongly believe that the cause was in the unmanaged code, and that it somehow corrupted the memory or maybe even free'd managed memory. (Removing the calls to unmanaged code made the problem go away)\nThe message I'm sending is: are you doing any calls to unmanaged code? If so, my suggestion is you focus your debugging skills there :)\n" ]
[ 3, 0, 0 ]
[]
[]
[ "asp.net", "c#" ]
stackoverflow_0000085588_asp.net_c#.txt
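Unrelated to the eventual cause, the line in question is usually written defensively: FindControl returns null whenever the FormView is not showing the template that contains the control, and the explicit cast passes that null along so the failure only shows up later. A small illustrative variant of the same line:

DropDownList ddlStates = fvAccountSummary.FindControl("ddlStates") as DropDownList;
if (ddlStates == null)
{
    // The FormView is not in the expected mode/row, so the control is not there yet.
    return;
}
// safe to use ddlStates from here on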
Q: How do I protect my file data from disk corruption? Recently, I read an article entitled "SATA vs. SCSI reliability". It mostly discusses the very high rate bit flipping in consumer SATA drives and concludes "A 56% chance that you can't read all the data from a particular disk now". Even Raid-5 can't save us as it must be constantly scanned for problems and if a disk does die you are pretty much guaranteed to have some flipped bits on your rebuilt file system. Considerations: I've heard great things about Sun's ZFS with Raid-Z but the Linux and BSD implementations are still experimental. I'm not sure it's ready for prime time yet. I've also read quite a bit about the Par2 file format. It seems like storing some extra % parity along with each file would allow you to recover from most problems. However, I am not aware of a file system that does this internally and it seems like it could be hard to manage the separate files. Backups (Edit): I understand that backups are paramount. However, without some kind of check in place you could easily be sending bad data to people without even knowing it. Also figuring out which backup has a good copy of that data could be difficult. For instance, you have a Raid-5 array running for a year and you find a corrupted file. Now you have to go back checking your backups until you find a good copy. Ideally you would go to the first backup that included the file but that may be difficult to figure out, especially if the file has been edited many times. Even worse, consider if that file was appended to or edited after the corruption occurred. That alone is reason enough for block-level parity such as Par2. A: That article significantly exaggerates the problem by misunderstanding the source. It assumes that data loss events are independent, ie that if I take a thousand disks, and get five hundred errors, that's likely to be one each on five hundred of the disks. But actually, as anyone who has had disk trouble knows, it's probably five hundred errors on one disk (still a tiny fraction of the disk's total capacity), and the other nine hundred and ninety-nine were fine. Thus, in practice it's not that there's a 56% chance that you can't read all of your disk, rather, it's probably more like 1% or less, but most of the people in that 1% will find they've lost dozens or hundreds of sectors even though the disk as a whole hasn't failed. Sure enough, practical experiments reflect this understanding, not the one offered in the article. Basically this is an example of "Chinese whispers". The article linked here refers to another article, which in turn refers indirectly to a published paper. The paper says that of course these events are not independent but that vital fact disappears on the transition to easily digested blog format. A: 56% chance I can't read something, I doubt it. I run a mix of RAID 5 and other goodies and just good backup practices but with Raid 5 and a hot spare I haven't ever had data loss so I'm not sure what all the fuss is about. If you're storing parity information ... well you're creating a RAID system using software, a disk failure in R5 results in a parity like check to get back the lost disk data so ... it is already there. Run Raid, backup your data, you be fine :) A: ZFS is a start. Many storage vendors provide 520B drives with extra data protection available as well. However, this only protects your data as soon as it enters the storage fabric. If it was corrupted at the host level, then you are hosed anyway. 
On the horizon are some promising standards-based solutions to this very problem. End-to-end data protection. Consider T10 DIF (Data Integrity Field). This is an emerging standard (it was drafted 5 years ago) and a new technology, but it has the lofty goal of solving the problem of data corruption.
How do I protect my file data from disk corruption?
Recently, I read an article entitled "SATA vs. SCSI reliability". It mostly discusses the very high rate bit flipping in consumer SATA drives and concludes "A 56% chance that you can't read all the data from a particular disk now". Even Raid-5 can't save us as it must be constantly scanned for problems and if a disk does die you are pretty much guaranteed to have some flipped bits on your rebuilt file system. Considerations: I've heard great things about Sun's ZFS with Raid-Z but the Linux and BSD implementations are still experimental. I'm not sure it's ready for prime time yet. I've also read quite a bit about the Par2 file format. It seems like storing some extra % parity along with each file would allow you to recover from most problems. However, I am not aware of a file system that does this internally and it seems like it could be hard to manage the separate files. Backups (Edit): I understand that backups are paramount. However, without some kind of check in place you could easily be sending bad data to people without even knowing it. Also figuring out which backup has a good copy of that data could be difficult. For instance, you have a Raid-5 array running for a year and you find a corrupted file. Now you have to go back checking your backups until you find a good copy. Ideally you would go to the first backup that included the file but that may be difficult to figure out, especially if the file has been edited many times. Even worse, consider if that file was appended to or edited after the corruption occurred. That alone is reason enough for block-level parity such as Par2.
[ "That article significantly exaggerates the problem by misunderstanding the source. It assumes that data loss events are independent, ie that if I take a thousand disks, and get five hundred errors, that's likely to be one each on five hundred of the disks. But actually, as anyone who has had disk trouble knows, it's probably five hundred errors on one disk (still a tiny fraction of the disk's total capacity), and the other nine hundred and ninety-nine were fine. Thus, in practice it's not that there's a 56% chance that you can't read all of your disk, rather, it's probably more like 1% or less, but most of the people in that 1% will find they've lost dozens or hundreds of sectors even though the disk as a whole hasn't failed.\nSure enough, practical experiments reflect this understanding, not the one offered in the article.\nBasically this is an example of \"Chinese whispers\". The article linked here refers to another article, which in turn refers indirectly to a published paper. The paper says that of course these events are not independent but that vital fact disappears on the transition to easily digested blog format.\n", "56% chance I can't read something, I doubt it. I run a mix of RAID 5 and other goodies and just good backup practices but with Raid 5 and a hot spare I haven't ever had data loss so I'm not sure what all the fuss is about. If you're storing parity information ... well you're creating a RAID system using software, a disk failure in R5 results in a parity like check to get back the lost disk data so ... it is already there.\nRun Raid, backup your data, you be fine :)\n", "ZFS is a start. Many storage vendors provide 520B drives with extra data protection available as well. However, this only protects your data as soon as it enters the storage fabric. If it was corrupted at the host level, then you are hosed anyway.\nOn the horizon are some promising standards-based solutions to this very problem. End-to-end data protection.\nConsider T10 DIF (Data Integrity Field). This is an emerging standard (it was drafted 5 years ago) and a new technology, but it has the lofty goal of solving the problem of data corruption.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "corruption", "filesystems", "storage" ]
stackoverflow_0000086548_corruption_filesystems_storage.txt
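One way to get the kind of check the question above is asking for, without waiting for ZFS-style filesystem support, is to record a cryptographic digest for each file while it is known to be good and re-verify it before trusting or backing up the data. Below is a minimal Java sketch of that idea; the file name and the stored digest are placeholders, and in practice the recorded digests would live in a small database or sidecar file. Note that, unlike Par2, a digest only detects corruption, it cannot repair it.

import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IntegrityCheck {

    // Compute a SHA-256 digest of a file and return it as a hex string.
    static String sha256(String path) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] buf = new byte[8192];
        try (FileInputStream in = new FileInputStream(path)) {
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder file name and a digest recorded when the file was known to be good.
        String recorded = "put-the-previously-recorded-digest-here";
        String current = sha256("archive.dat");
        System.out.println(current.equals(recorded) ? "file intact" : "corruption detected");
    }
}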
Q: Best way to transfer an xml to SQL Server? I have been hearing the podcast blog for a while, I hope I dont break this. The question is this: I have to insert an xml to a database. This will be for already defined tables and fields. So what is the best way to accomplish this? So far I am leaning toward programatic. I have been seeing varios options, one is Data Transfer Objects (DTO), in the SQL Server there is the sp_xml_preparedocument that is used to get transfer XMLs to an object and throught code. I am using CSharp and SQL Server 2005. The fields are not XML fields, they are the usual SQL datatypes. A: In an attempt to try and help, we may need some clarification. Maybe by restating the problem you can let us know if this is what you're asking: How can one import existing xml into a SQL 2005 database, without relying on the built-in xml type? A fairly straight forward solution that you already mentioned is the sp_xml_preparedocument, combined with openxml. Hopefully the following example illustrates the correct usage. For a more complete example checkout the MSDN docs on Using OPENXML. declare @XmlDocumentHandle int declare @XmlDocument nvarchar(1000) set @XmlDocument = N'<ROOT> <Customer> <FirstName>Will</FirstName> <LastName>Smith</LastName> </Customer> </ROOT>' -- Create temp table to insert data into create table #Customer ( FirstName varchar(20), LastName varchar(20) ) -- Create an internal representation of the XML document. exec sp_xml_preparedocument @XmlDocumentHandle output, @XmlDocument -- Insert using openxml allows us to read the structure insert into #Customer select FirstName = XmlFirstName, LastName = XmlLastName from openxml ( @XmlDocumentHandle, '/ROOT/Customer',2 ) with ( XmlFirstName varchar(20) 'FirstName', XmlLastName varchar(20) 'LastName' ) where ( XmlFirstName = 'Will' and XmlLastName = 'Smith' ) -- Cleanup xml document exec sp_xml_removedocument @XmlDocumentHandle -- Show the data select * from #Customer -- Drop tmp table drop table #Customer If you have an xml file and are using C#, then defining a stored procedure that does something like the above and then passing the entire xml file contents to the stored procedure as a string should give you a fairly straight forward way of importing xml into your existing table(s). A: If your XML conforms to a particular XSD schema, you can look into using the "xsd.exe" command line tool to generate C# object classes that you can bind the XML to, and then form your insert statements using the properties of those objects: MSDN XSD Doc A: Peruse this document and it will give you the options: MSDN: XML Options in Microsoft SQL Server 2005 A: You may want to use XSLT to transfer your XML into SQL statements... ie <xml type="user"> <data>1</data> <data>2</data> <xml> Then the XSLT would look like <xsl:template match="xml"> INSERT INTO <xsl:value-of select="@type" /> (data1, data2) VALUES ( '<xsl:value-of select="data[1]" />', '<xsl:value-of select="data[2]" />'); </xsl:template> The match statement most likely won't be the root node, but hopefully you get the idea. You may also need to wrap the non xsl:value-of parts in xsl:text to prevent extra characters from being dumped into the query. And you'd have to make sure the output of the XSLT was text. That said you could get a list of SQL statements that you could run through the DB. or you could use XSLT to output a T-SQL statement that you could load as a stored procedure.
Best way to transfer an xml to SQL Server?
I have been following the podcast and blog for a while; I hope I don't break this. The question is this: I have to insert XML into a database. This will be for already defined tables and fields. So what is the best way to accomplish this? So far I am leaning toward a programmatic approach. I have been looking at various options: one is Data Transfer Objects (DTOs); in SQL Server there is sp_xml_preparedocument, which is used to transfer XML into an object and work with it through code. I am using C# and SQL Server 2005. The fields are not XML fields; they are the usual SQL datatypes.
[ "In an attempt to try and help, we may need some clarification. Maybe by restating the problem you can let us know if this is what you're asking:\nHow can one import existing xml into a SQL 2005 database, without relying on the built-in xml type?\nA fairly straight forward solution that you already mentioned is the sp_xml_preparedocument, combined with openxml. \nHopefully the following example illustrates the correct usage. For a more complete example checkout the MSDN docs on Using OPENXML.\ndeclare @XmlDocumentHandle int\ndeclare @XmlDocument nvarchar(1000)\nset @XmlDocument = N'<ROOT>\n<Customer>\n <FirstName>Will</FirstName>\n <LastName>Smith</LastName>\n</Customer>\n</ROOT>'\n\n-- Create temp table to insert data into\ncreate table #Customer \n( \n FirstName varchar(20),\n LastName varchar(20) \n)\n-- Create an internal representation of the XML document.\nexec sp_xml_preparedocument @XmlDocumentHandle output, @XmlDocument\n\n-- Insert using openxml allows us to read the structure\ninsert into #Customer\nselect \n FirstName = XmlFirstName,\n LastName = XmlLastName\nfrom openxml ( @XmlDocumentHandle, '/ROOT/Customer',2 )\nwith \n(\n XmlFirstName varchar(20) 'FirstName',\n XmlLastName varchar(20) 'LastName'\n)\nwhere ( XmlFirstName = 'Will' and XmlLastName = 'Smith' )\n\n-- Cleanup xml document\nexec sp_xml_removedocument @XmlDocumentHandle\n\n-- Show the data\nselect * \nfrom #Customer\n\n-- Drop tmp table\ndrop table #Customer\n\nIf you have an xml file and are using C#, then defining a stored procedure that does something like the above and then passing the entire xml file contents to the stored procedure as a string should give you a fairly straight forward way of importing xml into your existing table(s).\n", "If your XML conforms to a particular XSD schema, you can look into using the \"xsd.exe\" command line tool to generate C# object classes that you can bind the XML to, and then form your insert statements using the properties of those objects: MSDN XSD Doc\n", "Peruse this document and it will give you the options:\nMSDN: XML Options in Microsoft SQL Server 2005\n", "You may want to use XSLT to transfer your XML into SQL statements... ie\n<xml type=\"user\">\n <data>1</data>\n <data>2</data>\n<xml>\n\nThen the XSLT would look like\n<xsl:template match=\"xml\">\n INSERT INTO <xsl:value-of select=\"@type\" /> (data1, data2) VALUES (\n '<xsl:value-of select=\"data[1]\" />',\n '<xsl:value-of select=\"data[2]\" />');\n</xsl:template>\n\nThe match statement most likely won't be the root node, but hopefully you get the idea. You may also need to wrap the non xsl:value-of parts in xsl:text to prevent extra characters from being dumped into the query. And you'd have to make sure the output of the XSLT was text. That said you could get a list of SQL statements that you could run through the DB. or you could use XSLT to output a T-SQL statement that you could load as a stored procedure.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "sql_server", "xml" ]
stackoverflow_0000059790_sql_server_xml.txt
Q: Best OS App for Outbound SMTP Packet Capture? Okay, so this probably sounds terribly nefarious, but I need such capabilities for my senior project. Essentially I'm tasked with writing something that will cut down outbound spam on a zombified pc through a system of packet interception and evaluation. We have a number of algorithms we'll use on the captured messages, but it's the actual capture -- full on interception rather than just sniffing -- that has me a bit stumped. The app is being designed for windows, so I can't use IP tables. I could use the winpcap libraries, but I don't want to reinvent the wheel if I don't have to. Ettercap seemed a good option, but a test run on vista using the unofficial binaries resulted in nothing but crashes. So, any suggestions? Update: Great suggestions. Ended up scaling back the project a bit, but still received an A. I'm thinking Adam Mintz's answer is probably best, though we used WinPcap and Wireshark for the application. A: Sounds like you need to write a Winsock LSP. Once in the stack, a Layered Service Provider can intercept and modify inbound and outbound Internet traffic. It allows processing all the TCP/IP traffic taking place between the Internet and the applications that are accessing the Internet. A: One would think Wireshark would solve your problem -- no hassle install and pretty easy to use. Edit: Ah, I see now the interception requirement vs. just sniffing.. in this case Wireshark alone won't cut it. Probably whatever's the equivalent of iptables on windows would. A: The DSNIFF package has the mailsnarf utility. It can grab POP3 too. There are all sorts of other wonderful sniffing utilities there. Make sure you have the legal right before using these tools (the legal right to intercept other peoples traffic). I beleive the documentation has more information on the legality. According to the web page there are Windows and Mac OS X ports too. It would not be too hard to analyze the text output of the program. A: Ilkka: I was looking at Wireshark, but from what I could tell, that didn't handle the interception aspect -- only the sniffing and logging. The thing the professor's looking for is to prevent the spams from getting out onto the network. Adam: I'll definitely look into Winsock. I haven't checked that out yet. Only thing is the app's due in about 2 months, so if there are any OS apps that build off the WinSock SPI, I might want to tie into those. Know of any off the top of your head? A: Thanks, CDV. I'll look into that as well. Good call about the legality check. I've actually been trying to use gnu public license projects so far. A: I agree that Wireshark might be all you need. If you want to write your own filter application and can use Vista, then check out the Windows Filtering Platform. A: tcpdump if you need command line or something more visual like wireshark If you want to write something on your own use libpcap. A: Use Snort, stripped down, if this is a long-term thing. It's built to watch for particular packets flying by, examining payload where needed, recording data and launching alerts. It's intended for intrusion detection, but it makes a surprisingly good network monitor for particular things over long term use.
Best OS App for Outbound SMTP Packet Capture?
Okay, so this probably sounds terribly nefarious, but I need such capabilities for my senior project. Essentially I'm tasked with writing something that will cut down outbound spam on a zombified pc through a system of packet interception and evaluation. We have a number of algorithms we'll use on the captured messages, but it's the actual capture -- full on interception rather than just sniffing -- that has me a bit stumped. The app is being designed for windows, so I can't use IP tables. I could use the winpcap libraries, but I don't want to reinvent the wheel if I don't have to. Ettercap seemed a good option, but a test run on vista using the unofficial binaries resulted in nothing but crashes. So, any suggestions? Update: Great suggestions. Ended up scaling back the project a bit, but still received an A. I'm thinking Adam Mintz's answer is probably best, though we used WinPcap and Wireshark for the application.
[ "Sounds like you need to write a Winsock LSP.\n\nOnce in the stack, a Layered Service Provider can intercept and modify inbound and outbound Internet traffic. It allows processing all the TCP/IP traffic taking place between the Internet and the applications that are accessing the Internet.\n\n", "One would think Wireshark would solve your problem -- no hassle install and pretty easy to use.\nEdit: Ah, I see now the interception requirement vs. just sniffing.. in this case Wireshark alone won't cut it. Probably whatever's the equivalent of iptables on windows would.\n", "The DSNIFF package has the mailsnarf utility. It can grab POP3 too. There are all sorts of other wonderful sniffing utilities there. Make sure you have the legal right before using these tools (the legal right to intercept other peoples traffic). I beleive the documentation has more information on the legality. According to the web page there are Windows and Mac OS X ports too.\nIt would not be too hard to analyze the text output of the program.\n", "Ilkka: I was looking at Wireshark, but from what I could tell, that didn't handle the interception aspect -- only the sniffing and logging. The thing the professor's looking for is to prevent the spams from getting out onto the network.\nAdam: I'll definitely look into Winsock. I haven't checked that out yet. Only thing is the app's due in about 2 months, so if there are any OS apps that build off the WinSock SPI, I might want to tie into those. Know of any off the top of your head? \n", "Thanks, CDV. I'll look into that as well. Good call about the legality check. I've actually been trying to use gnu public license projects so far.\n", "I agree that Wireshark might be all you need. If you want to write your own filter application and can use Vista, then check out the Windows Filtering Platform. \n", "tcpdump if you need command line or something more visual like wireshark\nIf you want to write something on your own use libpcap.\n", "Use Snort, stripped down, if this is a long-term thing. It's built to watch for particular packets flying by, examining payload where needed, recording data and launching alerts.\nIt's intended for intrusion detection, but it makes a surprisingly good network monitor for particular things over long term use.\n" ]
[ 2, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "c++", "packet_capture", "smtp", "spam_prevention", "windows" ]
stackoverflow_0000080341_c++_packet_capture_smtp_spam_prevention_windows.txt
Q: Is it possible to add behavior to a non-dynamic ActionScript 3 class without inheriting the class? What I'd like to do is something like the following: FooClass.prototype.method = function():String { return "Something"; } var foo:FooClass = new FooClass(); foo.method(); Which is to say, I'd like to extend a generated class with a single method, not via inheritance but via the prototype. The class is generated from a WSDL, it's not a dynamic class, and I don't want to touch the generated code because it will be overwritten anyway. Long story short, I'd like to have the moral equivalent of C# 3:s Extension Methods for AS3. Edit: I accepted aib's answer, because it fits what I was asking best -- although upon further reflection it doesn't really solve my problem, but that's my fault for asking the wrong question. :) Also, upmods for the good suggestions. A: Yes, such a thing is possible. In fact, your example is very close to the solution. Try foo["method"](); instead of foo.method(); A: @Theo: How would you explain the following working in 3.0.0.477 with the default flex-config.xml (<strict>true</strict>) and even a -compiler.strict parameter passed to mxmlc? Foo.as: package { public class Foo { public var foo:String; public function Foo() { foo = "foo!"; } } } footest.as: package { import flash.display.Sprite; public class footest extends Sprite { public function footest() { Foo.prototype.method = function():String { return "Something"; } var foo:Foo = new Foo(); trace(foo["method"]()); } } } Note that the OP said inheritance was unacceptable, as was modifying the generated code. (If that weren't the case, adding "dynamic" to the class definition would probably be the easiest solution.) A: Depending on how many methods your class has, this may work: Actual Class: public class SampleClass { public function SampleClass() { } public function method1():void { Alert.show("Hi"); } Quick Wrapper: var actualClass:SampleClass = new SampleClass(); var QuickWrapper:Object = { ref: actualClass, method1: function():void { this.ref.method1(); }, method2: function():void { Alert.show("Hello!"); } }; QuickWrapper.method1(); QuickWrapper.method2(); A: @aib is unfortunately incorrect. Assuming strict mode (the default compiler mode) it is not possible to modify the prototype of non-dynamic class types in ActionScript 3. I'm not even sure that it's possible in non-strict mode. Is wrapping an option? Basically you create a class that takes one of the objects you get from the web service and just forwards all method calls to that, but also has methods of its own: public class FooWrapper extends Foo { private var wrappedFoo : Foo; public function FooWrapper( foo : Foo ) { wrappedFoo = foo; } override public function methodFromFoo( ) : void { wrappedFoo.methodFromFoo(); } override public function anotherMethodFromFoo( ) : void { wrappedFoo.anotherMethodFromFoo(); } public function newMethodNotOnFoo( ) : String { return "Hello world!" } } When you want to work with a Foo, but also have the extra method you need you wrap the Foo instance in a FooWrapper and work with that object instead. It's not the most convenient solution, there's a lot of typing and if the generated code changes you have to change the FooWrapper class by hand, but unless you can modify the generated code either to include the method you want or to make the class dynamic I don't see how it can be done. Another solution is to add a step to your build process that modifies the source of the generated classes. 
I assume that you already have a step that generates the code from a WSDL, so what you could do is to add a step after that that inserts the methods you need. A: Monkey patching is an (inelegant) option. For example, suppose you don't like the fact that Flex 3 SpriteAsset.as returns a default border metrics of [7,7,7,7] (unlike flex 2). To fix this, you can: Create a copy of SpriteAsset.as and add it to your project at /mx/core/SpriteAsset.as Edit your local copy to fix any problems you find Run your ap Google "flex monkey patch" for more examples and instructions.
Is it possible to add behavior to a non-dynamic ActionScript 3 class without inheriting the class?
What I'd like to do is something like the following: FooClass.prototype.method = function():String { return "Something"; } var foo:FooClass = new FooClass(); foo.method(); Which is to say, I'd like to extend a generated class with a single method, not via inheritance but via the prototype. The class is generated from a WSDL, it's not a dynamic class, and I don't want to touch the generated code because it will be overwritten anyway. Long story short, I'd like to have the moral equivalent of C# 3:s Extension Methods for AS3. Edit: I accepted aib's answer, because it fits what I was asking best -- although upon further reflection it doesn't really solve my problem, but that's my fault for asking the wrong question. :) Also, upmods for the good suggestions.
[ "Yes, such a thing is possible.\nIn fact, your example is very close to the solution.\nTry\nfoo[\"method\"]();\n\ninstead of\nfoo.method();\n\n", "@Theo: How would you explain the following working in 3.0.0.477 with the default flex-config.xml (<strict>true</strict>) and even a -compiler.strict parameter passed to mxmlc?\nFoo.as:\npackage\n{\n public class Foo\n {\n public var foo:String;\n\n public function Foo()\n {\n foo = \"foo!\";\n }\n }\n}\n\nfootest.as:\npackage\n{\n import flash.display.Sprite;\n\n public class footest extends Sprite\n {\n public function footest()\n {\n Foo.prototype.method = function():String\n {\n return \"Something\";\n }\n\n var foo:Foo = new Foo();\n trace(foo[\"method\"]());\n }\n }\n}\n\nNote that the OP said inheritance was unacceptable, as was modifying the generated code. (If that weren't the case, adding \"dynamic\" to the class definition would probably be the easiest solution.)\n", "Depending on how many methods your class has, this may work:\nActual Class:\npublic class SampleClass\n{\n public function SampleClass()\n {\n }\n\n public function method1():void {\n Alert.show(\"Hi\");\n }\n\nQuick Wrapper:\nvar actualClass:SampleClass = new SampleClass();\n\nvar QuickWrapper:Object = {\n ref: actualClass,\n method1: function():void {\n this.ref.method1();\n },\n method2: function():void {\n Alert.show(\"Hello!\");\n } \n};\n\nQuickWrapper.method1();\nQuickWrapper.method2();\n\n", "@aib is unfortunately incorrect. Assuming strict mode (the default compiler mode) it is not possible to modify the prototype of non-dynamic class types in ActionScript 3. I'm not even sure that it's possible in non-strict mode.\nIs wrapping an option? Basically you create a class that takes one of the objects you get from the web service and just forwards all method calls to that, but also has methods of its own:\npublic class FooWrapper extends Foo {\n\n private var wrappedFoo : Foo;\n\n public function FooWrapper( foo : Foo ) {\n wrappedFoo = foo;\n }\n\n override public function methodFromFoo( ) : void {\n wrappedFoo.methodFromFoo();\n }\n\n override public function anotherMethodFromFoo( ) : void {\n wrappedFoo.anotherMethodFromFoo();\n }\n\n public function newMethodNotOnFoo( ) : String {\n return \"Hello world!\"\n }\n\n}\n\nWhen you want to work with a Foo, but also have the extra method you need you wrap the Foo instance in a FooWrapper and work with that object instead.\nIt's not the most convenient solution, there's a lot of typing and if the generated code changes you have to change the FooWrapper class by hand, but unless you can modify the generated code either to include the method you want or to make the class dynamic I don't see how it can be done.\nAnother solution is to add a step to your build process that modifies the source of the generated classes. I assume that you already have a step that generates the code from a WSDL, so what you could do is to add a step after that that inserts the methods you need.\n", "Monkey patching is an (inelegant) option.\nFor example, suppose you don't like the fact that Flex 3 SpriteAsset.as returns a default border metrics of [7,7,7,7] (unlike flex 2). To fix this, you can:\n\nCreate a copy of SpriteAsset.as and add it to your project at /mx/core/SpriteAsset.as\nEdit your local copy to fix any problems you find\nRun your ap\n\nGoogle \"flex monkey patch\" for more examples and instructions.\n" ]
[ 3, 3, 2, 1, 1 ]
[]
[]
[ "actionscript_3", "apache_flex", "extension_methods" ]
stackoverflow_0000007212_actionscript_3_apache_flex_extension_methods.txt
Q: Web Scripting for Java What is a good way to render data produced by a Java process in the browser? I've made extensive use of JSP and the various associated frameworks (JSTL, Struts, Tapestry, etc), as well as more comprehensive frameworks not related to JSP (GWT, OpenLaszlo). None of the solutions have ever been entirely satisfactory - in most cases the framework is too constrained or too complex for my needs, while others would require extensive refactoring of existing code. Additionally, most frameworks seem to have performance problems. Currently I'm leaning towards the solution of exposing my java data via a simple servlet that returns JSON, and then rendering the data using PHP or Ruby. This has the added benefit of instantly exposing my service as a web service as well, but I'm wondering if I'm reinventing the wheel here. A: I personally use Tapestry 5 for creating webpages with Java, but I agree that it can sometimes be a bit overkill. I would look into using JAX-RS (java.net project, jsr311) it is pretty simple to use, it supports marshalling and unmarshalling objects to/from XML out of the box. It is possible to extend it to support JSON via Jettison. There are two implementations that I have tried: Jersey - the reference implementation for JAX-RS. Resteasy - the implementation I prefer, good support for marshalling and unmarshalling a wide-range of formats. Also pretty stable and has more features that Jersey. Take a look at the following code to get a feeling for what JAX-RS can do for you: @Path("/") class TestClass { @GET @Path("text") @Produces("text/plain") String getText() { return "String value"; } } This tiny class will expose itself at the root of the server (@Path on the class), then expose the getText() method at the URI /text and allow access to it via HTTP GET. The @Produces annotation tells the JAX-RS framework to attempt to turn the result of the method into plain text. The easiest way to learn about what is possible with JAX-RS is to read the specification. A: We're using Stripes. It gives you more structure than straight servlets, but it lets you control your urls through a @UrlBinding annotation. We use it to stream xml and json back to the browser for ajax stuff. You could easily consume it with another technology if you wanted to go that route, but you may actually enjoy developing with stripes. A: Check out Restlet for a good framework for exposing your domain model as REST services (including JSON and trivial XML output). For rendering your info, maybe you can use GWT on the client side and consume your data services? If GWT doesn't float your boat, then maybe JQuery would? A: Perhaps you could generate the data as XML and render it using XSLT? I'm not sure PHP or Ruby are the answer if Java isn't fast enough for you!
Web Scripting for Java
What is a good way to render data produced by a Java process in the browser? I've made extensive use of JSP and the various associated frameworks (JSTL, Struts, Tapestry, etc), as well as more comprehensive frameworks not related to JSP (GWT, OpenLaszlo). None of the solutions have ever been entirely satisfactory - in most cases the framework is too constrained or too complex for my needs, while others would require extensive refactoring of existing code. Additionally, most frameworks seem to have performance problems. Currently I'm leaning towards the solution of exposing my java data via a simple servlet that returns JSON, and then rendering the data using PHP or Ruby. This has the added benefit of instantly exposing my service as a web service as well, but I'm wondering if I'm reinventing the wheel here.
[ "I personally use Tapestry 5 for creating webpages with Java, but I agree that it can sometimes be a bit overkill. I would look into using JAX-RS (java.net project, jsr311) it is pretty simple to use, it supports marshalling and unmarshalling objects to/from XML out of the box. It is possible to extend it to support JSON via Jettison.\nThere are two implementations that I have tried:\n\nJersey - the reference implementation for JAX-RS.\nResteasy - the implementation I prefer, good support for marshalling and unmarshalling a wide-range of formats. Also pretty stable and has more features that Jersey.\n\nTake a look at the following code to get a feeling for what JAX-RS can do for you:\n@Path(\"/\")\nclass TestClass {\n @GET\n @Path(\"text\")\n @Produces(\"text/plain\")\n String getText() {\n return \"String value\";\n }\n}\n\nThis tiny class will expose itself at the root of the server (@Path on the class), then expose the getText() method at the URI /text and allow access to it via HTTP GET. The @Produces annotation tells the JAX-RS framework to attempt to turn the result of the method into plain text.\nThe easiest way to learn about what is possible with JAX-RS is to read the specification.\n", "We're using Stripes. It gives you more structure than straight servlets, but it lets you control your urls through a @UrlBinding annotation. We use it to stream xml and json back to the browser for ajax stuff. \nYou could easily consume it with another technology if you wanted to go that route, but you may actually enjoy developing with stripes.\n", "Check out Restlet for a good framework for exposing your domain model as REST services (including JSON and trivial XML output). \nFor rendering your info, maybe you can use GWT on the client side and consume your data services? If GWT doesn't float your boat, then maybe JQuery would?\n", "Perhaps you could generate the data as XML and render it using XSLT?\nI'm not sure PHP or Ruby are the answer if Java isn't fast enough for you!\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "java", "jsp", "scripting" ]
stackoverflow_0000084149_java_jsp_scripting.txt
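The approach the question above leans toward, a plain servlet that returns JSON, takes very little code. Here is a rough sketch using the standard Servlet API; the servlet still has to be mapped to a URL in web.xml (or with @WebServlet on Servlet 3.0+), and the hand-built JSON string stands in for whatever a real JSON library (Jackson, or the Jettison support mentioned in the first answer) would produce from your domain objects.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Exposes data from the Java process as JSON so any front end
// (PHP, Ruby, or plain JavaScript in the browser) can render it.
public class DataServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("application/json");
        resp.setCharacterEncoding("UTF-8");
        // In a real service this string would be produced by a JSON library
        // from the objects the Java process already holds.
        resp.getWriter().write("{\"status\": \"ok\", \"items\": [1, 2, 3]}");
    }
}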
Q: Is there an equivalent to Java's Robot class (java.awt.Robot) for Perl? Is there an equivalent to Java's Robot class (java.awt.Robot) for Perl? A: Alternatively, you can surely use the WWW::Mechanize module to create an agent as we do here at work. We have a tool called AppMon that is really just a dramatized wrapper around Mechanize. The Mechanize module allows you to use scripts that look a lot like this: use WWW::Mechanize; my $Agent = WWW::Mechanize->new(cookie_jar => {}); $Agent->get("http://www.google.com/search?q=stack+overflow+mechanize"); print "Found Mechanize" $Agent->content =~ /WWW::Mechanize/; and will result in "Found Mechanize" being output. This is a very simple script, but rest assured you can interact with forms quite well. You can also move to Ruby and use Watir, or Selenium as another alternative, albeit not as interesting (in terms of coding) or automate-able. Selenium has a firefox extension that is quite useful for creating the selenium scripts and can change them between the various languages that it supports, which is pretty extensive in terms of automation. Update - Nov 2016 Although I haven't had much of an opportunity to play with it, there are also webdriver packages for most languages, and Perl is no different. Selenium::Remote::Driver A: If you're looking for a way to control a browser for the purpose of functional testing, Selenium has Perl bindings: http://selenium.openqa.org/ A: For X (Linux/Unix), there's X11::GUITest. For Windows, there's Win32::CtrlGUI, although it can be a bit tricky to install its prerequisites. A: On Windows, I've always used Win32::GuiTest. A: There is on Linux/Unix: http://sourceforge.net/projects/x11guitest I'm not familiar of anything similar for Windows or Mac that uses Perl.
Is there an equivalent to Java's Robot class (java.awt.Robot) for Perl?
Is there an equivalent to Java's Robot class (java.awt.Robot) for Perl?
[ "Alternatively, you can surely use the WWW::Mechanize module to create an agent as we do here at work. We have a tool called AppMon that is really just a dramatized wrapper around Mechanize. \nThe Mechanize module allows you to use scripts that look a lot like this: \nuse WWW::Mechanize;\n\nmy $Agent = WWW::Mechanize->new(cookie_jar => {});\n\n$Agent->get(\"http://www.google.com/search?q=stack+overflow+mechanize\");\nprint \"Found Mechanize\" $Agent->content =~ /WWW::Mechanize/;\n\nand will result in \"Found Mechanize\" being output. This is a very simple script, but rest assured you can interact with forms quite well.\nYou can also move to Ruby and use Watir, or Selenium as another alternative, albeit not as interesting (in terms of coding) or automate-able. Selenium has a firefox extension that is quite useful for creating the selenium scripts and can change them between the various languages that it supports, which is pretty extensive in terms of automation.\nUpdate - Nov 2016\nAlthough I haven't had much of an opportunity to play with it, there are also webdriver packages for most languages, and Perl is no different. \nSelenium::Remote::Driver\n", "If you're looking for a way to control a browser for the purpose of functional testing, Selenium has Perl bindings: http://selenium.openqa.org/\n", "For X (Linux/Unix), there's X11::GUITest.\nFor Windows, there's Win32::CtrlGUI, although it can be a bit tricky to install its prerequisites.\n", "On Windows, I've always used Win32::GuiTest.\n", "There is on Linux/Unix:\nhttp://sourceforge.net/projects/x11guitest\nI'm not familiar of anything similar for Windows or Mac that uses Perl.\n" ]
[ 6, 4, 3, 2, 1 ]
[]
[]
[ "automated_tests", "awtrobot", "java", "perl" ]
stackoverflow_0000079935_automated_tests_awtrobot_java_perl.txt
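For readers who have not used it, the java.awt.Robot behaviour the question wants to reproduce from Perl looks roughly like this; the coordinates and key code are arbitrary, and BUTTON1_MASK is the older constant (newer JDKs prefer InputEvent.BUTTON1_DOWN_MASK). The Perl modules named in the answers (X11::GUITest, Win32::GuiTest) cover the same ground of synthesizing keyboard and mouse events.

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class RobotDemo {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        robot.setAutoDelay(50);                      // pause 50 ms after each event
        robot.mouseMove(200, 300);                   // move the pointer to screen coordinates
        robot.mousePress(InputEvent.BUTTON1_MASK);   // click the left button
        robot.mouseRelease(InputEvent.BUTTON1_MASK);
        robot.keyPress(KeyEvent.VK_H);               // type the letter "h"
        robot.keyRelease(KeyEvent.VK_H);
    }
}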
Q: What is the best way to load a Hibernate object graph before using it in a UI? The situation is this: You have a Hibernate context with an object graph that has some lazy loading defined. You want to use the Hibernate objects in your UI as is without having to copy the data somewhere. There are different UI contexts that require different amounts of data. The data is too big to just eager load the whole graph each time. What is the best means to load all the appropriate objects in the object graph in a configurable way so that they can be accessed without having to go back to the database to load more data? Any help. A: Let's say you have the Client and at one point you have to something with his Orders and maybe he has a Bonus for his Orders. Then I would define a Repository with a fluent interface that will allow me to say something like : new ClientRepo().LoadClientBy(id) .WithOrders() .WithBonus() .OrderByName(); And there you have the client with everything you need. It's preferably that you know in advance what you will need for the current operation. This way you can avoid unwanted trips to the database.(new devs in your team will usually do this - call a property and not be aware of the fact that it's actually a call to the DB) A: If it's a webapp and you're using Spring, then OpenSessionInViewFilter could be the solution to your problems. A: An approach we use in our projects is to create a service for each view you have. Then the view fetches the sub-graph you need for this specific view, always trying to reduce the number of sqls send to the database. Therefore we are using a lot of joins to get the n:1 associated objects. If you are using a 2-tier desktop app directly connected to the DB you can just leave the objects attached and load additional data anytime automatically. Otherwise you have to reattach it to the session and initialize the association you need with Hibernate.initialize(Object entity, String propertyName) (Out of memory, maybe not 100% correct)
What is the best way to load a Hibernate object graph before using it in a UI?
The situation is this: You have a Hibernate context with an object graph that has some lazy loading defined. You want to use the Hibernate objects in your UI as-is, without having to copy the data somewhere. There are different UI contexts that require different amounts of data. The data is too big to just eager load the whole graph each time. What is the best way to load all the appropriate objects in the object graph, in a configurable way, so that they can be accessed without having to go back to the database to load more data? Any help would be appreciated.
[ "Let's say you have the Client and at one point you have to something with his Orders and maybe he has a Bonus for his Orders. \nThen I would define a Repository with a fluent interface that will allow me to say something like :\nnew ClientRepo().LoadClientBy(id)\n .WithOrders()\n .WithBonus()\n .OrderByName();\n\nAnd there you have the client with everything you need. It's preferably that you know in advance what you will need for the current operation. This way you can avoid unwanted trips to the database.(new devs in your team will usually do this - call a property and not be aware of the fact that it's actually a call to the DB)\n", "If it's a webapp and you're using Spring, then OpenSessionInViewFilter could be the solution to your problems.\n", "An approach we use in our projects is to create a service for each view you have. Then the view fetches the sub-graph you need for this specific view, always trying to reduce the number of sqls send to the database. Therefore we are using a lot of joins to get the n:1 associated objects.\nIf you are using a 2-tier desktop app directly connected to the DB you can just leave the objects attached and load additional data anytime automatically. Otherwise you have to reattach it to the session and initialize the association you need with Hibernate.initialize(Object entity, String propertyName)\n(Out of memory, maybe not 100% correct)\n" ]
[ 2, 1, 1 ]
[]
[]
[ "hibernate", "java" ]
stackoverflow_0000079843_hibernate_java.txt
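To make the last suggestion above concrete: a common way to load a configurable sub-graph per view is to fetch-join the associations that view always needs and call Hibernate.initialize for the optional ones, all while the session is still open. The entity and association names below (Client, orders, bonus) are invented for the example and stand in for whatever your mapped classes actually are.

import org.hibernate.Hibernate;
import org.hibernate.Session;

public class ClientViewLoader {

    // Loads a Client with exactly the associations the "order view" needs,
    // so the object graph can be handed to the UI without further DB round trips.
    public Client loadForOrderView(Session session, Long clientId) {
        Client client = (Client) session.createQuery(
                "select distinct c from Client c left join fetch c.orders where c.id = :id")
                .setParameter("id", clientId)
                .uniqueResult();
        // Pull in an extra association only when this particular view needs it.
        Hibernate.initialize(client.getBonus());
        return client;
    }
}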
Q: Detect browser connection closed in PHP Does anyone know if it is possible to detect whether the browser has closed the connection during the execution of a long PHP script, when using apache and mod_php? For example, in Java, the HttpOutputStream will throw an exception if one attempts to write to it after the browser has closed it -- Or will respond negatively to checkError(). A: Use connection_aborted() A: In at least PHP4, connection_aborted and connection_status only worked after the script sent any output to the browser (using: flush() | ob_flush()). Also don't expect accurately timed results. It's mostly useful to check if there is still someone waiting on the other side. A: http://nz.php.net/register-shutdown-function Probably less complicated if you just want a script to die and handle it when a user terminates. ( Ie: if it was a lengthy search, this would save you a bunch of operation cycles )
Detect browser connection closed in PHP
Does anyone know if it is possible to detect whether the browser has closed the connection during the execution of a long PHP script, when using apache and mod_php? For example, in Java, the HttpOutputStream will throw an exception if one attempts to write to it after the browser has closed it -- Or will respond negatively to checkError().
[ "Use connection_aborted()\n", "In at least PHP4, connection_aborted and connection_status only worked after the script sent any output to the browser (using: flush() | ob_flush()).\nAlso don't expect accurately timed results. \nIt's mostly useful to check if there is still someone waiting on the other side.\n", "http://nz.php.net/register-shutdown-function \nProbably less complicated if you just want a script to die and handle it when a user terminates. \n( Ie: if it was a lengthy search, this would save you a bunch of operation cycles )\n" ]
[ 7, 2, 1 ]
[]
[]
[ "apache", "http", "mod_php", "php" ]
stackoverflow_0000086197_apache_http_mod_php_php.txt
Q: Attempting to update a user's "connect to:" home directory path in AD using C# I have a small application I am working on that at one point needs to update a user's home directory path in AD under the profile tab where it allows you to map a drive letter to a particular path. The code I have put together so far sets the Home Folder Local path portion OK, but I'm trying to figure out the name for the "connect" portion, as well as how to select the drive letter. Go easy on me, I'm new to C#. Thanks!! Here's my code that updates the Local path section. DirectoryEntry deUser = new DirectoryEntry(findMeinAD(tbPNUID.Text)); deUser.InvokeSet("HomeDirectory", tbPFolderVerification.Text); deUser.CommitChanges(); Where findMeinAD is a method that looks up a user's info in AD and tbPFolderVerification.Text is a text box in the form that contains the path I'd like to set a particular drive to map to. A: You may need to set the HomeDrive property as well: DirectoryEntry deUser = new DirectoryEntry(findMeinAD(tbPNUID.Text)); deUser.InvokeSet("HomeDirectory", tbPFolderVerification.Text); deUser.InvokeSet("HomeDrive", "Z:"); deUser.CommitChanges();
Attempting to update a user's "connect to:" home directory path in AD using C#
I have a small application I am working on that at one point needs to update a user's home directory path in AD under the profile tab where it allows you to map a drive letter to a particular path. The code I have put together so far sets the Home Folder Local path portion OK, but I'm trying to figure out the name for the "connect" portion, as well as how to select the drive letter. Go easy on me, I'm new to C#. Thanks!! Here's my code that updates the Local path section. DirectoryEntry deUser = new DirectoryEntry(findMeinAD(tbPNUID.Text)); deUser.InvokeSet("HomeDirectory", tbPFolderVerification.Text); deUser.CommitChanges(); Where findMeinAD is a method that looks up a user's info in AD and tbPFolderVerification.Text is a text box in the form that contains the path I'd like to set a particular drive to map to.
[ "You may need to set the HomeDrive property as well:\nDirectoryEntry deUser = new DirectoryEntry(findMeinAD(tbPNUID.Text));\ndeUser.InvokeSet(\"HomeDirectory\", tbPFolderVerification.Text);\ndeUser.InvokeSet(\"HomeDrive\", \"Z:\");\ndeUser.CommitChanges();\n\n" ]
[ 3 ]
[]
[]
[ "active_directory", "c#" ]
stackoverflow_0000087071_active_directory_c#.txt
Q: What's wrong with singleton? Do not waste your time with this question. Follow up to: What is so bad about singletons? Please feel free to bitch on Singleton. Inappropriate usage of Singleton may cause lot of paint. What kind of problem do you experienced with singleton? What is common misuse of this pattern? After some digging into Corey's answer I discovered some greate articles on this topic. Why Singletons Are Controversial Performant Singletons Singletons are Pathological Liars Where Have All the Singletons Gone? Root Cause of Singletons A: There's nothing inherently wrong with the Singleton pattern. It is a tool and sometimes it should be used. A: Sometimes it can make your code more tightly coupled with the singleton class being refrerenced directly by name from different parts of your codebase. So, for example, when you need to test some part of your code and it references a singleton from a diferent part of the code you cannot easily fake that dependency with a mock object. A: I think a more appropriate question might be: In what situations is the use of a SIngleton Pattern inappropriate? Or what have you seen that uses a Singleton that shouldn't. A: There's nothing wrong with a singleton in itself, and as a pattern it fills a vital role in recognising the need for certain objects to only be created a single time. What it is frequently used for is a euphemism for global variables as an attempt to get around global variable stigma, and it is this use that is inherently wrong. If a global variable happens to be the correct solution, using a singleton won't improve it. If it is (as is fairly common) incorrect to use a global variable, wrapping it in a singleton won't make it any more correct. A: I haven't been exposed to the Singleton as much as some of the other posters have, but nearly all implementations that I have seen (in C#) could have been achieved with static classes/methods. I suppose you could argue that a static class is an implementation of the singleton pattern, but that's not what I've been seeing. I've been seeing people build up and manage these Singleton classes/objects when all they really needed was to use the static keyword. So, I wouldn't say the Singleton pattern is bad. I'd say it's kinda like guns. I don't think guns are bad, but they can most certainly can be used inappropriately. A: Basically singleton is a way to have static data and pretend it is not really static. Of course I use it, but try to not abuse it. A: One basic problem with the original GoF design is the fact that the destructor isn't protected. Anyone with a reference to the singleton instance is free to destroy the singleton. See John Vlissides update "To Kill A Singleton" in his book "Pattern Hatching" (Amazon link). cheers, Rob
What's wrong with singleton?
Do not waste your time with this question. Follow up to: What is so bad about singletons? Please feel free to bitch about Singleton. Inappropriate usage of Singleton may cause a lot of pain. What kinds of problems have you experienced with singletons? What are common misuses of this pattern? After some digging into Corey's answer I discovered some great articles on this topic. Why Singletons Are Controversial Performant Singletons Singletons are Pathological Liars Where Have All the Singletons Gone? Root Cause of Singletons
[ "There's nothing inherently wrong with the Singleton pattern. It is a tool and sometimes it should be used.\n", "Sometimes it can make your code more tightly coupled with the singleton class being refrerenced directly by name from different parts of your codebase. So, for example, when you need to test some part of your code and it references a singleton from a diferent part of the code you cannot easily fake that dependency with a mock object. \n", "I think a more appropriate question might be: In what situations is the use of a SIngleton Pattern inappropriate? Or what have you seen that uses a Singleton that shouldn't.\n", "There's nothing wrong with a singleton in itself, and as a pattern it fills a vital role in recognising the need for certain objects to only be created a single time.\nWhat it is frequently used for is a euphemism for global variables as an attempt to get around global variable stigma, and it is this use that is inherently wrong. If a global variable happens to be the correct solution, using a singleton won't improve it. If it is (as is fairly common) incorrect to use a global variable, wrapping it in a singleton won't make it any more correct.\n", "I haven't been exposed to the Singleton as much as some of the other posters have, but nearly all implementations that I have seen (in C#) could have been achieved with static classes/methods. I suppose you could argue that a static class is an implementation of the singleton pattern, but that's not what I've been seeing. I've been seeing people build up and manage these Singleton classes/objects when all they really needed was to use the static keyword. \nSo, I wouldn't say the Singleton pattern is bad. I'd say it's kinda like guns. I don't think guns are bad, but they can most certainly can be used inappropriately.\n", "Basically singleton is a way to have static data and pretend it is not really static.\nOf course I use it, but try to not abuse it. \n", "One basic problem with the original GoF design is the fact that the destructor isn't protected. Anyone with a reference to the singleton instance is free to destroy the singleton.\nSee John Vlissides update \"To Kill A Singleton\" in his book \"Pattern Hatching\" (Amazon link).\ncheers,\nRob\n" ]
[ 3, 3, 1, 1, 1, 0, 0 ]
[ "Most of the singleton patterns that I see written aren't written in a thread safe manner. If written correctly, they can be useful.\n" ]
[ -1 ]
[ "design_patterns", "language_agnostic", "oop", "singleton" ]
stackoverflow_0000086654_design_patterns_language_agnostic_oop_singleton.txt
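On the thread-safety worry raised in the comment above: in Java the simplest fixes are the initialization-on-demand holder idiom shown below, or declaring the singleton as a single-element enum; both give lazy, thread-safe construction without explicit synchronization. The Config name is only an example.

// Initialization-on-demand holder idiom: the nested class is not loaded
// until getInstance() is first called, and class initialization is
// guaranteed by the JVM to be thread safe, so no locking is needed.
public final class Config {

    private Config() { }

    private static class Holder {
        private static final Config INSTANCE = new Config();
    }

    public static Config getInstance() {
        return Holder.INSTANCE;
    }
}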
Q: How do you manage "pick lists" in a database I have an application with multiple "pick list" entities, such as used to populate choices of dropdown selection boxes. These entities need to be stored in the database. How do one persist these entities in the database? Should I create a new table for each pick list? Is there a better solution? A: In the past I've created a table that has the Name of the list and the acceptable values, then queried it to display the list. I also include a underlying value, so you can return a display value for the list, and a bound value that may be much uglier (a small int for normalized data, for instance) CREATE TABLE PickList( ListName varchar(15), Value varchar(15), Display varchar(15), Primary Key (ListName, Display) ) You could also add a sortOrder field if you want to manually define the order to display them in. A: It depends on various things: if they are immutable and non relational (think "names of US States") an argument could be made that they should not be in the database at all: after all they are simply formatting of something simpler (like the two character code assigned). This has the added advantage that you don't need a round trip to the db to fetch something that never changes in order to populate the combo box. You can then use an Enum in code and a constraint in the DB. In case of localized display, so you need a different formatting for each culture, then you can use XML files or other resources to store the literals. if they are relational (think "states - capitals") I am not very convinced either way... but lately I've been using XML files, database constraints and javascript to populate. It works quite well and it's easy on the DB. if they are not read-only but rarely change (i.e. typically cannot be changed by the end user but only by some editor or daily batch), then I would still consider the opportunity of not storing them in the DB... it would depend on the particular case. in other cases, storing in the DB is the way (think of the tags of StackOverflow... they are "lookup" but can also be changed by the end user) -- possibly with some caching if needed. It requires some careful locking, but it would work well enough. A: Well, you could do something like this: PickListContent IdList IdPick Text 1 1 Apples 1 2 Oranges 1 3 Pears 2 1 Dogs 2 2 Cats and optionally.. PickList Id Description 1 Fruit 2 Pets A: I've found that creating individual tables is the best idea. I've been down the road of trying to create one master table of all pick lists and then filtering out based on type. While it works, it has invariably created headaches down the line. For example you may find that something you presumed to be a simple pick list is not so simple and requires an extra field, do you now split this data into an additional table or extend you master list? From a database perspective, having individual tables makes it much easier to manage your relational integrity and it makes it easier to interpret the data in the database when you're not using the application A: We have followed the pattern of a new table for each pick list. For example: Table FRUIT has columns ID, NAME, and DESCRIPTION. Values might include: 15000, Apple, Red fruit 15001, Banana, yellow and yummy ... If you have a need to reference FRUIT in another table, you would call the column FRUIT_ID and reference the ID value of the row in the FRUIT table. A: Create one table for lists and one table for list_options. 
# Put in the name of the list insert into lists (id, name) values (1, "Country in North America"); # Put in the values of the list insert into list_options (id, list_id, value_text) values (1, 1, "Canada"), (2, 1, "United States of America"), (3, 1, "Mexico"); A: Two tables. If you try to cram everything into one table then you break normalization (if you care about that). Here are examples: LIST --------------- LIST_ID (PK) NAME DESCR LIST_OPTION ---------------------------- LIST_OPTION_ID (PK) LIST_ID (FK) OPTION_NAME OPTION_VALUE MANUAL_SORT The list table simply describes a pick list. The list_ option table describes each option in a given list. So your queries will always start with knowing which pick list you'd like to populate (either by name or ID) which you join to the list_ option table to pull all the options. The manual_sort column is there just in case you want to enforce a particular order other than by name or value. (BTW, whenever I try to post the words "list" and "option" connected with an underscore, the preview window goes a little wacky. That's why I put a space there.) The query would look something like: select b.option_name, b.option_value from list a, list_option b where a.name="States" and a.list_id = b.list_id order by b.manual_sort asc You'll also want to create an index on list.name if you think you'll ever use it in a where clause. The pk and fk columns will typically automatically be indexed. And please don't create a new table for each pick list unless you're putting in "relationally relevant" data that will be used elsewhere by the app. You'd be circumventing exactly the relational functionality that a database provides. You'd be better off statically defining pick lists as constants somewhere in a base class or a properties file (your choice on how to model the name-value pair). A: To answer the second question first: yes, I would create a separate table for each pick list in most cases. Especially if they are for completely different types of values (e.g. states and cities). The general table format I use is as follows: id - identity or UUID field (I actually call the field xxx_id where xxx is the name of the table). name - display name of the item display_order - small int of order to display. Default this value to something greater than 1 If you want you could add a separate 'value' field but I just usually use the id field as the select box value. I generally use a select that orders first by display order, then by name, so you can order something alphabetically while still adding your own exceptions. For example, let's say you have a list of countries that you want in alpha order but have the US first and Canada second you could say "SELECT id, name FROM theTable ORDER BY display_order, name" and set the display_order value for the US as 1, Canada as 2 and all other countries as 9. You can get fancier, such as having an 'active' flag so you can activate or deactivate options, or setting a 'x_type' field so you can group options, description column for use in tooltips, etc. But the basic table works well for most circumstances. A: Depending on your needs, you can just have an options table that has a list identifier and a list value as the primary key. select optionDesc from Options where 'MyList' = optionList You can then extend it with an order column, etc. If you have an ID field, that is how you can reference your answers back... of if it is often changing, you can just copy the answer value to the answer table. 
A: If you don't mind using strings for the actual values, you can simply give each list a different list_id in value and populate a single table with : item_id: int list_id: int text: varchar(50) Seems easiest unless you need multiple things per list item A: We actually created entities to handle simple pick lists. We created a Lookup table, that holds all the available pick lists, and a LookupValue table that contains all the name/value records for the Lookup. Works great for us when we need it to be simple. A: I've done this in two different ways: 1) unique tables per list 2) a master table for the list, with views to give specific ones I tend to prefer the initial option as it makes updating lists easier (at least in my opinion). A: Try turning the question around. Why do you need to pull it from the database? Isn't the data part of your model but you really want to persist it in the database? You could use an OR mapper like linq2sql or nhibernate (assuming you're in the .net world) or depending on the data you could store it manually in a table each - there are situations where it would make good sense to put it all in the same table but do consider this only if you feel it makes really good sense. Normally putting different data in different tables makes it a lot easier to (later) understand what is going on. A: There are several approaches here. 1) Create one table per pick list. Each of the tables would have the ID and Name columns; the value that was picked by the user would be stored based on the ID of the item that was selected. 2) Create a single table with all pick lists. Columns: ID; list ID (or list type); Name. When you need to populate a list, do a query "select all items where list ID = ...". Advantage of this approach: really easy to add pick lists; disadvantage: a little more difficult to write group-by style queries (for example, give me the number of records that picked value X". I personally prefer option 1, it seems "cleaner" to me. A: You can use either a separate table for each (my preferred), or a common picklist table that has a type column you can use to filter on from your application. I'm not sure that one has a great benefit over the other generally speaking. If you have more than 25 or so, organizationally it might be easier to use the single table solution so you don't have several picklist tables cluttering up your database. Performance might be a hair better using separate tables for each if your lists are very long, but this is probably negligible provided your indexes and such are set up properly. I like using separate tables so that if something changes in a picklist - it needs and additional attribute for instance - you can change just that picklist table with little effect on the rest of your schema. In the single table solution, you will either have to denormalize your picklist data, pull that picklist out into a separate table, etc. Constraints are also easier to enforce in the separate table solution. A: This has served us well: SQL> desc aux_values; Name Type ----------------------------------------- ------------ VARIABLE_ID VARCHAR2(20) VALUE_SEQ NUMBER DESCRIPTION VARCHAR2(80) INTEGER_VALUE NUMBER CHAR_VALUE VARCHAR2(40) FLOAT_VALUE FLOAT(126) ACTIVE_FLAG VARCHAR2(1) The "Variable ID" indicates the kind of data, like "Customer Status" or "Defect Code" or whatever you need. Then you have several entries, each one with the appropriate data type column filled in. So for a status, you'd have several entries with the "CHAR_VALUE" filled in.
How do you manage "pick lists" in a database
I have an application with multiple "pick list" entities, such as those used to populate choices of dropdown selection boxes. These entities need to be stored in the database. How does one persist these entities in the database? Should I create a new table for each pick list? Is there a better solution?
[ "In the past I've created a table that has the Name of the list and the acceptable values, then queried it to display the list. I also include a underlying value, so you can return a display value for the list, and a bound value that may be much uglier (a small int for normalized data, for instance)\nCREATE TABLE PickList(\n ListName varchar(15),\n Value varchar(15),\n Display varchar(15),\n Primary Key (ListName, Display)\n)\n\nYou could also add a sortOrder field if you want to manually define the order to display them in.\n", "It depends on various things:\n\nif they are immutable and non relational (think \"names of US States\") an argument could be made that they should not be in the database at all: after all they are simply formatting of something simpler (like the two character code assigned). This has the added advantage that you don't need a round trip to the db to fetch something that never changes in order to populate the combo box.\nYou can then use an Enum in code and a constraint in the DB. In case of localized display, so you need a different formatting for each culture, then you can use XML files or other resources to store the literals.\n\nif they are relational (think \"states - capitals\") I am not very convinced either way... but lately I've been using XML files, database constraints and javascript to populate. It works quite well and it's easy on the DB.\n\nif they are not read-only but rarely change (i.e. typically cannot be changed by the end user but only by some editor or daily batch), then I would still consider the opportunity of not storing them in the DB... it would depend on the particular case.\n\nin other cases, storing in the DB is the way (think of the tags of StackOverflow... they are \"lookup\" but can also be changed by the end user) -- possibly with some caching if needed. It requires some careful locking, but it would work well enough.\n\n\n", "Well, you could do something like this:\nPickListContent\n\nIdList IdPick Text \n1 1 Apples\n1 2 Oranges\n1 3 Pears\n2 1 Dogs\n2 2 Cats\n\nand optionally..\nPickList\n\nId Description\n1 Fruit\n2 Pets\n\n", "I've found that creating individual tables is the best idea.\nI've been down the road of trying to create one master table of all pick lists and then filtering out based on type. While it works, it has invariably created headaches down the line. For example you may find that something you presumed to be a simple pick list is not so simple and requires an extra field, do you now split this data into an additional table or extend you master list?\nFrom a database perspective, having individual tables makes it much easier to manage your relational integrity and it makes it easier to interpret the data in the database when you're not using the application\n", "We have followed the pattern of a new table for each pick list. For example:\nTable FRUIT has columns ID, NAME, and DESCRIPTION.\nValues might include:\n15000, Apple, Red fruit\n15001, Banana, yellow and yummy\n... \nIf you have a need to reference FRUIT in another table, you would call the column FRUIT_ID and reference the ID value of the row in the FRUIT table.\n", "Create one table for lists and one table for list_options.\n # Put in the name of the list\n insert into lists (id, name) values (1, \"Country in North America\");\n\n # Put in the values of the list\n insert into list_options (id, list_id, value_text) values\n (1, 1, \"Canada\"),\n (2, 1, \"United States of America\"),\n (3, 1, \"Mexico\");\n\n", "Two tables. 
If you try to cram everything into one table then you break normalization (if you care about that). Here are examples:\n\nLIST\n---------------\nLIST_ID (PK)\nNAME\nDESCR\n\n\nLIST_OPTION\n----------------------------\nLIST_OPTION_ID (PK)\nLIST_ID (FK)\nOPTION_NAME\nOPTION_VALUE\nMANUAL_SORT\n\nThe list table simply describes a pick list. The list_ option table describes each option in a given list. So your queries will always start with knowing which pick list you'd like to populate (either by name or ID) which you join to the list_ option table to pull all the options. The manual_sort column is there just in case you want to enforce a particular order other than by name or value. (BTW, whenever I try to post the words \"list\" and \"option\" connected with an underscore, the preview window goes a little wacky. That's why I put a space there.)\nThe query would look something like:\n\nselect\n b.option_name,\n b.option_value\nfrom\n list a,\n list_option b\nwhere\n a.name=\"States\"\nand\n a.list_id = b.list_id\norder by\n b.manual_sort asc\n\nYou'll also want to create an index on list.name if you think you'll ever use it in a where clause. The pk and fk columns will typically automatically be indexed. \nAnd please don't create a new table for each pick list unless you're putting in \"relationally relevant\" data that will be used elsewhere by the app. You'd be circumventing exactly the relational functionality that a database provides. You'd be better off statically defining pick lists as constants somewhere in a base class or a properties file (your choice on how to model the name-value pair).\n", "To answer the second question first: yes, I would create a separate table for each pick list in most cases. Especially if they are for completely different types of values (e.g. states and cities). The general table format I use is as follows:\nid - identity or UUID field (I actually call the field xxx_id where xxx is the name of the table). \nname - display name of the item \ndisplay_order - small int of order to display. Default this value to something greater than 1 \n\nIf you want you could add a separate 'value' field but I just usually use the id field as the select box value. \nI generally use a select that orders first by display order, then by name, so you can order something alphabetically while still adding your own exceptions. For example, let's say you have a list of countries that you want in alpha order but have the US first and Canada second you could say \"SELECT id, name FROM theTable ORDER BY display_order, name\" and set the display_order value for the US as 1, Canada as 2 and all other countries as 9.\nYou can get fancier, such as having an 'active' flag so you can activate or deactivate options, or setting a 'x_type' field so you can group options, description column for use in tooltips, etc. But the basic table works well for most circumstances.\n", "Depending on your needs, you can just have an options table that has a list identifier and a list value as the primary key.\nselect optionDesc from Options where 'MyList' = optionList\n\nYou can then extend it with an order column, etc. If you have an ID field, that is how you can reference your answers back... 
of if it is often changing, you can just copy the answer value to the answer table.\n", "If you don't mind using strings for the actual values, you can simply give each list a different list_id in value and populate a single table with :\nitem_id: int\nlist_id: int\ntext: varchar(50)\nSeems easiest unless you need multiple things per list item\n", "We actually created entities to handle simple pick lists. We created a Lookup table, that holds all the available pick lists, and a LookupValue table that contains all the name/value records for the Lookup. \nWorks great for us when we need it to be simple.\n", "I've done this in two different ways:\n1) unique tables per list\n2) a master table for the list, with views to give specific ones\nI tend to prefer the initial option as it makes updating lists easier (at least in my opinion). \n", "Try turning the question around. Why do you need to pull it from the database? Isn't the data part of your model but you really want to persist it in the database? You could use an OR mapper like linq2sql or nhibernate (assuming you're in the .net world) or depending on the data you could store it manually in a table each - there are situations where it would make good sense to put it all in the same table but do consider this only if you feel it makes really good sense. Normally putting different data in different tables makes it a lot easier to (later) understand what is going on.\n", "There are several approaches here. \n1) Create one table per pick list. Each of the tables would have the ID and Name columns; the value that was picked by the user would be stored based on the ID of the item that was selected. \n2) Create a single table with all pick lists. Columns: ID; list ID (or list type); Name. When you need to populate a list, do a query \"select all items where list ID = ...\". Advantage of this approach: really easy to add pick lists; disadvantage: a little more difficult to write group-by style queries (for example, give me the number of records that picked value X\".\nI personally prefer option 1, it seems \"cleaner\" to me. \n", "You can use either a separate table for each (my preferred), or a common picklist table that has a type column you can use to filter on from your application. I'm not sure that one has a great benefit over the other generally speaking. \nIf you have more than 25 or so, organizationally it might be easier to use the single table solution so you don't have several picklist tables cluttering up your database.\nPerformance might be a hair better using separate tables for each if your lists are very long, but this is probably negligible provided your indexes and such are set up properly. \nI like using separate tables so that if something changes in a picklist - it needs and additional attribute for instance - you can change just that picklist table with little effect on the rest of your schema. In the single table solution, you will either have to denormalize your picklist data, pull that picklist out into a separate table, etc. Constraints are also easier to enforce in the separate table solution.\n", "This has served us well:\nSQL> desc aux_values;\n Name Type\n ----------------------------------------- ------------\n VARIABLE_ID VARCHAR2(20)\n VALUE_SEQ NUMBER\n DESCRIPTION VARCHAR2(80)\n INTEGER_VALUE NUMBER\n CHAR_VALUE VARCHAR2(40)\n FLOAT_VALUE FLOAT(126)\n ACTIVE_FLAG VARCHAR2(1)\n\nThe \"Variable ID\" indicates the kind of data, like \"Customer Status\" or \"Defect Code\" or whatever you need. 
Then you have several entries, each one with the appropriate data type column filled in. So for a status, you'd have several entries with the \"CHAR_VALUE\" filled in.\n" ]
[ 7, 7, 5, 3, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "database" ]
stackoverflow_0000086992_database.txt
Q: Ways to avoid eager spool operations on SQL Server I have an ETL process that involves a stored procedure that makes heavy use of SELECT INTO statements (minimally logged and therefore faster as they generate less log traffic). Of the batch of work that takes place in one particular stored the stored procedure several of the most expensive operations are eager spools that appear to just buffer the query results and then copy them into the table just being made. The MSDN documentation on eager spools is quite sparse. Does anyone have a deeper insight into whether these are really necessary (and under what circumstances)? I have a few theories that may or may not make sense, but no success in eliminating these from the queries. The .sqlplan files are quite large (160kb) so I guess it's probably not reasonable to post them directly to a forum. So, here are some theories that may be amenable to specific answers: The query uses some UDFs for data transformation, such as parsing formatted dates. Does this data transformation necessitate the use of eager spools to allocate sensible types (e.g. varchar lengths) to the table before it constructs it? As an extension of the question above, does anyone have a deeper view of what does or does not drive this operation in a query? A: My understanding of spooling is that it's a bit of a red herring on your execution plan. Yes, it accounts for a lot of your query cost, but it's actually an optimization that SQL Server undertakes automatically so that it can avoid costly rescanning. If you were to avoid spooling, the cost of the execution tree it sits on will go up and almost certainly the cost of the whole query would increase. I don't have any particular insight into what in particular might cause the database's query optimizer to parse the execution that way, especially without seeing the SQL code, but you're probably better off trusting its behavior. However, that doesn't mean your execution plan can't be optimized, depending on exactly what you're up to and how volatile your source data is. When you're doing a SELECT INTO, you'll often see spooling items on your execution plan, and it can be related to read isolation. If it's appropriate for your particular situation, you might try just lowering the transaction isolation level to something less costly, and/or using the NOLOCK hint. I've found in complicated performance-critical queries that NOLOCK, if safe and appropriate for your data, can vastly increase the speed of query execution even when there doesn't seem to be any reason it should. In this situation, if you try READ UNCOMMITTED or the NOLOCK hint, you may be able to eliminate some of the Spools. (Obviously you don't want to do this if it's likely to land you in an inconsistent state, but everyone's data isolation requirements are different). The TOP operator and the OR operator can occasionally cause spooling, but I doubt you're doing any of those in an ETL process... You're right in saying that your UDFs could also be the culprit. If you're only using each UDF once, it would be an interesting experiment to try putting them inline to see if you get a large performance benefit. (And if you can't figure out a way to write them inline with the query, that's probably why they might be causing spooling). One last thing I would look at is that, if you're doing any joins that can be re-ordered, try using a hint to force the join order to happen in what you know to be the most selective order. 
That's a bit of a reach but it doesn't hurt to try it if you're already stuck optimizing.
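As a rough illustration of the isolation-level and inline-UDF suggestions above, something along these lines could be tried; the table and column names are invented for the example, and whether READ UNCOMMITTED or NOLOCK is safe depends entirely on the data.

-- Either lower the isolation level for the whole batch ...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- ... or hint individual source tables; both only if dirty reads are acceptable
SELECT s.OrderId,
       CONVERT(datetime, s.OrderDateText, 112) AS OrderDate   -- parsed inline instead of via a scalar UDF
INTO   dbo.StagedOrders
FROM   dbo.SourceOrders AS s WITH (NOLOCK);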
Ways to avoid eager spool operations on SQL Server
I have an ETL process that involves a stored procedure that makes heavy use of SELECT INTO statements (minimally logged and therefore faster as they generate less log traffic). Of the batch of work that takes place in one particular stored the stored procedure several of the most expensive operations are eager spools that appear to just buffer the query results and then copy them into the table just being made. The MSDN documentation on eager spools is quite sparse. Does anyone have a deeper insight into whether these are really necessary (and under what circumstances)? I have a few theories that may or may not make sense, but no success in eliminating these from the queries. The .sqlplan files are quite large (160kb) so I guess it's probably not reasonable to post them directly to a forum. So, here are some theories that may be amenable to specific answers: The query uses some UDFs for data transformation, such as parsing formatted dates. Does this data transformation necessitate the use of eager spools to allocate sensible types (e.g. varchar lengths) to the table before it constructs it? As an extension of the question above, does anyone have a deeper view of what does or does not drive this operation in a query?
[ "My understanding of spooling is that it's a bit of a red herring on your execution plan. Yes, it accounts for a lot of your query cost, but it's actually an optimization that SQL Server undertakes automatically so that it can avoid costly rescanning. If you were to avoid spooling, the cost of the execution tree it sits on will go up and almost certainly the cost of the whole query would increase. I don't have any particular insight into what in particular might cause the database's query optimizer to parse the execution that way, especially without seeing the SQL code, but you're probably better off trusting its behavior.\nHowever, that doesn't mean your execution plan can't be optimized, depending on exactly what you're up to and how volatile your source data is. When you're doing a SELECT INTO, you'll often see spooling items on your execution plan, and it can be related to read isolation. If it's appropriate for your particular situation, you might try just lowering the transaction isolation level to something less costly, and/or using the NOLOCK hint. I've found in complicated performance-critical queries that NOLOCK, if safe and appropriate for your data, can vastly increase the speed of query execution even when there doesn't seem to be any reason it should.\nIn this situation, if you try READ UNCOMMITTED or the NOLOCK hint, you may be able to eliminate some of the Spools. (Obviously you don't want to do this if it's likely to land you in an inconsistent state, but everyone's data isolation requirements are different). The TOP operator and the OR operator can occasionally cause spooling, but I doubt you're doing any of those in an ETL process...\nYou're right in saying that your UDFs could also be the culprit. If you're only using each UDF once, it would be an interesting experiment to try putting them inline to see if you get a large performance benefit. (And if you can't figure out a way to write them inline with the query, that's probably why they might be causing spooling).\nOne last thing I would look at is that, if you're doing any joins that can be re-ordered, try using a hint to force the join order to happen in what you know to be the most selective order. That's a bit of a reach but it doesn't hurt to try it if you're already stuck optimizing.\n" ]
[ 34 ]
[]
[]
[ "eager", "spool", "sql_server", "tsql" ]
stackoverflow_0000081278_eager_spool_sql_server_tsql.txt
Q: How do I make a Microsoft Word document “read only” within a SharePoint document library? How do I make open “read only” the only option within a SharePoint document library? When using either Word 2003 or 2007 and saving the document as a template or modifying the file properties as “read only” doesn’t prevent modification of the file in a SharePoint document library. Modifying the document library permissions to only allow “read only” access doesn’t work either. What works is to use a SharePoint URL link list to access the files within an external server directory, but that defeats the use of a SharePoint document library. A: I don't know if you can force read only to be the only option, but you can implement your own event handler to override the ItemUpdating event. Just cancel the update and any changes will be discarded. Sahil shows a very basic event handler that performs the cancel here. A: The event handler works, but I have found a simpler workaround. If you “Check Out” the file and leave it checked out, no one else has the option to edit the file. This is still not ideal, but for forcing a document to be “read only” it works.
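For reference, a minimal sketch of the kind of ItemUpdating receiver the first answer describes, assuming the WSS 3.0 / MOSS 2007 object model; the class name and error message are invented, and binding the receiver to the document library (via a feature or SPEventReceiverDefinition) is not shown.

using Microsoft.SharePoint;

public class ReadOnlyLibraryReceiver : SPItemEventReceiver
{
    // Cancels any update attempt, effectively making existing documents read only.
    public override void ItemUpdating(SPItemEventProperties properties)
    {
        properties.Cancel = true;
        properties.ErrorMessage = "Documents in this library are read only.";
    }
}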
How do I make a Microsoft Word document “read only” within a SharePoint document library?
How do I make open “read only” the only option within a SharePoint document library? When using either Word 2003 or 2007 and saving the document as a template or modifying the file properties as “read only” doesn’t prevent modification of the file in a SharePoint document library. Modifying the document library permissions to only allow “read only” access doesn’t work either. What works is to use a SharePoint URL link list to access the files within an external server directory, but that defeats the use of a SharePoint document library.
[ "I don't know if you can force read only to be the only option, but you can implement your own event handler to override the ItemUpdating event. Just cancel the update and any changes will be discarded.\nSahil shows a very basic event handler that performs the cancel here.\n", "The event handler works, but I have found a simpler workaround.\nIf you “Check Out” the file and leave it checked out, no one else has the option to edit the file. This is still not ideal, but for forcing a document to be “read only” it works.\n" ]
[ 2, 1 ]
[]
[]
[ "asp.net", "sharepoint", "visual_studio_2008" ]
stackoverflow_0000086033_asp.net_sharepoint_visual_studio_2008.txt
Q: Reading changes in a file in real-time using .NET I have a .csv file that is frequently updated (about 20 to 30 times per minute). I want to insert the newly added lines to a database as soon as they are written to the file. The FileSystemWatcher class listens to the file system change notifications and can raise an event whenever there is a change in a specified file. The problem is that the FileSystemWatcher cannot determine exactly which lines were added or removed (as far as I know). One way to read those lines is to save and compare the line count between changes and read the difference between the last and second last change. However, I am looking for a cleaner (perhaps more elegant) solution. A: I've written something very similar. I used the FileSystemWatcher to get notifications about changes. I then used a FileStream to read the data (keeping track of my last position within the file and seeking to that before reading the new data). Then I add the read data to a buffer which automatically extracts complete lines and then outputs then to the UI. Note: "this.MoreData(..) is an event, the listener of which adds to the aforementioned buffer, and handles the complete line extraction. Note: As has already been mentioned, this will only work if the modifications are always additions to the file. Any deletions will cause problems. Hope this helps. public void File_Changed( object source, FileSystemEventArgs e ) { lock ( this ) { if ( !this.bPaused ) { bool bMoreData = false; // Read from current seek position to end of file byte[] bytesRead = new byte[this.iMaxBytes]; FileStream fs = new FileStream( this.strFilename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite ); if ( 0 == this.iPreviousSeekPos ) { if ( this.bReadFromStart ) { if ( null != this.BeginReadStart ) { this.BeginReadStart( null, null ); } this.bReadingFromStart = true; } else { if ( fs.Length > this.iMaxBytes ) { this.iPreviousSeekPos = fs.Length - this.iMaxBytes; } } } this.iPreviousSeekPos = (int)fs.Seek( this.iPreviousSeekPos, SeekOrigin.Begin ); int iNumBytes = fs.Read( bytesRead, 0, this.iMaxBytes ); this.iPreviousSeekPos += iNumBytes; // If we haven't read all the data, then raise another event if ( this.iPreviousSeekPos < fs.Length ) { bMoreData = true; } fs.Close(); string strData = this.encoding.GetString( bytesRead ); this.MoreData( this, strData ); if ( bMoreData ) { File_Changed( null, null ); } else { if ( this.bReadingFromStart ) { this.bReadingFromStart = false; if ( null != this.EndReadStart ) { this.EndReadStart( null, null ); } } } } } A: Right, the FileSystemWatcher doesn't know anything about your file's contents. It'll tell you if it changed, etc. but not what changed. Are you only adding to the file? It was a little unclear from the post as to whether lines were added or could also be removed. Assuming they are appended, the solution is pretty straightforward, otherwise you'll be doing some comparisons. A: I think you should use NTFS Change Journal or similar: The change journal is used by NTFS to provide a persistent log of all changes made to files on the volume. For each volume, NTFS uses the change journal to track information about added, deleted, and modified files. The change journal is much more efficient than time stamps or file notifications for determining changes in a given namespace. You can find a description on TechNet. You will need to use PInvoke in .NET. 
A: I would keep the current text in memory if it is small enough and then use a diff algorithm to check if the new text and previous text changed. This library, http://www.mathertel.de/Diff/, not only will tell you that something changed but what changed as well. So you can then insert the changed data into the db. A: off the top of my head, you could store the last known file size. Check against the file size, and when it changes, open a reader. Then seek the reader to your last file size, and start reading from there. A: You're right about the FileSystemWatcher. You can listen for created, modified, deleted, etc. events but you don't get deeper than the file that raised them. Do you have control over the file itself? You could change the model slightly to use the file like a buffer. Instead of one file, have two. One is the staging, one is the sum of all processed output. Read all lines from your "buffer" file, process them, then insert them into the end of another file that is the total of all lines processed. Then, delete the lines you processed. This way, all info in your file is pending processing. The catch is that if the system is anything other than write (i.e. also deletes lines) then it won't work.
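A rough sketch of the "remember the last position and seek" idea from the answers above; it assumes the .csv is append-only and that a writer always finishes a line (including its newline) before the change event is handled.

using System.Collections.Generic;
using System.IO;

public class CsvTail
{
    private readonly string path;
    private long lastPosition;

    public CsvTail(string path)
    {
        this.path = path;
    }

    // Call this from the FileSystemWatcher.Changed handler; returns only the new lines.
    public List<string> ReadNewLines()
    {
        List<string> newLines = new List<string>();
        using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            fs.Seek(lastPosition, SeekOrigin.Begin);
            using (StreamReader reader = new StreamReader(fs))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    newLines.Add(line);       // caller inserts these into the database
                }
                lastPosition = fs.Position;   // everything up to here has been consumed
            }
        }
        return newLines;
    }
}

Note that the Changed event can fire several times for a single write, so the caller should tolerate calls that return no new lines.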
Reading changes in a file in real-time using .NET
I have a .csv file that is frequently updated (about 20 to 30 times per minute). I want to insert the newly added lines to a database as soon as they are written to the file. The FileSystemWatcher class listens to the file system change notifications and can raise an event whenever there is a change in a specified file. The problem is that the FileSystemWatcher cannot determine exactly which lines were added or removed (as far as I know). One way to read those lines is to save and compare the line count between changes and read the difference between the last and second last change. However, I am looking for a cleaner (perhaps more elegant) solution.
[ "I've written something very similar. I used the FileSystemWatcher to get notifications about changes. I then used a FileStream to read the data (keeping track of my last position within the file and seeking to that before reading the new data). Then I add the read data to a buffer which automatically extracts complete lines and then outputs then to the UI.\nNote: \"this.MoreData(..) is an event, the listener of which adds to the aforementioned buffer, and handles the complete line extraction.\nNote: As has already been mentioned, this will only work if the modifications are always additions to the file. Any deletions will cause problems.\nHope this helps.\n public void File_Changed( object source, FileSystemEventArgs e )\n {\n lock ( this )\n {\n if ( !this.bPaused )\n {\n bool bMoreData = false;\n\n // Read from current seek position to end of file\n byte[] bytesRead = new byte[this.iMaxBytes];\n FileStream fs = new FileStream( this.strFilename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite );\n\n if ( 0 == this.iPreviousSeekPos )\n {\n if ( this.bReadFromStart )\n {\n if ( null != this.BeginReadStart )\n {\n this.BeginReadStart( null, null );\n }\n this.bReadingFromStart = true;\n }\n else\n {\n if ( fs.Length > this.iMaxBytes )\n {\n this.iPreviousSeekPos = fs.Length - this.iMaxBytes;\n }\n }\n }\n\n this.iPreviousSeekPos = (int)fs.Seek( this.iPreviousSeekPos, SeekOrigin.Begin );\n int iNumBytes = fs.Read( bytesRead, 0, this.iMaxBytes );\n this.iPreviousSeekPos += iNumBytes;\n\n // If we haven't read all the data, then raise another event\n if ( this.iPreviousSeekPos < fs.Length )\n {\n bMoreData = true;\n }\n\n fs.Close();\n\n string strData = this.encoding.GetString( bytesRead );\n this.MoreData( this, strData );\n\n if ( bMoreData )\n {\n File_Changed( null, null );\n }\n else\n {\n if ( this.bReadingFromStart )\n {\n this.bReadingFromStart = false;\n if ( null != this.EndReadStart )\n {\n this.EndReadStart( null, null );\n }\n }\n }\n }\n }\n\n", "Right, the FileSystemWatcher doesn't know anything about your file's contents. It'll tell you if it changed, etc. but not what changed.\nAre you only adding to the file? It was a little unclear from the post as to whether lines were added or could also be removed. Assuming they are appended, the solution is pretty straightforward, otherwise you'll be doing some comparisons.\n", "I think you should use NTFS Change Journal or similar:\n\nThe change journal is used by NTFS to\n provide a persistent log of all\n changes made to files on the volume.\n For each volume, NTFS uses the change\n journal to track information about\n added, deleted, and modified files.\n The change journal is much more\n efficient than time stamps or file\n notifications for determining changes\n in a given namespace.\n\nYou can find a description on TechNet. You will need to use PInvoke in .NET.\n", "I would keep the current text in memory if it is small enough and then use a diff algorithm to check if the new text and previous text changed. This library, http://www.mathertel.de/Diff/, not only will tell you that something changed but what changed as well. So you can then insert the changed data into the db.\n", "off the top of my head, you could store the last known file size. Check against the file size, and when it changes, open a reader.\nThen seek the reader to your last file size, and start reading from there.\n", "You're right about the FileSystemWatcher. You can listen for created, modified, deleted, etc. 
events but you don't get deeper than the file that raised them.\nDo you have control over the file itself? You could change the model slightly to use the file like a buffer. Instead of one file, have two. One is the staging, one is the sum of all processed output. Read all lines from your \"buffer\" file, process them, then insert them into the end of another file that is the total of all lines processed. Then, delete the lines you processed. This way, all info in your file is pending processing. The catch is that if the system is anything other than write (i.e. also deletes lines) then it won't work.\n" ]
[ 3, 2, 2, 1, 0, 0 ]
[]
[]
[ ".net", "file", "filesystemwatcher" ]
stackoverflow_0000086534_.net_file_filesystemwatcher.txt
Q: ASP.NET UrlRewriting and Constructing Page Links So this post talked about how to actually implement url rewriting in an ASP.NET application to get "friendly urls". That works perfect and it is great for sending a user to a specific page, but does anyone know of a good solution for creating "Friendly" URLs inside your code when using one of the tools referenced? For example listing a link inside of an asp.net control as ~/mypage.aspx?product=12 when a rewrite rule exists would be an issue as then you are linking to content in two different ways. I'm familiar with using DotNetNuke and FriendlyUrl's where there is a "NavigateUrl" method that will get the friendly Url code from the re-writer but I'm not finding examples of how to do this with UrlRewriting.net or the other solutions out there. Ideally I'd like to be able to get something like this. string friendlyUrl = GetFriendlyUrl("~/MyUnfriendlyPage.aspx?myid=13"); EDIT: I am looking for a generic solution, not something that I have to implement for every page in my site, but potentially something that can match against the rules in the opposite direction. A: See System.Web.Routing Routing is a different from rewriting. Implementing this technique does require minor changes to your pages (namely, any code accessing querystring parameters will need to be modified), but it allows you to generate links based on the routes you define. It's used by ASP.NET MVC, but can be employed in any ASP.NET application. Routing is part of .Net 3.5 SP1 A: Create a UrlBuilder class with methods for each page like so: public class UrlBuilder { public static string BuildProductUrl(int id) { if (true) // replace with logic to determine if URL rewriting is enabled { return string.Format("~/Product/{0}", id); } else { return string.Format("~/product.aspx?id={0}", id); } } }
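A possible way to consume that UrlBuilder helper from code-behind, so every internal link is built in one place and cannot drift from the rewrite rules; the page class, HyperLink control and product id are invented for the example.

using System.Web.UI;
using System.Web.UI.WebControls;

public partial class ProductList : Page
{
    protected void BindRow(HyperLink link, int productId)
    {
        // ResolveUrl turns the app-relative "~/..." path into a usable URL
        link.NavigateUrl = ResolveUrl(UrlBuilder.BuildProductUrl(productId));
    }
}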
ASP.NET UrlRewriting and Constructing Page Links
So this post talked about how to actually implement url rewriting in an ASP.NET application to get "friendly urls". That works perfect and it is great for sending a user to a specific page, but does anyone know of a good solution for creating "Friendly" URLs inside your code when using one of the tools referenced? For example listing a link inside of an asp.net control as ~/mypage.aspx?product=12 when a rewrite rule exists would be an issue as then you are linking to content in two different ways. I'm familiar with using DotNetNuke and FriendlyUrl's where there is a "NavigateUrl" method that will get the friendly Url code from the re-writer but I'm not finding examples of how to do this with UrlRewriting.net or the other solutions out there. Ideally I'd like to be able to get something like this. string friendlyUrl = GetFriendlyUrl("~/MyUnfriendlyPage.aspx?myid=13"); EDIT: I am looking for a generic solution, not something that I have to implement for every page in my site, but potentially something that can match against the rules in the opposite direction.
[ "See System.Web.Routing\nRouting is a different from rewriting. Implementing this technique does require minor changes to your pages (namely, any code accessing querystring parameters will need to be modified), but it allows you to generate links based on the routes you define. It's used by ASP.NET MVC, but can be employed in any ASP.NET application.\nRouting is part of .Net 3.5 SP1\n", "Create a UrlBuilder class with methods for each page like so:\npublic class UrlBuilder\n{\n public static string BuildProductUrl(int id)\n {\n if (true) // replace with logic to determine if URL rewriting is enabled\n {\n return string.Format(\"~/Product/{0}\", id);\n }\n else\n {\n return string.Format(\"~/product.aspx?id={0}\", id);\n }\n }\n}\n\n" ]
[ 3, 0 ]
[]
[]
[ ".net", "friendly_url", "url_rewriting" ]
stackoverflow_0000087221_.net_friendly_url_url_rewriting.txt
Q: How do I get a warning before killing a temporary buffer in Emacs? More than once I've lost work by accidentally killing a temporary buffer in Emacs. Can I set up Emacs to give me a warning when I kill a buffer not associated with a file? A: Make a function that will ask you whether you're sure when the buffer has been edited and is not associated with a file. Then add that function to the list kill-buffer-query-functions. Looking at the documentation for Buffer File Name you understand: a buffer is not visiting a file if and only if the variable buffer-file-name is nil Use that insight to write the function: (defun maybe-kill-buffer () (if (and (not buffer-file-name) (buffer-modified-p)) ;; buffer is not visiting a file (y-or-n-p "This buffer is not visiting a file but has been edited. Kill it anyway? ") t)) And then add the function to the hook like so: (add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer) A: (defun maybe-kill-buffer () (if (and (not buffer-file-name) (buffer-modified-p)) ;; buffer is not visiting a file (y-or-n-p (format "Buffer %s has been edited. Kill it anyway? " (buffer-name))) t)) (add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer)
How do I get a warning before killing a temporary buffer in Emacs?
More than once I've lost work by accidentally killing a temporary buffer in Emacs. Can I set up Emacs to give me a warning when I kill a buffer not associated with a file?
[ "Make a function that will ask you whether you're sure when the buffer has been edited and is not associated with a file. Then add that function to the list kill-buffer-query-functions.\nLooking at the documentation for Buffer File Name you understand:\n\na buffer is not visiting a file if and only if the variable buffer-file-name is nil\n\nUse that insight to write the function:\n(defun maybe-kill-buffer ()\n (if (and (not buffer-file-name)\n (buffer-modified-p))\n ;; buffer is not visiting a file\n (y-or-n-p \"This buffer is not visiting a file but has been edited. Kill it anyway? \")\n t))\n\nAnd then add the function to the hook like so:\n(add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer)\n\n", "(defun maybe-kill-buffer ()\n (if (and (not buffer-file-name)\n (buffer-modified-p))\n ;; buffer is not visiting a file\n (y-or-n-p (format \"Buffer %s has been edited. Kill it anyway? \"\n (buffer-name)))\n t))\n\n(add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer)\n\n" ]
[ 11, 2 ]
[]
[]
[ "emacs" ]
stackoverflow_0000086963_emacs.txt
Q: How can I efficiently build different versions of a component with one Makefile I hope I haven't painted myself into a corner. I've gotten what seems to be most of the way through implementing a Makefile and I can't get the last bit to work. I hope someone here can suggest a technique to do what I'm trying to do. I have what I'll call "bills of materials" in version controlled files in a source repository and I build something like: make VER=x I want my Makefile to use $(VER) as a tag to retrieve a BOM from the repository, generate a dependency file to include in the Makefile, rescan including that dependency, and then build the product. More generally, my Makefile may have several targets -- A, B, C, etc. -- and I can build different versions of each so I might do: make A VER=x make B VER=y make C VER=z and the dependency file includes information about all three targets. However, creating the dependency file is somewhat expensive so if I do: make A VER=x ...make source (not BOM) changes... make A VER=x I'd really like the Makefile to not regenerate the dependency. And just to make things as complicated as possible, I might do: make A VER=x .. change version x of A's BOM and check it in make A VER=x so I need to regenerate the dependency on the second build. The check out messes up the timestamps used to regenerate the dependencies so I think I need a way for the dependency file to depend not on the BOM but on some indication that the BOM changed. What I've come to is making the BOM checkout happen in a .PHONY target (so it always gets checked out) and keeping track of the contents of the last checkout in a ".sig" file (if the signature file is missing or the contents are different than the signature of the new file, then the BOM changed), and having the dependency generation depend on the signatures). At the top of my Makefile, I have some setup: BOMS = $(addsuffix .bom,$(MAKECMDGOALS) SIGS = $(subst .bom,.sig,$(BOMS)) DEP = include.d -include $(DEP) And it seems I always need to do: .PHONY: $(BOMS) $(BOMS): ...checkout TAG=$(VER) $@ But, as noted above, if i do just that, and continue: $(DEP) : $(BOMS) ... recreate dependency Then the dependency gets updated every time I invoke make. So I try: $(DEP) : $(SIGS) ... recreate dependency and $(BOMS): ...checkout TAG=$(VER) $@ ...if $(subst .bom,.sig,$@) doesn't exist ... create signature file ...else ... if new signature is different from file contents ... update signature file ... endif ...endif But the dependency generation doesn't get tripped when the signature changes. I think it's because because $(SIGS) isn't a target, so make doesn't notice when the $(BOMS) rule updates a signature. I tried creating a .sig:.bom rule and managing the timestamps of the checked out BOM with touch but that didn't work. Someone suggested something like: $(DEP) : $(SIGS) ... recreate dependency $(BOMS) : $(SIGS) ...checkout TAG=$(VER) $@ $(SIGS) : ...if $(subst .bom,.sig,$(BOMS)) doesn't exist ... create it ...else ... if new signature is different from file contents ... update signature file ... endif ...endif but how can the BOM depend on the SIG when the SIG is created from the BOM? As I read that it says, "create the SIG from the BOM and if the SIG is newer than the BOM then checkout the BOM". How do I bootstrap that process? Where does the first BOM come from? A: Make is very bad at being able to detect actual file changes, as opposed to just updated timestamps. 
It sounds to me that the root of the problem is that the bom-checkout always modifies the timestamp of the bom, causing the dependencies to be regenerated. I would probably try to solve this problem instead -- try to checkout the bom without messing up the timestamp. A wrapper script around the checkout tool might do the trick; first checkout the bom to a temporary file, compare it to the already checked out version, and replace it only if the new one is different. If you're not strictly bound to using make, there are other tools which are much better at detecting actual file changes (SCons, for example). A: I'm not a make expert, but I would try have $(BOMS) depend on $(SIGS), and making the $(SIGS) target execute the if/else rules that you currently have under the $(BOMS) target. $(DEP) : $(SIGS) ... recreate dependency $(BOMS) : $(SIGS) ...checkout TAG=$(VER) $@ $(SIGS) : ...if $(subst .bom,.sig,$(BOMS)) doesn't exist ... create it ...else ... if new signature is different from file contents ... update signature file ... endif ...endif EDIT: You're right, of course, you can't have $(BOM) depend on $(SIGS). But in order to have $(DEP) recreate, you need to have $(SIG) as a target. Maybe have an intermediate target that depends on both $(BOM) and $(SIG). $(DEP) : $(SIGS) ... recreate dependency $(NEWTARGET) : $(BOMS) $(SIGS) $(BOMS) : ...checkout TAG=$(VER) $@ $(SIGS) : ...if $(subst .bom,.sig,$(BOMS)) doesn't exist ... create it ...else ... if new signature is different from file contents ... update signature file ... endif ...endif $(SIGS) might also need to depend on $(BOMS), I would play with that and see.
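A sketch of the wrapper-script idea from the first answer: fetch the BOM somewhere temporary and only overwrite the real file when the contents differ, so its timestamp (and therefore make's dependency check) only moves on a real change. Here "checkout" stands for whatever retrieval command the Makefile already uses.

#!/bin/sh
set -e
ver="$1"
bom="$2"
tmpdir=$(mktemp -d)

( cd "$tmpdir" && checkout TAG="$ver" "$bom" )    # placeholder retrieval step

if ! cmp -s "$tmpdir/$bom" "$bom"; then
    cp "$tmpdir/$bom" "$bom"     # contents changed: timestamp moves, dependencies rebuild
fi
rm -rf "$tmpdir"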
How can I efficiently build different versions of a component with one Makefile
I hope I haven't painted myself into a corner. I've gotten what seems to be most of the way through implementing a Makefile and I can't get the last bit to work. I hope someone here can suggest a technique to do what I'm trying to do. I have what I'll call "bills of materials" in version controlled files in a source repository and I build something like: make VER=x I want my Makefile to use $(VER) as a tag to retrieve a BOM from the repository, generate a dependency file to include in the Makefile, rescan including that dependency, and then build the product. More generally, my Makefile may have several targets -- A, B, C, etc. -- and I can build different versions of each so I might do: make A VER=x make B VER=y make C VER=z and the dependency file includes information about all three targets. However, creating the dependency file is somewhat expensive so if I do: make A VER=x ...make source (not BOM) changes... make A VER=x I'd really like the Makefile to not regenerate the dependency. And just to make things as complicated as possible, I might do: make A VER=x .. change version x of A's BOM and check it in make A VER=x so I need to regenerate the dependency on the second build. The check out messes up the timestamps used to regenerate the dependencies so I think I need a way for the dependency file to depend not on the BOM but on some indication that the BOM changed. What I've come to is making the BOM checkout happen in a .PHONY target (so it always gets checked out) and keeping track of the contents of the last checkout in a ".sig" file (if the signature file is missing or the contents are different than the signature of the new file, then the BOM changed), and having the dependency generation depend on the signatures). At the top of my Makefile, I have some setup: BOMS = $(addsuffix .bom,$(MAKECMDGOALS) SIGS = $(subst .bom,.sig,$(BOMS)) DEP = include.d -include $(DEP) And it seems I always need to do: .PHONY: $(BOMS) $(BOMS): ...checkout TAG=$(VER) $@ But, as noted above, if i do just that, and continue: $(DEP) : $(BOMS) ... recreate dependency Then the dependency gets updated every time I invoke make. So I try: $(DEP) : $(SIGS) ... recreate dependency and $(BOMS): ...checkout TAG=$(VER) $@ ...if $(subst .bom,.sig,$@) doesn't exist ... create signature file ...else ... if new signature is different from file contents ... update signature file ... endif ...endif But the dependency generation doesn't get tripped when the signature changes. I think it's because because $(SIGS) isn't a target, so make doesn't notice when the $(BOMS) rule updates a signature. I tried creating a .sig:.bom rule and managing the timestamps of the checked out BOM with touch but that didn't work. Someone suggested something like: $(DEP) : $(SIGS) ... recreate dependency $(BOMS) : $(SIGS) ...checkout TAG=$(VER) $@ $(SIGS) : ...if $(subst .bom,.sig,$(BOMS)) doesn't exist ... create it ...else ... if new signature is different from file contents ... update signature file ... endif ...endif but how can the BOM depend on the SIG when the SIG is created from the BOM? As I read that it says, "create the SIG from the BOM and if the SIG is newer than the BOM then checkout the BOM". How do I bootstrap that process? Where does the first BOM come from?
[ "Make is very bad at being able to detect actual file changes, as opposed to just updated timestamps. \nIt sounds to me that the root of the problem is that the bom-checkout always modifies the timestamp of the bom, causing the dependencies to be regenerated. I would probably try to solve this problem instead -- try to checkout the bom without messing up the timestamp. A wrapper script around the checkout tool might do the trick; first checkout the bom to a temporary file, compare it to the already checked out version, and replace it only if the new one is different.\nIf you're not strictly bound to using make, there are other tools which are much better at detecting actual file changes (SCons, for example).\n", "I'm not a make expert, but I would try have $(BOMS) depend on $(SIGS), and making the $(SIGS) target execute the if/else rules that you currently have under the $(BOMS) target.\n$(DEP) : $(SIGS)\n ... recreate dependency\n$(BOMS) : $(SIGS)\n ...checkout TAG=$(VER) $@\n$(SIGS) :\n ...if $(subst .bom,.sig,$(BOMS)) doesn't exist\n ... create it\n ...else\n ... if new signature is different from file contents\n ... update signature file\n ... endif\n ...endif\n\nEDIT: You're right, of course, you can't have $(BOM) depend on $(SIGS). But in order to have $(DEP) recreate, you need to have $(SIG) as a target. Maybe have an intermediate target that depends on both $(BOM) and $(SIG).\n$(DEP) : $(SIGS)\n ... recreate dependency\n$(NEWTARGET) : $(BOMS) $(SIGS)\n$(BOMS) : \n ...checkout TAG=$(VER) $@\n$(SIGS) :\n ...if $(subst .bom,.sig,$(BOMS)) doesn't exist\n ... create it\n ...else\n ... if new signature is different from file contents\n ... update signature file\n ... endif\n ...endif\n\n$(SIGS) might also need to depend on $(BOMS), I would play with that and see.\n" ]
[ 1, 0 ]
[]
[]
[ "dependencies", "makefile" ]
stackoverflow_0000071534_dependencies_makefile.txt
Q: In Delphi, how can you have currency data types shown in different currencies in different forms? I need to write a Delphi application that pulls entries up from various tables in a database, and different entries will be in different currencies. Thus, I need to show a different number of decimal places and a different currency character for every Currency data type ($, Pounds, Euros, etc) depending on the currency of the item I've loaded. Is there a way to change the currency almost-globally, that is, for all Currency data shown in a form? A: Even with the same currency, you may have to display values with a different format (separators for instance), so I would recommend that you associate a LOCALE instead of the currency only with your values. You can use a simple Integer to hold the LCID (locale ID). See the list here: http://msdn.microsoft.com/en-us/library/0h88fahh.aspx Then to display the values, use something like: function CurrFormatFromLCID(const AValue: Currency; const LCID: Integer = LOCALE_SYSTEM_DEFAULT): string; var AFormatSettings: TFormatSettings; begin GetLocaleFormatSettings(LCID, AFormatSettings); Result := CurrToStrF(AValue, ffCurrency, AFormatSettings.CurrencyDecimals, AFormatSettings); end; function USCurrFormat(const AValue: Currency): string; begin Result := CurrFormatFromLCID(AValue, 1033); //1033 = US_LCID end; function FrenchCurrFormat(const AValue: Currency): string; begin Result := CurrFormatFromLCID(AValue, 1036); //1036 = French_LCID end; procedure TestIt; var val: Currency; begin val:=1234.56; ShowMessage('US: ' + USCurrFormat(val)); ShowMessage('FR: ' + FrenchCurrFormat(val)); ShowMessage('GB: ' + CurrFormatFromLCID(val, 2057)); // 2057 = GB_LCID ShowMessage('def: ' + CurrFormatFromLCID(val)); end; A: I'd use SysUtils.CurrToStr(Value: Currency; var FormatSettings: TFormatSettings): string; I'd setup an array of TFormatSettings, each position configured to reflect each currency your application supports. You'll need to set the following fields of the TFormat Settings for each array position: CurrencyString, CurrencyFormat, NegCurrFormat, ThousandSeparator, DecimalSeparator and CurrencyDecimals.
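A rough sketch of the "array of TFormatSettings" suggestion; the two example currencies and their settings are invented for illustration, and in practice each row's currency would map to an index (or LCID) stored alongside the data.

var
  CurrencyFormats: array[0..1] of TFormatSettings;

procedure InitCurrencyFormats;
begin
  GetLocaleFormatSettings(LOCALE_SYSTEM_DEFAULT, CurrencyFormats[0]);
  CurrencyFormats[0].CurrencyString := '$';
  CurrencyFormats[0].CurrencyDecimals := 2;

  GetLocaleFormatSettings(LOCALE_SYSTEM_DEFAULT, CurrencyFormats[1]);
  CurrencyFormats[1].CurrencyString := '£';
  CurrencyFormats[1].CurrencyDecimals := 2;
end;

function FormatAmount(const AValue: Currency; CurrencyIndex: Integer): string;
begin
  Result := CurrToStr(AValue, CurrencyFormats[CurrencyIndex]);
end;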
In Delphi, how can you have currency data types shown in different currencies in different forms?
I need to write a Delphi application that pulls entries up from various tables in a database, and different entries will be in different currencies. Thus, I need to show a different number of decimal places and a different currency character for every Currency data type ($, Pounds, Euros, etc) depending on the currency of the item I've loaded. Is there a way to change the currency almost-globally, that is, for all Currency data shown in a form?
[ "Even with the same currency, you may have to display values with a different format (separators for instance), so I would recommend that you associate a LOCALE instead of the currency only with your values.\nYou can use a simple Integer to hold the LCID (locale ID).\nSee the list here: http://msdn.microsoft.com/en-us/library/0h88fahh.aspx \nThen to display the values, use something like:\nfunction CurrFormatFromLCID(const AValue: Currency; const LCID: Integer = LOCALE_SYSTEM_DEFAULT): string;\nvar\n AFormatSettings: TFormatSettings;\nbegin\n GetLocaleFormatSettings(LCID, AFormatSettings);\n Result := CurrToStrF(AValue, ffCurrency, AFormatSettings.CurrencyDecimals, AFormatSettings);\nend;\n\nfunction USCurrFormat(const AValue: Currency): string;\nbegin\n Result := CurrFormatFromLCID(AValue, 1033); //1033 = US_LCID\nend;\n\nfunction FrenchCurrFormat(const AValue: Currency): string;\nbegin\n Result := CurrFormatFromLCID(AValue, 1036); //1036 = French_LCID\nend;\n\nprocedure TestIt;\nvar\n val: Currency;\nbegin\n val:=1234.56;\n ShowMessage('US: ' + USCurrFormat(val));\n ShowMessage('FR: ' + FrenchCurrFormat(val));\n ShowMessage('GB: ' + CurrFormatFromLCID(val, 2057)); // 2057 = GB_LCID\n ShowMessage('def: ' + CurrFormatFromLCID(val));\nend;\n\n", "I'd use SysUtils.CurrToStr(Value: Currency; var FormatSettings: TFormatSettings): string;\nI'd setup an array of TFormatSettings, each position configured to reflect each currency your application supports. You'll need to set the following fields of the TFormat Settings for each array position: CurrencyString, CurrencyFormat, NegCurrFormat, ThousandSeparator, DecimalSeparator and CurrencyDecimals.\n" ]
[ 7, 5 ]
[]
[]
[ "delphi", "finance" ]
stackoverflow_0000086002_delphi_finance.txt
Q: Different values of GetHashCode for inproc and stateserver session variables I've recently inherited an application that makes very heavy use of session, including storing a lot of custom data objects in session. One of my first points of business with this application was to at least move the session data away from InProc, and off load it to either a stateserver or SQL Server. After I made all of the appropriate data objects serializable, and changed the web.config to use a state service, everything appeared to work fine. However, I found that this application does a lot of object comparisons using GetHashCode(). Methods that worked fine when the session was InProc no longer work because the HashCodes no longer match when they are supposed to. This appears to be the case when trying to find a specific child object from a parent when you know the child object's original hash code If I simply change the web.config back to using inproc, it works again. Anyone have any thoughts on where to begin with this? EDIT: qbeuek: thanks for the quick reply. In regards to: The default implementation of GetHashCode in Object class return a hash value based on objects address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode. I should have given more information on how they are using this. Basically, they have one parent data object, and there are several arrays of child objects. They happen to know the hash code for a particular object they need, so they are looping through a specific array of child objects looking for a hash code that matches. Once a match is found, they then use that object for other work. A: When you write does a lot of object comparisons using GetHashCode() i sense there is something horribly wrong with this code. The GetHashCode method does not guarantee, that the returned hash values should be in any way unique given two different objects. As far as GetHashCode is concerned, it can return 0 for all objects and still be considered correct. When two object are the same (the Equals method returns true), they MUST have the same value returned from GetHashCode. When two objects have the same hash value, they can be the same object (Equals returns true) or be different objects (Equals returns false). There are no other guarantees on the result of GetHashCode. The default implementation of GetHashCode in Object class return a hash value based on objects address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode. A: Override the GetHashCode method in classes that get called this method and calculate the hash code based on unique object properties (like ID or all object fields). A: Solution 1: Create a unique ID for all the child objects and use that instead of hash code. Solution 2: Replace if (a.GetHashCode() == b.GetHashCode()) with if (a.Equals(b)).
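A minimal sketch of the override the first answer calls for, assuming each child object carries a unique Id (the class and property names are invented); with this in place the lookups can use Equals or the Id itself instead of raw hash codes, which are not stable across serialization boundaries.

using System;

[Serializable]
public class ChildItem
{
    public int Id { get; set; }

    public override bool Equals(object obj)
    {
        ChildItem other = obj as ChildItem;
        return other != null && other.Id == Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}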
Different values of GetHashCode for inproc and stateserver session variables
I've recently inherited an application that makes very heavy use of session, including storing a lot of custom data objects in session. One of my first points of business with this application was to at least move the session data away from InProc, and off load it to either a stateserver or SQL Server. After I made all of the appropriate data objects serializable, and changed the web.config to use a state service, everything appeared to work fine. However, I found that this application does a lot of object comparisons using GetHashCode(). Methods that worked fine when the session was InProc no longer work because the HashCodes no longer match when they are supposed to. This appears to be the case when trying to find a specific child object from a parent when you know the child object's original hash code If I simply change the web.config back to using inproc, it works again. Anyone have any thoughts on where to begin with this? EDIT: qbeuek: thanks for the quick reply. In regards to: The default implementation of GetHashCode in Object class return a hash value based on objects address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode. I should have given more information on how they are using this. Basically, they have one parent data object, and there are several arrays of child objects. They happen to know the hash code for a particular object they need, so they are looping through a specific array of child objects looking for a hash code that matches. Once a match is found, they then use that object for other work.
[ "When you write\n\ndoes a lot of object comparisons using GetHashCode()\n\ni sense there is something horribly wrong with this code. The GetHashCode method does not guarantee, that the returned hash values should be in any way unique given two different objects. As far as GetHashCode is concerned, it can return 0 for all objects and still be considered correct.\nWhen two object are the same (the Equals method returns true), they MUST have the same value returned from GetHashCode. When two objects have the same hash value, they can be the same object (Equals returns true) or be different objects (Equals returns false).\nThere are no other guarantees on the result of GetHashCode.\nThe default implementation of GetHashCode in Object class return a hash value based on objects address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode.\n", "Override the GetHashCode method in classes that get called this method and calculate the hash code based on unique object properties (like ID or all object fields).\n", "Solution 1: Create a unique ID for all the child objects and use that instead of hash code.\nSolution 2: Replace if (a.GetHashCode() == b.GetHashCode()) with if (a.Equals(b)).\n" ]
[ 2, 1, 1 ]
[]
[]
[ "asp.net", "c#", "session" ]
stackoverflow_0000086660_asp.net_c#_session.txt
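A short C# sketch to illustrate the advice in the entry above. The class and field names (ChildItem, Id, Name) are made up for illustration; the point is that Equals and GetHashCode are overridden together and built from the object's own data, so lookups keep working after the objects round-trip through a state server, and that the lookup compares values rather than remembered hash codes.

[Serializable]
public class ChildItem
{
    public int Id;
    public string Name;

    public override bool Equals(object obj)
    {
        ChildItem other = obj as ChildItem;
        if (other == null) return false;
        // Value-based equality survives serialization into the state server.
        return Id == other.Id && Name == other.Name;
    }

    public override int GetHashCode()
    {
        // Built from the same fields used in Equals, as required.
        return Id ^ (Name == null ? 0 : Name.GetHashCode());
    }
}

// Instead of scanning an array for a remembered hash code, scan for value equality:
// ChildItem match = Array.Find(children, delegate(ChildItem c) { return c.Equals(wanted); });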
Q: Javascript array reference If I have the following: {"hdrs": ["Make","Model","Year"], "data" : [ {"Make":"Honda","Model":"Accord","Year":"2008"} {"Make":"Toyota","Model":"Corolla","Year":"2008"} {"Make":"Honda","Model":"Pilot","Year":"2008"}] } And I have a "hdrs" name (i.e. "Make"), how can I reference the data array instances? seems like data["Make"][0] should work...but unable to get the right reference EDIT Sorry for the ambiguity.. I can loop through hdrs to get each hdr name, but I need to use each instance value of hdrs to find all the data elements in data (not sure that is any better of an explanation). and I will have it in a variable t since it is JSON (appreciate the re-tagging) I would like to be able to reference with something like this: t.data[hdrs[i]][j] A: I had to alter your code a little: var x = {"hdrs": ["Make","Model","Year"], "data" : [ {"Make":"Honda","Model":"Accord","Year":"2008"}, {"Make":"Toyota","Model":"Corolla","Year":"2008"}, {"Make":"Honda","Model":"Pilot","Year":"2008"}] }; alert( x.data[0].Make ); EDIT: in response to your edit var x = {"hdrs": ["Make","Model","Year"], "data" : [ {"Make":"Honda","Model":"Accord","Year":"2008"}, {"Make":"Toyota","Model":"Corolla","Year":"2008"}, {"Make":"Honda","Model":"Pilot","Year":"2008"}] }; var Header = 0; // Make for( var i = 0; i <= x.data.length - 1; i++ ) { alert( x.data[i][x.hdrs[Header]] ); } A: First, you forgot your trailing commas in your data array items. Try the following: var obj_hash = { "hdrs": ["Make", "Model", "Year"], "data": [ {"Make": "Honda", "Model": "Accord", "Year": "2008"}, {"Make": "Toyota", "Model": "Corolla", "Year": "2008"}, {"Make": "Honda", "Model": "Pilot", "Year": "2008"}, ] }; var ref_data = obj_hash.data; alert(ref_data[0].Make); @Kent Fredric: note that the last comma is not strictly needed, but allows you to more easily move lines around (i.e., if you move or add after the last line, and it didn't have a comma, you'd have to specifically remember to add one). I think it's best to always have trailing commas. A: So, like this? var theMap = /* the stuff you posted */; var someHdr = "Make"; var whichIndex = 0; var correspondingData = theMap["data"][whichIndex][someHdr]; That should work, if I'm understanding you correctly... A: var x = {"hdrs": ["Make","Model","Year"], "data" : [ {"Make":"Honda","Model":"Accord","Year":"2008"} {"Make":"Toyota","Model":"Corolla","Year":"2008"} {"Make":"Honda","Model":"Pilot","Year":"2008"}] }; x.data[0].Make == "Honda" x['data'][0]['Make'] == "Honda" You have your array/hash lookup backwards :) A: I'm not sure I understand your question, but... Assuming the above JSON is the var obj, you want: obj.data[0]["Make"] // == "Honda" If you just want to refer to the field referenced by the first header, it would be something like: obj.data[0][obj.hdrs[0]] // == "Honda" A: perhaps try data[0].Make A: Close, you'd use var x = data[0].Make; var z = data[0].Model; var y = data[0].Year; A: Your code as displayed is not syntactically correct; it needs some commas. 
I got this to work: $foo = {"hdrs": ["Make","Model","Year"], "data" : [ {"Make":"Honda","Model":"Accord","Year":"2008"}, {"Make":"Toyota","Model":"Corolla","Year":"2008"}, {"Make":"Honda","Model":"Pilot","Year":"2008"}] }; and then I can access data as: $foo["data"][0]["make"] A: With the help of the answers (and after getting the inside and outside loops correct) I got this to work: var t = eval( "(" + request + ")" ) ; for (var i = 0; i < t.data.length; i++) { myTable += "<tr>"; for (var j = 0; j < t.hdrs.length; j++) { myTable += "<td>" ; if (t.data[i][t.hdrs[j]] == "") {myTable += "&nbsp;" ; } else { myTable += t.data[i][t.hdrs[j]] ; } myTable += "</td>"; } myTable += "</tr>"; }
Javascript array reference
If I have the following: {"hdrs": ["Make","Model","Year"], "data" : [ {"Make":"Honda","Model":"Accord","Year":"2008"} {"Make":"Toyota","Model":"Corolla","Year":"2008"} {"Make":"Honda","Model":"Pilot","Year":"2008"}] } And I have a "hdrs" name (i.e. "Make"), how can I reference the data array instances? seems like data["Make"][0] should work...but unable to get the right reference EDIT Sorry for the ambiguity.. I can loop through hdrs to get each hdr name, but I need to use each instance value of hdrs to find all the data elements in data (not sure that is any better of an explanation). and I will have it in a variable t since it is JSON (appreciate the re-tagging) I would like to be able to reference with something like this: t.data[hdrs[i]][j]
[ "I had to alter your code a little:\nvar x = {\"hdrs\": [\"Make\",\"Model\",\"Year\"],\n \"data\" : [ \n {\"Make\":\"Honda\",\"Model\":\"Accord\",\"Year\":\"2008\"},\n {\"Make\":\"Toyota\",\"Model\":\"Corolla\",\"Year\":\"2008\"},\n {\"Make\":\"Honda\",\"Model\":\"Pilot\",\"Year\":\"2008\"}]\n };\n\n alert( x.data[0].Make );\n\nEDIT: in response to your edit\nvar x = {\"hdrs\": [\"Make\",\"Model\",\"Year\"],\n \"data\" : [ \n {\"Make\":\"Honda\",\"Model\":\"Accord\",\"Year\":\"2008\"},\n {\"Make\":\"Toyota\",\"Model\":\"Corolla\",\"Year\":\"2008\"},\n {\"Make\":\"Honda\",\"Model\":\"Pilot\",\"Year\":\"2008\"}]\n };\nvar Header = 0; // Make\nfor( var i = 0; i <= x.data.length - 1; i++ )\n{\n alert( x.data[i][x.hdrs[Header]] );\n} \n\n", "First, you forgot your trailing commas in your data array items.\nTry the following:\nvar obj_hash = {\n \"hdrs\": [\"Make\", \"Model\", \"Year\"],\n \"data\": [\n {\"Make\": \"Honda\", \"Model\": \"Accord\", \"Year\": \"2008\"},\n {\"Make\": \"Toyota\", \"Model\": \"Corolla\", \"Year\": \"2008\"},\n {\"Make\": \"Honda\", \"Model\": \"Pilot\", \"Year\": \"2008\"},\n ]\n};\n\nvar ref_data = obj_hash.data;\n\nalert(ref_data[0].Make);\n@Kent Fredric: note that the last comma is not strictly needed, but allows you to more easily move lines around (i.e., if you move or add after the last line, and it didn't have a comma, you'd have to specifically remember to add one). I think it's best to always have trailing commas.\n", "So, like this?\nvar theMap = /* the stuff you posted */;\nvar someHdr = \"Make\";\nvar whichIndex = 0;\nvar correspondingData = theMap[\"data\"][whichIndex][someHdr];\n\nThat should work, if I'm understanding you correctly...\n", "var x = {\"hdrs\": [\"Make\",\"Model\",\"Year\"],\n \"data\" : [ \n {\"Make\":\"Honda\",\"Model\":\"Accord\",\"Year\":\"2008\"}\n {\"Make\":\"Toyota\",\"Model\":\"Corolla\",\"Year\":\"2008\"}\n {\"Make\":\"Honda\",\"Model\":\"Pilot\",\"Year\":\"2008\"}]\n};\n\nx.data[0].Make == \"Honda\"\nx['data'][0]['Make'] == \"Honda\"\n\nYou have your array/hash lookup backwards :) \n", "I'm not sure I understand your question, but...\nAssuming the above JSON is the var obj, you want:\nobj.data[0][\"Make\"] // == \"Honda\"\n\nIf you just want to refer to the field referenced by the first header, it would be something like:\nobj.data[0][obj.hdrs[0]] // == \"Honda\"\n\n", "perhaps try data[0].Make\n", "Close, you'd use \nvar x = data[0].Make;\nvar z = data[0].Model;\nvar y = data[0].Year;\n\n", "Your code as displayed is not syntactically correct; it needs some commas. I got this to work:\n$foo = {\"hdrs\": [\"Make\",\"Model\",\"Year\"],\n \"data\" : [ \n {\"Make\":\"Honda\",\"Model\":\"Accord\",\"Year\":\"2008\"},\n {\"Make\":\"Toyota\",\"Model\":\"Corolla\",\"Year\":\"2008\"},\n {\"Make\":\"Honda\",\"Model\":\"Pilot\",\"Year\":\"2008\"}]\n};\n\nand then I can access data as: \n$foo[\"data\"][0][\"make\"]\n\n", "With the help of the answers (and after getting the inside and outside loops correct) I got this to work:\nvar t = eval( \"(\" + request + \")\" ) ;\nfor (var i = 0; i < t.data.length; i++) {\n myTable += \"<tr>\";\n for (var j = 0; j < t.hdrs.length; j++) {\n myTable += \"<td>\" ;\n if (t.data[i][t.hdrs[j]] == \"\") {myTable += \"&nbsp;\" ; }\n else { myTable += t.data[i][t.hdrs[j]] ; }\n myTable += \"</td>\";\n }\n myTable += \"</tr>\";\n}\n\n" ]
[ 4, 2, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "json" ]
stackoverflow_0000086849_javascript_json.txt
Q: MFC IE embedded web browser wackiness I have this modeless MFC dialog which embeds an Internet Explorer web browser control. The control is derived straight from CWnd with ActiveX wrappers generated by Visual Studio, and I map it to the CDialog using only a DDX_Control(pDX, IDC_EXPLORER, m_explorer);. I have 2 problems. Problem #1: Being modeless, I start and stop the dialog at my own pleasure using new/Create(), then DestroyWindow()/delete(in PostNcDestroy). Trouble begins when the IE control starts loading a Flash video (regular YouTube stuff): when one closes, thus destroying the dialog, the video still loads! Right until fully cached. The Flash ActiveX thread still lingers and continues to run even when the parent dialog has passed PostNcDestroy and all memory was freed. What to do? How do you trully 'kill' that child web control and all its threads? Problem #2: The web browser control covers the whole area of the dialog. I cannot intercept any OnMouseMove() - in the parent dialog or in the web browser mapping class! What gives? Thanks! "Cleanup" "delete this" in PostNcDestroy() - and calling the base func of course. Should it be more? What? Shouldn't the dialog gracefully take care of its children? I tried to explicitly call DestroyWindow on the web control, or send/post him messages like WM_DESTROY, WM_CLOSE, even WM_QUIT - but nothing - same deal. Problem #2: No, like indented, the control takes all space and it's on top so I guess any mouse action doesn't get transmitted 'bellow' :)? But then why doesn't his own OnMouseMove get called? Because it goes straight from CWnd? I'm lost... A: problem 1) try myBrowser.navigate("about:blank") before destroying the window.
MFC IE embedded web browser wackiness
I have this modeless MFC dialog which embeds an Internet Explorer web browser control. The control is derived straight from CWnd with ActiveX wrappers generated by Visual Studio, and I map it to the CDialog using only a DDX_Control(pDX, IDC_EXPLORER, m_explorer);. I have 2 problems. Problem #1: Being modeless, I start and stop the dialog at my own pleasure using new/Create(), then DestroyWindow()/delete(in PostNcDestroy). Trouble begins when the IE control starts loading a Flash video (regular YouTube stuff): when one closes, thus destroying the dialog, the video still loads! Right until fully cached. The Flash ActiveX thread still lingers and continues to run even when the parent dialog has passed PostNcDestroy and all memory was freed. What to do? How do you truly 'kill' that child web control and all its threads? Problem #2: The web browser control covers the whole area of the dialog. I cannot intercept any OnMouseMove() - in the parent dialog or in the web browser mapping class! What gives? Thanks! "Cleanup" "delete this" in PostNcDestroy() - and calling the base func of course. Should it be more? What? Shouldn't the dialog gracefully take care of its children? I tried to explicitly call DestroyWindow on the web control, or send/post it messages like WM_DESTROY, WM_CLOSE, even WM_QUIT - but nothing - same deal. Problem #2: No, as indicated, the control takes all space and it's on top so I guess any mouse action doesn't get transmitted 'below' :)? But then why doesn't its own OnMouseMove get called? Because it goes straight from CWnd? I'm lost...
[ "problem 1) try myBrowser.navigate(\"about:blank\") before destroying the window.\n" ]
[ 3 ]
[]
[]
[ "c++", "dialog", "internet_explorer", "mfc" ]
stackoverflow_0000056145_c++_dialog_internet_explorer_mfc.txt
Q: Best way to handle null when writing equals operator Possible Duplicate: How do I check for nulls in an '==' operator overload without infinite recursion? When I overload the == operator for objects I typically write something like this: public static bool operator ==(MyObject uq1, MyObject uq2) { if (((object)uq1 == null) || ((object)uq2 == null)) return false; return uq1.Field1 == uq2.Field1 && uq1.Field2 == uq2.Field2; } If you don't down-cast to object the function recurses into itself but I have to wonder if there isn't a better way? A: As Microsoft says, A common error in overloads of operator == is to use (a == b), (a == null), or (b == null) to check for reference equality. This instead results in a call to the overloaded operator ==, causing an infinite loop. Use ReferenceEquals or cast the type to Object, to avoid the loop. So use ReferenceEquals(a, null) || ReferenceEquals(b, null) is one possibility, but casting to object is just as good (is actually equivalent, I believe). So yes, it seems there should be a better way, but the method you use is the one recommended. However, as has been pointed out, you really SHOULD override Equals as well when overriding ==. With LINQ providers being written in different languages and doing expression resolution at runtime, who knows when you'll be bit by not doing it even if you own all the code yourself. A: ReferenceEquals(object obj1, object obj2) A: @neouser99: That's the right solution, however the part that is missed is that when overriding the equality operator (the operator ==) you should also override the Equals function and simply make the operator call the function. Not all .NET languages support operator overloading, hence the reason for overriding the Equals function. A: if ((object)uq1 == null) return ((object)uq2 == null) else if ((object)uq2 == null) return false; else //return normal comparison This compares them as equal when both are null. A: Just use Resharper to create you Equals & GetHashCode methods. It creates the most comprehensive code for this purpose. Update I didn't post it on purpose - I prefer people to use Resharper's function instead of copy-pasting, because the code changes from class to class. As for developing C# without Resharper - I don't understand how you live, man. Anyway, here is the code for a simple class (Generated by Resharper 3.0, the older version - I have 4.0 at work, I don't currently remember if it creates identical code) public class Foo : IEquatable<Foo> { public static bool operator !=(Foo foo1, Foo foo2) { return !Equals(foo1, foo2); } public static bool operator ==(Foo foo1, Foo foo2) { return Equals(foo1, foo2); } public bool Equals(Foo foo) { if (foo == null) return false; return y == foo.y && x == foo.x; } public override bool Equals(object obj) { if (ReferenceEquals(this, obj)) return true; return Equals(obj as Foo); } public override int GetHashCode() { return y + 29*x; } private int y; private int x; }
Best way to handle null when writing equals operator
Possible Duplicate: How do I check for nulls in an '==' operator overload without infinite recursion? When I overload the == operator for objects I typically write something like this: public static bool operator ==(MyObject uq1, MyObject uq2) { if (((object)uq1 == null) || ((object)uq2 == null)) return false; return uq1.Field1 == uq2.Field1 && uq1.Field2 == uq2.Field2; } If you don't down-cast to object the function recurses into itself but I have to wonder if there isn't a better way?
[ "As Microsoft says,\n\nA common error in overloads of\n operator == is to use (a == b), (a ==\n null), or (b == null) to check for\n reference equality. This instead\n results in a call to the overloaded\n operator ==, causing an infinite loop.\n Use ReferenceEquals or cast the type\n to Object, to avoid the loop.\n\nSo use ReferenceEquals(a, null) || ReferenceEquals(b, null) is one possibility, but casting to object is just as good (is actually equivalent, I believe). \nSo yes, it seems there should be a better way, but the method you use is the one recommended.\nHowever, as has been pointed out, you really SHOULD override Equals as well when overriding ==. With LINQ providers being written in different languages and doing expression resolution at runtime, who knows when you'll be bit by not doing it even if you own all the code yourself.\n", "ReferenceEquals(object obj1, object obj2)\n", "@neouser99: That's the right solution, however the part that is missed is that when overriding the equality operator (the operator ==) you should also override the Equals function and simply make the operator call the function. Not all .NET languages support operator overloading, hence the reason for overriding the Equals function.\n", "if ((object)uq1 == null) \n return ((object)uq2 == null)\nelse if ((object)uq2 == null)\n return false;\nelse\n //return normal comparison\n\nThis compares them as equal when both are null.\n", "Just use Resharper to create you Equals & GetHashCode methods. It creates the most comprehensive code for this purpose.\nUpdate\nI didn't post it on purpose - I prefer people to use Resharper's function instead of copy-pasting, because the code changes from class to class. As for developing C# without Resharper - I don't understand how you live, man.\nAnyway, here is the code for a simple class (Generated by Resharper 3.0, the older version - I have 4.0 at work, I don't currently remember if it creates identical code)\npublic class Foo : IEquatable<Foo>\n{\n public static bool operator !=(Foo foo1, Foo foo2)\n {\n return !Equals(foo1, foo2);\n }\n\n public static bool operator ==(Foo foo1, Foo foo2)\n {\n return Equals(foo1, foo2);\n }\n\n public bool Equals(Foo foo)\n {\n if (foo == null) return false;\n return y == foo.y && x == foo.x;\n }\n\n public override bool Equals(object obj)\n {\n if (ReferenceEquals(this, obj)) return true;\n return Equals(obj as Foo);\n }\n\n public override int GetHashCode()\n {\n return y + 29*x;\n }\n\n private int y;\n private int x;\n}\n\n" ]
[ 7, 2, 2, 0, 0 ]
[ "But why don't you create an object member function? It can certainly not be called on a Null reference, so you're sure the first argument is not Null.\nIndeed, you lose the symmetricity of a binary operator, but still...\n(note on Purfideas' answer: Null might equal Null if needed as a sentinel value of an array)\nAlso think of the semantics of your == function: sometimes you really want to be able to choose whether you test for \n\nIdentity (points to same object)\nValue Equality\nEquivalence ( e.g. 1.000001 is equivalent to .9999999 )\n\n", "Follow the DB treatment:\nnull == <anything> is always false\n\n" ]
[ -1, -2 ]
[ "c#", "equals", "operator_overloading" ]
stackoverflow_0000086947_c#_equals_operator_overloading.txt
Q: Numbering Regex Submatches Is there a canonical ordering of submatch expressions in a regular expression? For example: What is the order of the submatches in "(([0-9]{3}).([0-9]{3}).([0-9]{3}).([0-9]{3}))\s+([A-Z]+)" ? a. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+) (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3})) ([A-Z]+) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) b. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+) (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3})) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([A-Z]+) or c. somthin' else. A: They tend to be numbered in the order the capturing parens start, left to right. Therefore, option b. A: In Perl 5 regular expressions, answer b is correct. Submatch groupings are stored in order of open-parentheses. Many other regular expression engines take their cues from Perl, but you would have to look up individual implementations to be sure. I'd suggest the book Mastering Regular Expressions for a deeper understanding. A: You count opening parentheses, left to right. So the order would be (([0-9]{3}).([0-9]{3}).([0-9]{3}).([0-9]{3})) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([A-Z]+) At least this is what Perl would do. Other regex engines might have different rules.
Numbering Regex Submatches
Is there a canonical ordering of submatch expressions in a regular expression? For example: What is the order of the submatches in "(([0-9]{3}).([0-9]{3}).([0-9]{3}).([0-9]{3}))\s+([A-Z]+)" ? a. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+) (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3})) ([A-Z]+) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) b. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+) (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3})) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([0-9]{3}) ([A-Z]+) or c. somthin' else.
[ "They tend to be numbered in the order the capturing parens start, left to right. Therefore, option b.\n", "In Perl 5 regular expressions, answer b is correct. Submatch groupings are stored in order of open-parentheses.\nMany other regular expression engines take their cues from Perl, but you would have to look up individual implementations to be sure. I'd suggest the book Mastering Regular Expressions for a deeper understanding.\n", "You count opening parentheses, left to right. So the order would be\n(([0-9]{3}).([0-9]{3}).([0-9]{3}).([0-9]{3}))\n([0-9]{3})\n([0-9]{3})\n([0-9]{3})\n([0-9]{3})\n([A-Z]+)\n\nAt least this is what Perl would do. Other regex engines might have different rules.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "linearization", "regex", "tree_traversal" ]
stackoverflow_0000087330_linearization_regex_tree_traversal.txt
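A small illustration of the accepted ordering (option b) for the entry above, shown in C#, which numbers capture groups the same way as Perl: by the position of each opening parenthesis, left to right, with group 0 being the whole match. The sample input string is made up.

using System;
using System.Text.RegularExpressions;

class SubmatchOrder
{
    static void Main()
    {
        string pattern = @"(([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)";
        Match m = Regex.Match("192.168.001.042 ABC", pattern);
        for (int i = 0; i < m.Groups.Count; i++)
        {
            // Prints: 0 = whole match, 1 = dotted quad, 2-5 = octets, 6 = letters
            Console.WriteLine("{0}: {1}", i, m.Groups[i].Value);
        }
    }
}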
Q: In ASP.net Webforms how do you detect which Textbox someone pressed enter? In ASP.net Webforms how do you detect which Textbox someone pressed enter? Please no Javascript answers. I need to handle it all in the code behind using VB.NET. A: I suspect it cannot be done without javascript - when you hit enter, the browser submits the form - it doesn't submit what field had the focus. So unless you use JS to add that information to the form being submitted, you're out of luck. A: Why do you need to determine the which TextBox was pressed? Are you looking to see which TextBox was being focused so that you can trigger the proper button click event? If you are looking to do something like this, one trick I've done was to "group" the appropriate form elements within their own panel and then set the "DefaultButton" property accordingly. Doing this allows me to have a "Search by Name", "Search by Department", "Search by Id", etc. Textbox/Button combination on a single form and still allow the user to type their query parameter, hit Enter, and have the proper search method get invoked in the code behind. A: Without using Javascript, you just can't. That information is not conveyed from the client browser to the server. A: As far as I know there is no possible way for a server side script to detect that. It simply does not get sent to the server. It must be done client-side (i.e. With Javascript) and then sent to the server. A: I solved this for one site's search by looking at the Request.Form object, server side to see if the search box had a value. I did it in a base class that all my pages (or a base class for the masterpage) inherit from. If it has a value, the odds are pretty good somebody typed something in and hit enter and so I handled the search. A: In the event handler, the "source" object (the first parameter of the event handler) is the object raising the event. Type it to button and get the name, or use reflection to get information out of the non-typed object. In addition, if the control is a child of a web control that you do not have raise its own events (just saying...) then you can use OnBubbleEvent to determine what's going on. OnBubbleEvent also has a "source" parameter you can type, or use reflection on.
In ASP.net Webforms how do you detect which Textbox someone pressed enter?
In ASP.net Webforms how do you detect which Textbox someone pressed enter? Please no Javascript answers. I need to handle it all in the code behind using VB.NET.
[ "I suspect it cannot be done without javascript - when you hit enter, the browser submits the form - it doesn't submit what field had the focus. So unless you use JS to add that information to the form being submitted, you're out of luck.\n", "Why do you need to determine the which TextBox was pressed? Are you looking to see which TextBox was being focused so that you can trigger the proper button click event?\nIf you are looking to do something like this, one trick I've done was to \"group\" the appropriate form elements within their own panel and then set the \"DefaultButton\" property accordingly.\nDoing this allows me to have a \"Search by Name\", \"Search by Department\", \"Search by Id\", etc. Textbox/Button combination on a single form and still allow the user to type their query parameter, hit Enter, and have the proper search method get invoked in the code behind.\n", "Without using Javascript, you just can't. That information is not conveyed from the client browser to the server.\n", "As far as I know there is no possible way for a server side script to detect that. It simply does not get sent to the server. It must be done client-side (i.e. With Javascript) and then sent to the server.\n", "I solved this for one site's search by looking at the Request.Form object, server side to see if the search box had a value. I did it in a base class that all my pages (or a base class for the masterpage) inherit from. If it has a value, the odds are pretty good somebody typed something in and hit enter and so I handled the search.\n", "In the event handler, the \"source\" object (the first parameter of the event handler) is the object raising the event. Type it to button and get the name, or use reflection to get information out of the non-typed object.\nIn addition, if the control is a child of a web control that you do not have raise its own events (just saying...) then you can use OnBubbleEvent to determine what's going on. OnBubbleEvent also has a \"source\" parameter you can type, or use reflection on.\n" ]
[ 2, 2, 1, 1, 1, 1 ]
[]
[]
[ "asp.net", "webforms" ]
stackoverflow_0000087245_asp.net_webforms.txt
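A minimal sketch of the Panel/DefaultButton grouping suggested in the entry above. The control names are hypothetical, and the handlers are shown in C#; the VB.NET version is a direct translation. Pressing Enter inside a given panel's TextBox raises that panel's button click on the server, which tells you which box the user was in without writing any JavaScript yourself (ASP.NET emits the wiring for you).

<asp:Panel ID="pnlByName" runat="server" DefaultButton="btnByName">
    <asp:TextBox ID="txtName" runat="server" />
    <asp:Button ID="btnByName" runat="server" Text="Search" OnClick="btnByName_Click" />
</asp:Panel>
<asp:Panel ID="pnlById" runat="server" DefaultButton="btnById">
    <asp:TextBox ID="txtId" runat="server" />
    <asp:Button ID="btnById" runat="server" Text="Search" OnClick="btnById_Click" />
</asp:Panel>

protected void btnByName_Click(object sender, EventArgs e)
{
    // Enter was pressed while txtName's panel had the focus (or its button was clicked).
}

protected void btnById_Click(object sender, EventArgs e)
{
    // Enter was pressed in the other panel.
}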
Q: javascript library to show photo album I visited a few web sites in the past that had a set of photo thumbnails, and clicking on one of them created a cool effect of an expanding popup showing the full-size image. Is there any available free JavaScript library that will do this? I'm interested mostly in the popup effect and less in the rest of the album management. A: Lightbox is another popular one: Lightbox Project Page A: The thickbox plugin for jQuery will do what you want. A: check out jQuery http://jquery.com and then the LightBox plugin for jQuery: http://leandrovieira.com/projects/jquery/lightbox/
javascript library to show photo album
I visited a few web sites in the past that had a set of photo thumbnails, and clicking on one of them created a cool effect of an expanding popup showing the full-size image. Is there any available free JavaScript library that will do this? I'm interested mostly in the popup effect and less in the rest of the album management.
[ "Lightbox is another popular one:\nLightbox Project Page\n", "The thickbox plugin for jQuery will do what you want.\n", "check out jQuery http://jquery.com \nand then the LightBox plugin for jQuery: http://leandrovieira.com/projects/jquery/lightbox/\n" ]
[ 3, 2, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0000087373_javascript.txt
Q: Query Web Service for list of Messages? Is there a straightforward way to query a web service to see which messages it supports? The C# .NET application I'm working on needs to be able to handle an older version of the web service, which does not implement the message I'm trying to send. The web service does not expose a version number, so Plan B is to see if the message is defined. I'm assuming I can just make an HTTP request for the WSDL and parse it, but before I go down that path, I want to make sure there's not a simpler approach. Update: I've decided to get the WSDL and get messages directly. Here's the rough draft for getting all the messages: HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create( "http://your/web/service/here.asmx?WSDL" ); webRequest.PreAuthenticate = // details elided webRequest.Credentials = // details elided webRequest.Timeout = // details elided HttpWebResponse webResponse = (HttpWebResponse) webRequest.GetResponse(); XPathDocument xpathDocument = new XPathDocument( webResponse.GetResponseStream() ); XPathNavigator xpathNavigator = xpathDocument.CreateNavigator(); XmlNamespaceManager xmlNamespaceManager = new XmlNamespaceManager( new NameTable() ); xmlNamespaceManager.AddNamespace( "wsdl", "http://schemas.xmlsoap.org/wsdl/" ); foreach( XPathNavigator node in xpathNavigator.Select( "//wsdl:message/@name", xmlNamespaceManager ) ) { string messageName = node.Value; } A: Parsing the WSDL is probably the simplest way to do this. Using WCF, it's also possible to download the WSDL at runtime, essentially run svcutil on it through code, and end up with a dynamically generated proxy that you can check the structure of. See https://learn.microsoft.com/en-us/archive/blogs/vipulmodi/dynamic-programming-with-wcf for an example of a runtime-generated proxy. A: I'm pretty sure WSDL is the way to do this.
Query Web Service for list of Messages?
Is there a straightforward way to query a web service to see which messages it supports? The C# .NET application I'm working on needs to be able to handle an older version of the web service, which does not implement the message I'm trying to send. The web service does not expose a version number, so Plan B is to see if the message is defined. I'm assuming I can just make an HTTP request for the WSDL and parse it, but before I go down that path, I want to make sure there's not a simpler approach. Update: I've decided to get the WSDL and get messages directly. Here's the rough draft for getting all the messages: HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create( "http://your/web/service/here.asmx?WSDL" ); webRequest.PreAuthenticate = // details elided webRequest.Credentials = // details elided webRequest.Timeout = // details elided HttpWebResponse webResponse = (HttpWebResponse) webRequest.GetResponse(); XPathDocument xpathDocument = new XPathDocument( webResponse.GetResponseStream() ); XPathNavigator xpathNavigator = xpathDocument.CreateNavigator(); XmlNamespaceManager xmlNamespaceManager = new XmlNamespaceManager( new NameTable() ); xmlNamespaceManager.AddNamespace( "wsdl", "http://schemas.xmlsoap.org/wsdl/" ); foreach( XPathNavigator node in xpathNavigator.Select( "//wsdl:message/@name", xmlNamespaceManager ) ) { string messageName = node.Value; }
[ "Parsing the WSDL is probably the simplest way to do this. Using WCF, it's also possible to download the WSDL at runtime, essentially run svcutil on it through code, and end up with a dynamically generated proxy that you can check the structure of. See https://learn.microsoft.com/en-us/archive/blogs/vipulmodi/dynamic-programming-with-wcf for an example of a runtime-generated proxy.\n", "I'm pretty sure WSDL is the way to do this.\n" ]
[ 2, 0 ]
[]
[]
[ ".net", "c#", "soap", "web_services" ]
stackoverflow_0000087023_.net_c#_soap_web_services.txt
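Building on the rough draft in the entry above, the yes/no check the question actually needs can be done with the same navigator and namespace manager; the message name used here ("SomeNewMessage") is only a placeholder.

// Reuses the xpathNavigator and xmlNamespaceManager variables from the draft above.
static bool SupportsMessage(XPathNavigator nav, XmlNamespaceManager ns, string messageName)
{
    XPathNavigator node = nav.SelectSingleNode(
        "//wsdl:message[@name='" + messageName + "']", ns);
    return node != null;
}

// e.g. if (SupportsMessage(xpathNavigator, xmlNamespaceManager, "SomeNewMessage")) { ... }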
Q: LINQ to SQL in Visual Studio 2005 I normally run VS 2008 at home and LINQ is built in. At work we are still using VS 2005 and I have the opportunity to start a new project that I would like to use LINQ to SQL. After doing some searching all I could come up with was the MAY 2006 CTP of LINQ would have to be installed for LINQ to work in VS 2005. Does someone know the proper add ins or updates I would need to install to use LINQ in VS 2005 (preferably without having to use the CTP mentioned above). A: You can reference System.Data.Linq.dll and System.Core.dll, and set your build target for C# 3.0 or the latest VB compiler, but everything else would have to be mapped manually (no designer support in VS2005 in LINQ to SQL RTM). A: It's no longer legal to use the May CTP (the beta software). It's not legal to deploy System.Core.dll (among others) without installing .Net 3.5 The best way to do LINQ in VS2005 is to use LINQBridge for LinqToObjects, and to use simple table adapters or some other data access method to punt your data into objects (for further in-memory querying). Also note: LinqToObjects expects Func(T) - which are essentially delegate types. LinqToSQL requires Expression(Func(T)) - which are expression trees and much harder to construct without the lambda syntax.
LINQ to SQL in Visual Studio 2005
I normally run VS 2008 at home and LINQ is built in. At work we are still using VS 2005 and I have the opportunity to start a new project in which I would like to use LINQ to SQL. After doing some searching, all I could come up with was that the May 2006 CTP of LINQ would have to be installed for LINQ to work in VS 2005. Does someone know the proper add-ins or updates I would need to install to use LINQ in VS 2005 (preferably without having to use the CTP mentioned above)?
[ "You can reference System.Data.Linq.dll and System.Core.dll, and set your build target for C# 3.0 or the latest VB compiler, but everything else would have to be mapped manually (no designer support in VS2005 in LINQ to SQL RTM).\n", "It's no longer legal to use the May CTP (the beta software).\nIt's not legal to deploy System.Core.dll (among others) without installing .Net 3.5\nThe best way to do LINQ in VS2005 is to use LINQBridge for LinqToObjects, and to use simple table adapters or some other data access method to punt your data into objects (for further in-memory querying).\nAlso note: LinqToObjects expects Func(T) - which are essentially delegate types. LinqToSQL requires Expression(Func(T)) - which are expression trees and much harder to construct without the lambda syntax.\n" ]
[ 2, 2 ]
[]
[]
[ "linq", "linq_to_sql", "visual_studio_2005" ]
stackoverflow_0000086262_linq_linq_to_sql_visual_studio_2005.txt
Q: Scaling cheaply: MySQL and MS SQL How cheap can MySQL be compared to MS SQL when you have tons of data (and joins/search)? Consider a site like stackoverflow full of Q&As already and after getting dugg. My ASP.NET sites are currently on SQL Server Express so I don't have any idea how cost compares in the long run. Although after a quick research, I'm starting to envy the savings MySQL folks get. A: MSSQL Standard Edition (32 or 64 bit) will cost around $5K per CPU socket. 64 bit will allow you to use as much RAM as you need. Enterprise Edition is not really necessary for most deployments, so don't worry about the $20K you would need for that license. MySQL is only free if you forego a lot of the useful tools offered with the licenses, and it's probably (at least as of 2008) going to be a little more work to get it to scale like Sql Server. In the long run I think you will spend much more on hardware and people than you will on just the licenses. If you need to scale, then you will probably have the cash flow to handle $5K here and there. A: The performance benefits of MS SQL over MySQL are fairly negligible, especially if you mitigate them with server and client side optimzations like server caching (in RAM), client caching (cache and expires headers) and gzip compression. A: I know that stackoverflow has had problems with deadlocks from reads/writes coming at odd intervals but they're claiming their architecture (MSSQL) is holding up fine. This was before the public beta of course and according to Jeff's twitter earlier today: the range of top 32 newest/modified questions was about 20 minutes in the private beta; now it's about 2 minutes. That the site hasn't crashed yet is a testament to the database (as well as good coding and testing). But why not post some specific numbers about your site? A: MySQL is extremely cheap when you have the distro (or staff to build) that carries MySQL Enterprise edition. This is a High Availability version which offers multi-master replication over many servers. Pros are low (license-) costs after initial purchase of hardware (Gigs of RAM needed!) and time to set up. The drawbacks are suboptimal performance with many joins, no full-text indexing, stored procesures (I think) and one need to replicate grants to every master node. Yet it's easier to run than the replication/proxy balancing setup that's available for PostgreSQL.
Scaling cheaply: MySQL and MS SQL
How cheap can MySQL be compared to MS SQL when you have tons of data (and joins/search)? Consider a site like stackoverflow full of Q&As already and after getting dugg. My ASP.NET sites are currently on SQL Server Express so I don't have any idea how cost compares in the long run. Although after a quick research, I'm starting to envy the savings MySQL folks get.
[ "MSSQL Standard Edition (32 or 64 bit) will cost around $5K per CPU socket. 64 bit will allow you to use as much RAM as you need. Enterprise Edition is not really necessary for most deployments, so don't worry about the $20K you would need for that license.\nMySQL is only free if you forego a lot of the useful tools offered with the licenses, and it's probably (at least as of 2008) going to be a little more work to get it to scale like Sql Server.\nIn the long run I think you will spend much more on hardware and people than you will on just the licenses. If you need to scale, then you will probably have the cash flow to handle $5K here and there.\n", "The performance benefits of MS SQL over MySQL are fairly negligible, especially if you mitigate them with server and client side optimzations like server caching (in RAM), client caching (cache and expires headers) and gzip compression.\n", "I know that stackoverflow has had problems with deadlocks from reads/writes coming at odd intervals but they're claiming their architecture (MSSQL) is holding up fine. This was before the public beta of course and according to Jeff's twitter earlier today:\n\nthe range of top 32 newest/modified\n questions was about 20 minutes in the\n private beta; now it's about 2\n minutes.\n\nThat the site hasn't crashed yet is a testament to the database (as well as good coding and testing).\nBut why not post some specific numbers about your site?\n", "MySQL is extremely cheap when you have the distro (or staff to build) that carries MySQL Enterprise edition. This is a High Availability version which offers multi-master replication over many servers.\nPros are low (license-) costs after initial purchase of hardware (Gigs of RAM needed!) and time to set up. \nThe drawbacks are suboptimal performance with many joins, no full-text indexing, stored procesures (I think) and one need to replicate grants to every master node. \nYet it's easier to run than the replication/proxy balancing setup that's available for PostgreSQL.\n" ]
[ 5, 3, 2, 1 ]
[]
[]
[ "mysql", "scalability", "scaling", "sql", "sql_server" ]
stackoverflow_0000076762_mysql_scalability_scaling_sql_sql_server.txt
Q: How do I prevent Flash's URLRequest from escaping the url? I load some XML from a servlet from my Flex application like this: _loader = new URLLoader(); _loader.load(new URLRequest(_servletURL+"?do=load&id="+_id)); As you can imagine _servletURL is something like http://foo.bar/path/to/servlet In some cases, this URL contains accented characters (long story). I pass the unescaped string to URLRequest, but it seems that flash escapes it and calls the escaped URL, which is invalid. Ideas? A: My friend Luis figured it out: You should use encodeURI does the UTF8URL encoding http://livedocs.adobe.com/flex/3/langref/package.html#encodeURI() but not unescape because it unescapes to ASCII see http://livedocs.adobe.com/flex/3/langref/package.html#unescape() I think that is where we are getting a %E9 in the URL instead of the expected %C3%A9. http://www.w3schools.com/TAGS/ref_urlencode.asp A: I'm not sure if this will be any different, but this is a cleaner way of achieving the same URLRequest: var request:URLRequest = new URLRequest(_servletURL) request.method = URLRequestMethod.GET; var reqData:Object = new Object(); reqData.do = "load"; reqData.id = _id; request.data = reqData; _loader = new URLLoader(request); A: From the livedocs: http://livedocs.adobe.com/flex/3/langref/flash/net/URLRequest.html Creates a URLRequest object. If System.useCodePage is true, the request is encoded using the system code page, rather than Unicode. If System.useCodePage is false, the request is encoded using Unicode, rather than the system code page. This page has more information: http://livedocs.adobe.com/flex/3/html/help.html?content=18_Client_System_Environment_3.html but basically you just need to add this to a function that will be run before the URLRequest (I would probably put it in a creationComplete event) System.useCodePage = false;
How do I prevent Flash's URLRequest from escaping the url?
I load some XML from a servlet from my Flex application like this: _loader = new URLLoader(); _loader.load(new URLRequest(_servletURL+"?do=load&id="+_id)); As you can imagine _servletURL is something like http://foo.bar/path/to/servlet In some cases, this URL contains accented characters (long story). I pass the unescaped string to URLRequest, but it seems that flash escapes it and calls the escaped URL, which is invalid. Ideas?
[ "My friend Luis figured it out:\nYou should use encodeURI does the UTF8URL encoding\nhttp://livedocs.adobe.com/flex/3/langref/package.html#encodeURI()\nbut not unescape because it unescapes to ASCII see\nhttp://livedocs.adobe.com/flex/3/langref/package.html#unescape()\nI think that is where we are getting a %E9 in the URL instead of the expected %C3%A9.\nhttp://www.w3schools.com/TAGS/ref_urlencode.asp\n", "I'm not sure if this will be any different, but this is a cleaner way of achieving the same URLRequest:\nvar request:URLRequest = new URLRequest(_servletURL)\nrequest.method = URLRequestMethod.GET;\nvar reqData:Object = new Object();\n\nreqData.do = \"load\";\nreqData.id = _id;\nrequest.data = reqData;\n\n_loader = new URLLoader(request); \n\n", "From the livedocs: http://livedocs.adobe.com/flex/3/langref/flash/net/URLRequest.html\n\nCreates a URLRequest object. If System.useCodePage is true, the request is encoded using the system code page, rather than Unicode. If System.useCodePage is false, the request is encoded using Unicode, rather than the system code page.\n\nThis page has more information: http://livedocs.adobe.com/flex/3/html/help.html?content=18_Client_System_Environment_3.html\nbut basically you just need to add this to a function that will be run before the URLRequest (I would probably put it in a creationComplete event)\nSystem.useCodePage = false;\n" ]
[ 5, 4, 0 ]
[]
[]
[ "apache_flex", "flash", "urlrequest" ]
stackoverflow_0000062437_apache_flex_flash_urlrequest.txt
Q: What's the best way to determine if a temporary table exists in SQL Server? When writing a T-SQL script that I plan on re-running, often times I use temporary tables to store temporary data. Since the temp table is created on the fly, I'd like to be able to drop that table only if it exists (before I create it). I'll post the method that I use, but I'd like to see if there is a better way. A: IF Object_Id('TempDB..#TempTable') IS NOT NULL BEGIN DROP TABLE #TempTable END A: The OBJECT_ID function returns the internal object id for the given object name and type. 'tempdb..#t1' refers to the table #t1 in the tempdb database. 'U' is for user-defined table. IF OBJECT_ID('tempdb..#t1', 'U') IS NOT NULL DROP TABLE #t1 CREATE TABLE #t1 ( id INT IDENTITY(1,1), msg VARCHAR(255) ) A: SELECT name FROM sysobjects WHERE type = 'U' AND name = 'TempTable'
What's the best way to determine if a temporary table exists in SQL Server?
When writing a T-SQL script that I plan on re-running, often times I use temporary tables to store temporary data. Since the temp table is created on the fly, I'd like to be able to drop that table only if it exists (before I create it). I'll post the method that I use, but I'd like to see if there is a better way.
[ "IF Object_Id('TempDB..#TempTable') IS NOT NULL\nBEGIN\n DROP TABLE #TempTable\nEND\n\n", "The OBJECT_ID function returns the internal object id for the given object name and type. 'tempdb..#t1' refers to the table #t1 in the tempdb database. 'U' is for user-defined table.\nIF OBJECT_ID('tempdb..#t1', 'U') IS NOT NULL\n DROP TABLE #t1\n\nCREATE TABLE #t1\n(\n id INT IDENTITY(1,1),\n msg VARCHAR(255)\n)\n\n", "SELECT name\nFROM sysobjects\nWHERE type = 'U' AND name = 'TempTable'\n\n" ]
[ 27, 14, 0 ]
[]
[]
[ "sql_server" ]
stackoverflow_0000002649_sql_server.txt
Q: How might I pass variables through to cached content in PHP? Essentially I have a PHP page that calls out some other HTML to be rendered through an object's method. It looks like this: MY PHP PAGE: // some content... <?php $GLOBALS["topOfThePage"] = true; $this->renderSomeHTML(); ?> // some content... <?php $GLOBALS["topOfThePage"] = false; $this->renderSomeHTML(); ?> The first method call is cached, but I need renderSomeHTML() to display slightly different based upon its location in the page. I tried passing through to $GLOBALS, but the value doesn't change, so I'm assuming it is getting cached. Is this not possible without passing an argument through the method or by not caching it? Any help is appreciated. This is not my application -- it is Magento. Edit: This is Magento, and it looks to be using memcached. I tried to pass an argument through renderSomeHTML(), but when I use func_get_args() on the PHP include to be rendered, what comes out is not what I put into it. Edit: Further down the line I was able to "invalidate" the cache by calling a different method that pulled the same content and passing in an argument that turned off caching. Thanks everyone for your help. A: Obviously, you cannot. The whole point of caching is that the 'thing' you cache is not going to change. So you either: provide a parameter aviod caching invalidate the cache when you set a different parameter Or, you rewrite the cache mechanism yourself - to support some dynamic binding. A: Chaching is handled differently by different frameworks, so you'd have to help us out with some more information. But I also wonder if you could pass that as a parameter instead of using $GLOBALS. $this->renderSomeHTML(true); A: Your question seems unclear, but caching pretty much means 'stored so we don't have to calculate it again'. If you want the content to differ, you need to cache more results and pick the correct cached object one to send back. Need more info to give a better answer. What is caching the document, Smarty? And what do you mean by "its location in the page"? What is 'it'?
How might I pass variables through to cached content in PHP?
Essentially I have a PHP page that calls out some other HTML to be rendered through an object's method. It looks like this: MY PHP PAGE: // some content... <?php $GLOBALS["topOfThePage"] = true; $this->renderSomeHTML(); ?> // some content... <?php $GLOBALS["topOfThePage"] = false; $this->renderSomeHTML(); ?> The first method call is cached, but I need renderSomeHTML() to display slightly different based upon its location in the page. I tried passing through to $GLOBALS, but the value doesn't change, so I'm assuming it is getting cached. Is this not possible without passing an argument through the method or by not caching it? Any help is appreciated. This is not my application -- it is Magento. Edit: This is Magento, and it looks to be using memcached. I tried to pass an argument through renderSomeHTML(), but when I use func_get_args() on the PHP include to be rendered, what comes out is not what I put into it. Edit: Further down the line I was able to "invalidate" the cache by calling a different method that pulled the same content and passing in an argument that turned off caching. Thanks everyone for your help.
[ "Obviously, you cannot. The whole point of caching is that the 'thing' you cache is not going to change. So you either:\n\nprovide a parameter\naviod caching\ninvalidate the cache when you set a different parameter\n\nOr, you rewrite the cache mechanism yourself - to support some dynamic binding.\n", "Chaching is handled differently by different frameworks, so you'd have to help us out with some more information. But I also wonder if you could pass that as a parameter instead of using $GLOBALS. \n$this->renderSomeHTML(true);\n\n", "Your question seems unclear, but caching pretty much means 'stored so we don't have to calculate it again'. If you want the content to differ, you need to cache more results and pick the correct cached object one to send back.\nNeed more info to give a better answer. What is caching the document, Smarty? And what do you mean by \"its location in the page\"? What is 'it'?\n" ]
[ 3, 1, 0 ]
[]
[]
[ "caching", "php", "variables" ]
stackoverflow_0000087468_caching_php_variables.txt
Q: Iterating through all the cells in Excel VBA or VSTO 2005 I need to simply go through all the cells in a Excel Spreadsheet and check the values in the cells. The cells may contain text, numbers or be blank. I am not very familiar / comfortable working with the concept of 'Range'. Therefore, any sample codes would be greatly appreciated. (I did try to google it, but the code snippets I found didn't quite do what I needed) Thank you. A: If you only need to look at the cells that are in use you can use: sub IterateCells() For Each Cell in ActiveSheet.UsedRange.Cells 'do some stuff Next End Sub that will hit everything in the range from A1 to the last cell with data (the bottom right-most cell) A: Sub CheckValues1() Dim rwIndex As Integer Dim colIndex As Integer For rwIndex = 1 To 10 For colIndex = 1 To 5 If Cells(rwIndex, colIndex).Value <> 0 Then _ Cells(rwIndex, colIndex).Value = 0 Next colIndex Next rwIndex End Sub Found this snippet on http://www.java2s.com/Code/VBA-Excel-Access-Word/Excel/Checksvaluesinarange10rowsby5columns.htm It seems to be quite useful as a function to illustrate the means to check values in cells in an ordered fashion. Just imagine it as being a 2d Array of sorts and apply the same logic to loop through cells. A: If you're just looking at values of cells you can store the values in an array of variant type. It seems that getting the value of an element in an array can be much faster than interacting with Excel, so you can see some difference in performance using an array of all cell values compared to repeatedly getting single cells. Dim ValArray as Variant ValArray = Range("A1:IV" & Rows.Count).Value Then you can get a cell value just by checking ValArray( row , column ) A: You can use a For Each to iterate through all the cells in a defined range. Public Sub IterateThroughRange() Dim wb As Workbook Dim ws As Worksheet Dim rng As Range Dim cell As Range Set wb = Application.Workbooks(1) Set ws = wb.Sheets(1) Set rng = ws.Range("A1", "C3") For Each cell In rng.Cells cell.Value = cell.Address Next cell End Sub A: For a VB or C# app, one way to do this is by using Office Interop. This depends on which version of Excel you're working with. For Excel 2003, this MSDN article is a good place to start. Understanding the Excel Object Model from a Visual Studio 2005 Developer's Perspective You'll basically need to do the following: Start the Excel application. Open the Excel workbook. Retrieve the worksheet from the workbook by name or index. Iterate through all the Cells in the worksheet which were retrieved as a range. Sample (untested) code excerpt below for the last step. Excel.Range allCellsRng; string lowerRightCell = "IV65536"; allCellsRng = ws.get_Range("A1", lowerRightCell).Cells; foreach (Range cell in allCellsRng) { if (null == cell.Value2 || isBlank(cell.Value2)) { // Do something. } else if (isText(cell.Value2)) { // Do something. } else if (isNumeric(cell.Value2)) { // Do something. } } For Excel 2007, try this MSDN reference. A: There are several methods to accomplish this, each of which has advantages and disadvantages; First and foremost, you're going to need to have an instance of a Worksheet object, Application.ActiveSheet works if you just want the one the user is looking at. The Worksheet object has three properties that can be used to access cell data (Cells, Rows, Columns) and a method that can be used to obtain a block of cell data, (get_Range). 
Ranges can be resized and such, but you may need to use the properties mentioned above to find out where the boundaries of your data are. The advantage to a Range becomes apparent when you are working with large amounts of data because VSTO add-ins are hosted outside the boundaries of the Excel application itself, so all calls to Excel have to be passed through a layer with overhead; obtaining a Range allows you to get/set all of the data you want in one call which can have huge performance benefits, but it requires you to use explicit details rather than iterating through each entry. This MSDN forum post shows a VB.Net developer asking a question about getting the results of a Range as an array A: You basically can loop over a Range Get a sheet myWs = (Worksheet)MyWb.Worksheets[1]; Get the Range you're interested in If you really want to check every cell use Excel's limits The Excel 2007 "Big Grid" increases the maximum number of rows per worksheet from 65,536 to over 1 million, and the number of columns from 256 (IV) to 16,384 (XFD). from here http://msdn.microsoft.com/en-us/library/aa730921.aspx#Office2007excelPerf_BigGridIncreasedLimitsExcel and then loop over the range Range myBigRange = myWs.get_Range("A1", "A256"); string myValue; foreach(Range myCell in myBigRange ) { myValue = myCell.Value2.ToString(); } A: In Excel VBA, this function will give you the content of any cell in any worksheet. Function getCellContent(Byref ws As Worksheet, ByVal rowindex As Integer, ByVal colindex As Integer) as String getCellContent = CStr(ws.Cells(rowindex, colindex)) End Function So if you want to check the value of cells, just put the function in a loop, give it the reference to the worksheet you want and the row index and column index of the cell. Row index and column index both start from 1, meaning that cell A1 will be ws.Cells(1,1) and so on. A: My VBA skills are a little rusty, but this is the general idea of what I'd do. The easiest way to do this would be to iterate through a loop for every column: public sub CellProcessing() on error goto errHandler dim MAX_ROW as Integer 'how many rows in the spreadsheet dim i as Integer dim cols as String for i = 1 to MAX_ROW 'perform checks on the cell here 'access the cell with Range("A" & i) to get cell A1 where i = 1 next i exitHandler: exit sub errHandler: msgbox "Error " & err.Number & ": " & err.Description resume exitHandler end sub it seems that the color syntax highlighting doesn't like vba, but hopefully this will help somewhat (at least give you a starting point to work from). Brisketeer
Iterating through all the cells in Excel VBA or VSTO 2005
I need to simply go through all the cells in an Excel spreadsheet and check the values in the cells. The cells may contain text, numbers, or be blank. I am not very familiar / comfortable working with the concept of 'Range'. Therefore, any sample code would be greatly appreciated. (I did try to google it, but the code snippets I found didn't quite do what I needed.) Thank you.
[ "If you only need to look at the cells that are in use you can use:\nsub IterateCells()\n\n For Each Cell in ActiveSheet.UsedRange.Cells\n 'do some stuff\n Next\n\nEnd Sub\n\nthat will hit everything in the range from A1 to the last cell with data (the bottom right-most cell)\n", "Sub CheckValues1()\n Dim rwIndex As Integer\n Dim colIndex As Integer\n For rwIndex = 1 To 10\n For colIndex = 1 To 5\n If Cells(rwIndex, colIndex).Value <> 0 Then _\n Cells(rwIndex, colIndex).Value = 0\n Next colIndex\n Next rwIndex\nEnd Sub\n\nFound this snippet on http://www.java2s.com/Code/VBA-Excel-Access-Word/Excel/Checksvaluesinarange10rowsby5columns.htm It seems to be quite useful as a function to illustrate the means to check values in cells in an ordered fashion.\nJust imagine it as being a 2d Array of sorts and apply the same logic to loop through cells.\n", "If you're just looking at values of cells you can store the values in an array of variant type. It seems that getting the value of an element in an array can be much faster than interacting with Excel, so you can see some difference in performance using an array of all cell values compared to repeatedly getting single cells.\nDim ValArray as Variant\nValArray = Range(\"A1:IV\" & Rows.Count).Value\n\nThen you can get a cell value just by checking ValArray( row , column )\n", "You can use a For Each to iterate through all the cells in a defined range.\nPublic Sub IterateThroughRange()\n\nDim wb As Workbook\nDim ws As Worksheet\nDim rng As Range\nDim cell As Range\n\nSet wb = Application.Workbooks(1)\nSet ws = wb.Sheets(1)\nSet rng = ws.Range(\"A1\", \"C3\")\n\nFor Each cell In rng.Cells\n cell.Value = cell.Address\nNext cell\n\nEnd Sub\n\n", "For a VB or C# app, one way to do this is by using Office Interop. This depends on which version of Excel you're working with.\nFor Excel 2003, this MSDN article is a good place to start.\nUnderstanding the Excel Object Model from a Visual Studio 2005 Developer's Perspective \nYou'll basically need to do the following:\n\nStart the Excel application.\nOpen the Excel workbook.\nRetrieve the worksheet from the workbook by name or index.\nIterate through all the Cells in the worksheet which were retrieved as a range.\nSample (untested) code excerpt below for the last step.\n\n\n Excel.Range allCellsRng;\n string lowerRightCell = \"IV65536\";\n allCellsRng = ws.get_Range(\"A1\", lowerRightCell).Cells;\n foreach (Range cell in allCellsRng)\n {\n if (null == cell.Value2 || isBlank(cell.Value2))\n {\n // Do something.\n }\n else if (isText(cell.Value2))\n {\n // Do something.\n }\n else if (isNumeric(cell.Value2))\n {\n // Do something.\n }\n }\n\nFor Excel 2007, try this MSDN reference.\n", "There are several methods to accomplish this, each of which has advantages and disadvantages; First and foremost, you're going to need to have an instance of a Worksheet object, Application.ActiveSheet works if you just want the one the user is looking at.\nThe Worksheet object has three properties that can be used to access cell data (Cells, Rows, Columns) and a method that can be used to obtain a block of cell data, (get_Range).\nRanges can be resized and such, but you may need to use the properties mentioned above to find out where the boundaries of your data are. 
The advantage to a Range becomes apparent when you are working with large amounts of data because VSTO add-ins are hosted outside the boundaries of the Excel application itself, so all calls to Excel have to be passed through a layer with overhead; obtaining a Range allows you to get/set all of the data you want in one call which can have huge performance benefits, but it requires you to use explicit details rather than iterating through each entry.\nThis MSDN forum post shows a VB.Net developer asking a question about getting the results of a Range as an array\n", "You basically can loop over a Range\nGet a sheet\nmyWs = (Worksheet)MyWb.Worksheets[1];\n\nGet the Range you're interested in If you really want to check every cell use Excel's limits \n\nThe Excel 2007 \"Big Grid\" increases\n the maximum number of rows per\n worksheet from 65,536 to over 1\n million, and the number of columns\n from 256 (IV) to 16,384 (XFD).\n from here http://msdn.microsoft.com/en-us/library/aa730921.aspx#Office2007excelPerf_BigGridIncreasedLimitsExcel\n\nand then loop over the range\n Range myBigRange = myWs.get_Range(\"A1\", \"A256\");\n\n string myValue;\n\n foreach(Range myCell in myBigRange )\n {\n myValue = myCell.Value2.ToString();\n }\n\n", "In Excel VBA, this function will give you the content of any cell in any worksheet.\nFunction getCellContent(Byref ws As Worksheet, ByVal rowindex As Integer, ByVal colindex As Integer) as String\n getCellContent = CStr(ws.Cells(rowindex, colindex))\nEnd Function\n\nSo if you want to check the value of cells, just put the function in a loop, give it the reference to the worksheet you want and the row index and column index of the cell. Row index and column index both start from 1, meaning that cell A1 will be ws.Cells(1,1) and so on.\n", "My VBA skills are a little rusty, but this is the general idea of what I'd do.\nThe easiest way to do this would be to iterate through a loop for every column:\npublic sub CellProcessing()\non error goto errHandler\n\n dim MAX_ROW as Integer 'how many rows in the spreadsheet\n dim i as Integer\n dim cols as String\n\n for i = 1 to MAX_ROW\n 'perform checks on the cell here\n 'access the cell with Range(\"A\" & i) to get cell A1 where i = 1\n next i\n\nexitHandler:\n exit sub\nerrHandler:\n msgbox \"Error \" & err.Number & \": \" & err.Description\n resume exitHandler\nend sub\n\nit seems that the color syntax highlighting doesn't like vba, but hopefully this will help somewhat (at least give you a starting point to work from).\n\nBrisketeer\n\n" ]
[ 63, 7, 5, 4, 2, 1, 1, 0, 0 ]
[]
[]
[ "excel", "vba", "vsto" ]
stackoverflow_0000073785_excel_vba_vsto.txt
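As an aside on the Interop answer in the record above: its excerpt leaves isBlank, isText and isNumeric undefined. The following is a minimal C# sketch of the same idea — iterating a worksheet's UsedRange and branching on blank / numeric / text cells. The workbook path, sheet index and branch bodies are placeholder assumptions, not anything taken from the original answers.

    using System;
    using Excel = Microsoft.Office.Interop.Excel;

    class UsedRangeWalker
    {
        static void Main()
        {
            // Assumption: the workbook path and sheet index are placeholders.
            var app = new Excel.Application();
            Excel.Workbook wb = app.Workbooks.Open(@"C:\temp\data.xls");
            var ws = (Excel.Worksheet)wb.Sheets[1];

            // UsedRange limits the loop to cells that actually contain data.
            foreach (Excel.Range cell in ws.UsedRange.Cells)
            {
                object v = cell.Value2;
                if (v == null || v.ToString().Trim().Length == 0)
                {
                    // blank cell
                }
                else if (v is double)
                {
                    // numeric cell (Value2 returns numbers and dates as double)
                }
                else
                {
                    // text cell
                }
            }

            wb.Close(false);
            app.Quit();
        }
    }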
Q: Is a software token a valid second factor in multi-factor security? We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users. Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"? Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough. A: I would say "no". I don't think you can really get the "something you have" part of multi-factor authentication without issuing something the end user can carry with them. If you "have" something, it implies it can be lost - not many users lose their entire desktop machines. The security of "something you have", after all, comes from the following: you would notice when you don't have it - a clear indication security has been compromised only 1 person can have it. So if you do, someone else doesn't Software tokens do not offer the same guarantees, and I would not in good conscience class it as something the user "has". A: While I am not sure it is a "valid" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing. Insofar as regarding "something the user has" vs "something the user knows", if it is something resident on the user PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under "things the user has". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is "something the user knows". With regards to commercial solutions out there already in existence: We use a product for windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations. A: A software token is a second factor, but it probably isn't as good choice a choice as a RSA fob. If the user's computer is compromised the attacker could silently copy the software token without leaving any trace it's been stolen (unlike a RSA fob where they'd have to take the fob itself, so the user has a chance to notice it's missing). A: I agree with @freespace that the the image is not part of the multi-factor authentication for the user. As you state the image is part of the anti-phishing scheme. I think that the image is actually a weak authentication of the system to the user. The image provides authentication to the user that the website is valid and not a fake phishing site. Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? The software based token system sounds like you may want to investigate the Kerberos protocol, http://en.wikipedia.org/wiki/Kerberos_(protocol). 
I am not sure if this would count as a multi-factor authentication, though. A: What you're describing is something the computer has, not the user. So you can supposedly (depending on implementation) be assured that it is the computer, but no assurance regarding the user... Now, since we're talking about remote login, perhaps the situation is personal laptops? In which case, the laptop is the something you have, and of course the password to it as something you know... Then all that remains is secure implementation, and that can work fine. A: Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offer mutual https authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks. You can find out more here: http://www.wikidsystems.com/learn-more/technology/mutual_authentication and http://en.wikipedia.org/wiki/Mutual_authentication and here is a tutorial on setting up mutual authentication to prevent phishing: http://www.howtoforge.net/prevent_phishing_with_mutual_authentication. The image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principals, it's pretty weak. What's to stop a MITM from presenting the image too? It's less than user-friendly IMO too.
Is a software token a valid second factor in multi-factor security?
We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users. Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"? Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough.
[ "I would say \"no\". I don't think you can really get the \"something you have\" part of multi-factor authentication without issuing something the end user can carry with them. If you \"have\" something, it implies it can be lost - not many users lose their entire desktop machines. The security of \"something you have\", after all, comes from the following:\n\nyou would notice when you don't have it - a clear indication security has been compromised\nonly 1 person can have it. So if you do, someone else doesn't\n\nSoftware tokens do not offer the same guarantees, and I would not in good conscience class it as something the user \"has\". \n", "While I am not sure it is a \"valid\" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing. \nInsofar as regarding \"something the user has\" vs \"something the user knows\", if it is something resident on the user PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under \"things the user has\". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is \"something the user knows\".\nWith regards to commercial solutions out there already in existence: We use a product for windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations.\n", "A software token is a second factor, but it probably isn't as good choice a choice as a RSA fob. If the user's computer is compromised the attacker could silently copy the software token without leaving any trace it's been stolen (unlike a RSA fob where they'd have to take the fob itself, so the user has a chance to notice it's missing). \n", "I agree with @freespace that the the image is not part of the multi-factor authentication for the user. As you state the image is part of the anti-phishing scheme. I think that the image is actually a weak authentication of the system to the user. The image provides authentication to the user that the website is valid and not a fake phishing site.\n\nIs it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? \n\nThe software based token system sounds like you may want to investigate the Kerberos protocol, http://en.wikipedia.org/wiki/Kerberos_(protocol). I am not sure if this would count as a multi-factor authentication, though.\n", "What you're describing is something the computer has, not the user. \nSo you can supposedly (depending on implementation) be assured that it is the computer, but no assurance regarding the user...\nNow, since we're talking about remote login, perhaps the situation is personal laptops? In which case, the laptop is the something you have, and of course the password to it as something you know... Then all that remains is secure implementation, and that can work fine.\n", "Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offer mutual https authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks. 
\nYou can find out more here:\nhttp://www.wikidsystems.com/learn-more/technology/mutual_authentication\nand\nhttp://en.wikipedia.org/wiki/Mutual_authentication\nand here is a tutorial on setting up mutual authentication to prevent phishing:\nhttp://www.howtoforge.net/prevent_phishing_with_mutual_authentication.\nThe image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principals, it's pretty weak. What's to stop a MITM from presenting the image too? It's less than user-friendly IMO too.\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "authentication", "language_agnostic" ]
stackoverflow_0000072540_authentication_language_agnostic.txt
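For concreteness (none of the answers above include code), a "software token" in this context usually means something like a time-based one-time-password generator running on the PC. The C# sketch below is purely illustrative — the shared secret, 30-second step and 6-digit output are assumptions in the spirit of RFC 6238, not anything from the original thread — and, as the accepted answer notes, a secret stored on the user's machine can be copied silently, which is exactly why it makes a weaker "something you have".

    using System;
    using System.Security.Cryptography;

    static class SoftToken
    {
        // Illustrative TOTP-style code generator; a real deployment would need
        // secure provisioning and storage of the shared secret.
        public static int CurrentCode(byte[] sharedSecret, int stepSeconds = 30)
        {
            long unixSeconds = (long)(DateTime.UtcNow -
                new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
            long counter = unixSeconds / stepSeconds;

            byte[] msg = BitConverter.GetBytes(counter);
            if (BitConverter.IsLittleEndian) Array.Reverse(msg); // counter is big-endian

            using (var hmac = new HMACSHA1(sharedSecret))
            {
                byte[] hash = hmac.ComputeHash(msg);
                int offset = hash[hash.Length - 1] & 0x0F;        // dynamic truncation
                int binary = ((hash[offset] & 0x7F) << 24)
                           | (hash[offset + 1] << 16)
                           | (hash[offset + 2] << 8)
                           | hash[offset + 3];
                return binary % 1000000;                          // 6-digit code
            }
        }
    }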
Q: Excel column names What column names cannot be used when creating an Excel spreadsheet with ADO. I have a statement that creates a page in a spreadsheet: CREATE TABLE [TableName] (Column string, Column2 string); I have found that using a column name of Date or Container will generate an error when the statement is executed. Does anyone have a complete (or partial) list of words that cannot be used as column names? This is for use in a user-driven environment and it would be better to "fix" the columns than to crash. My work-around for these is to replace any occurences of Date or Container with Date_ and Container_ respectively. A: Here are the reserved words for MS Query: http://support.microsoft.com/kb/125948 Cell naming rules: http://ezinearticles.com/?Rules-For-Naming-Cells-in-Microsoft-Excel&id=218607 A: It seems more like an issue with SQL reserved words. This is a good list A: You can use brackets for any fieldname, e.g.: CREATE TABLE [TableName] ([Date] string, [Container] string) Full example: using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\temp\\test.xls;Extended Properties='Excel 8.0;HDR=Yes'")) { conn.Open(); OleDbCommand cmd = new OleDbCommand("CREATE TABLE [TableName] ([Date] string, [Container] string)", conn); cmd.ExecuteNonQuery(); }
Excel column names
What column names cannot be used when creating an Excel spreadsheet with ADO? I have a statement that creates a page in a spreadsheet: CREATE TABLE [TableName] (Column string, Column2 string); I have found that using a column name of Date or Container will generate an error when the statement is executed. Does anyone have a complete (or partial) list of words that cannot be used as column names? This is for use in a user-driven environment and it would be better to "fix" the columns than to crash. My work-around for these is to replace any occurrences of Date or Container with Date_ and Container_ respectively.
[ "Here are the reserved words for MS Query:\nhttp://support.microsoft.com/kb/125948\nCell naming rules:\nhttp://ezinearticles.com/?Rules-For-Naming-Cells-in-Microsoft-Excel&id=218607\n", "It seems more like an issue with SQL reserved words. This is a good list\n", "You can use brackets for any fieldname, e.g.:\nCREATE TABLE [TableName] ([Date] string, [Container] string)\n\nFull example:\nusing (OleDbConnection conn = new OleDbConnection(\"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\\\temp\\\\test.xls;Extended Properties='Excel 8.0;HDR=Yes'\"))\n{\n conn.Open();\n OleDbCommand cmd = new OleDbCommand(\"CREATE TABLE [TableName] ([Date] string, [Container] string)\", conn);\n cmd.ExecuteNonQuery();\n}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "ado.net", "excel" ]
stackoverflow_0000087541_ado.net_excel.txt
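Since the question mentions a user-driven environment, one small defensive step — a suggestion building on the bracket answer above, not something from the original thread — is to quote every user-supplied column name before building the CREATE TABLE statement, so reserved words such as Date or Container never need renaming.

    // Hypothetical helper extending the bracket idea; whether Jet/ACE accepts ']'
    // inside a bracketed name at all is not guaranteed, so such names may still
    // need to be rejected or renamed rather than escaped.
    static string QuoteColumn(string name)
    {
        return "[" + name.Replace("]", "]]") + "]";
    }

    // Usage sketch:
    // string sql = "CREATE TABLE [TableName] (" + QuoteColumn("Date") + " string, "
    //            + QuoteColumn("Container") + " string)";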
Q: Linkbutton click event not running handler I'm creating a custom drop down list with AJAX dropdownextender. Inside my drop panel I have linkbuttons for my options. <asp:Label ID="ddl_Remit" runat="server" Text="Select remit address." Style="display: block; width: 300px; padding:2px; padding-right: 50px; font-family: Tahoma; font-size: 11px;" /> <asp:Panel ID="DropPanel" runat="server" CssClass="ContextMenuPanel" Style="display :none; visibility: hidden;"> <asp:LinkButton runat="server" ID="Option1z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> <asp:LinkButton runat="server" ID="Option2z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> <asp:LinkButton runat="server" ID="Option3z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" />--> </asp:Panel> <ajaxToolkit:DropDownExtender runat="server" ID="DDE" TargetControlID="ddl_Remit" DropDownControlID="DropPanel" /> And this works well. Now what I have to do is dynamically fill this dropdownlist. Here is my best attempt: private void fillRemitDDL() { //LinkButton Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter ta = new DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter(); DataTable dt = (DataTable)ta.GetData(int.Parse(this.SLID)); if (dt.Rows.Count > 0) { Panel ddl = this.FindControl("DropPanel") as Panel; ddl.Controls.Clear(); for (int x = 0; x < dt.Rows.Count; x++) { LinkButton lb = new LinkButton(); lb.Text = dt.Rows[x]["Remit3"].ToString().Trim() + "<br />" + dt.Rows[x]["Remit4"].ToString().Trim() + "<br />" + dt.Rows[x]["RemitZip"].ToString().Trim(); lb.CssClass = "ContextMenuItem"; lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); ddl.Controls.Add(lb); } } } My problem is that I cannot get the event to run script! I've tried the above code as well as replacing lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); with lb.Click += new EventHandler(OnSelect); and also lb.OnClientClick = "setDDL(" + lb.Text + ")"); I'm testing the the branches with Alerts on client-side and getting nothing. Edit: I would like to try adding the generic anchor but I think I can add the element to an asp.net control. Nor can I access a client-side div from server code to add it that way. I'm going to have to use some sort of control with an event. My setDLL function goes as follows: function setDDL(var) { alert(var); document.getElementById('ctl00_ContentPlaceHolder1_Scanline1_ddl_Remit').innerText = var; } Also I just took out the string variable in the function call (i.e. from lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); to lb.Attributes.Add("onclick", "setDDL()"); A: I'm not sure what your setDDL method does in your script but it should fire if one of the link buttons is clicked. I think you might be better off just inserting a generic html anchor though instead of a .net linkbutton as you will have no reference to the control on the server side. Then you can handle the data excahnge with your setDDL method. Furthermore you might want to quote the string you are placing inside the call to setDDL because will cause script issues (like not calling the method + page errors) given you are placing literal string data without quotes. 
A: Ok, I used Literals to create anchor tags with onclicks on them and that seems to be working great. Thanks a lot. A: the add should probably look like this (add the '' around the string and add a ; to the end of the javascript statement). lb.Attributes.Add("onclick", "setDDL('" + lb.Text + "');"); OR! set the OnClientClick property on the linkbutton.
Linkbutton click event not running handler
I'm creating a custom drop down list with AJAX dropdownextender. Inside my drop panel I have linkbuttons for my options. <asp:Label ID="ddl_Remit" runat="server" Text="Select remit address." Style="display: block; width: 300px; padding:2px; padding-right: 50px; font-family: Tahoma; font-size: 11px;" /> <asp:Panel ID="DropPanel" runat="server" CssClass="ContextMenuPanel" Style="display :none; visibility: hidden;"> <asp:LinkButton runat="server" ID="Option1z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> <asp:LinkButton runat="server" ID="Option2z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> <asp:LinkButton runat="server" ID="Option3z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" />--> </asp:Panel> <ajaxToolkit:DropDownExtender runat="server" ID="DDE" TargetControlID="ddl_Remit" DropDownControlID="DropPanel" /> And this works well. Now what I have to do is dynamically fill this dropdownlist. Here is my best attempt: private void fillRemitDDL() { //LinkButton Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter ta = new DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter(); DataTable dt = (DataTable)ta.GetData(int.Parse(this.SLID)); if (dt.Rows.Count > 0) { Panel ddl = this.FindControl("DropPanel") as Panel; ddl.Controls.Clear(); for (int x = 0; x < dt.Rows.Count; x++) { LinkButton lb = new LinkButton(); lb.Text = dt.Rows[x]["Remit3"].ToString().Trim() + "<br />" + dt.Rows[x]["Remit4"].ToString().Trim() + "<br />" + dt.Rows[x]["RemitZip"].ToString().Trim(); lb.CssClass = "ContextMenuItem"; lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); ddl.Controls.Add(lb); } } } My problem is that I cannot get the event to run script! I've tried the above code as well as replacing lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); with lb.Click += new EventHandler(OnSelect); and also lb.OnClientClick = "setDDL(" + lb.Text + ")"); I'm testing the the branches with Alerts on client-side and getting nothing. Edit: I would like to try adding the generic anchor but I think I can add the element to an asp.net control. Nor can I access a client-side div from server code to add it that way. I'm going to have to use some sort of control with an event. My setDLL function goes as follows: function setDDL(var) { alert(var); document.getElementById('ctl00_ContentPlaceHolder1_Scanline1_ddl_Remit').innerText = var; } Also I just took out the string variable in the function call (i.e. from lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); to lb.Attributes.Add("onclick", "setDDL()");
[ "I'm not sure what your setDDL method does in your script but it should fire if one of the link buttons is clicked. I think you might be better off just inserting a generic html anchor though instead of a .net linkbutton as you will have no reference to the control on the server side. Then you can handle the data excahnge with your setDDL method. Furthermore you might want to quote the string you are placing inside the call to setDDL because will cause script issues (like not calling the method + page errors) given you are placing literal string data without quotes.\n", "Ok, I used Literals to create anchor tags with onclicks on them and that seems to be working great. Thanks alot.\n", "the add should probably look like this (add the '' around the string and add a ; to the end of the javascript statement).\nlb.Attributes.Add(\"onclick\", \"setDDL('\" + lb.Text + \"');\");\n\nOR!\nset the OnClientClick property on the linkbutton. \n" ]
[ 1, 1, 0 ]
[]
[]
[ "asp.net_ajax", "c#", "events", "webforms" ]
stackoverflow_0000086563_asp.net_ajax_c#_events_webforms.txt
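Tying the last two answers above together: the asker's accepted route was emitting plain anchors via Literal controls, and the other answer's point was that the onclick argument must be a quoted JavaScript string. Below is a hedged C# sketch of that combination. It reuses the question's dt and ddl variables, the quote-escaping shown is minimal, and note that the client-side function would also need a parameter name other than var, which is a reserved word in JavaScript.

    // Inside fillRemitDDL(), replacing the LinkButton loop; names mirror the question.
    for (int x = 0; x < dt.Rows.Count; x++)
    {
        string text = dt.Rows[x]["Remit3"].ToString().Trim() + "<br />" +
                      dt.Rows[x]["Remit4"].ToString().Trim() + "<br />" +
                      dt.Rows[x]["RemitZip"].ToString().Trim();

        // Escape single quotes so the emitted JavaScript string stays well-formed.
        string jsArg = text.Replace("'", "\\'");

        ddl.Controls.Add(new System.Web.UI.LiteralControl(
            "<a href=\"javascript:void(0);\" class=\"ContextMenuItem\" " +
            "onclick=\"setDDL('" + jsArg + "');\">" + text + "</a>"));
    }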
Q: How best to alleviate scenarios that trigger non-incremental linking (MSVS) While incremental linking addresses much of the time spent linking, even for very large projects, I find the incremental linker in MSVS to be pretty haphazard. (I'm currently using 2003 atm, would love to hear if 2005/8 addressed any of this.) My list of known triggers include: changing anything external to the main .exe project will trigger a full link adding static variables had a 50% chance of triggering a full link and this list is certainly not inclusive. What can I do to avoid full links? So far, the only diagnosis tool i've found so far is /test in the linker command line options and it's terrible. What solutions are out there for diagnosing triggers for full re-links? A: Minimizing the number of projects in your solution makes the problem a little better. And of course all the normal build speed-ups will work, like reducing includes and shrinking obj files size. A: I'm using 2008; and while I have only used it for small->medium sized projects, so far I haven't experienced any unexpected full links. I haven't used 03, but in my opinion 08 seems to be far better then 05.
How best to alleviate scenarios that trigger non-incremental linking (MSVS)
While incremental linking addresses much of the time spent linking, even for very large projects, I find the incremental linker in MSVS to be pretty haphazard. (I'm currently using 2003 atm, would love to hear if 2005/8 addressed any of this.) My list of known triggers includes: changing anything external to the main .exe project will trigger a full link adding static variables had a 50% chance of triggering a full link and this list is certainly not exhaustive. What can I do to avoid full links? So far, the only diagnosis tool I've found is /test in the linker command line options and it's terrible. What solutions are out there for diagnosing triggers for full re-links?
[ "Minimizing the number of projects in your solution makes the problem a little better. And of course all the normal build speed-ups will work, like reducing includes and shrinking obj files size.\n", "I'm using 2008; and while I have only used it for small->medium sized projects, so far I haven't experienced any unexpected full links. \nI haven't used 03, but in my opinion 08 seems to be far better then 05.\n" ]
[ 1, 0 ]
[]
[]
[ "c", "c++", "linker" ]
stackoverflow_0000086751_c_c++_linker.txt
Q: What is your preferred method for moving directory structures around in Subversion? I have recently run into an issue where I wanted to add a folder to the directory structure of my project that would become the new 'root' directory for the previously housed files. I've been getting help in a related thread but I wanted to put out a more open ended question to see what a best practice might be. Essentially, my situation was that I was working on development and realized that I wanted to have a resources directory that would not be part of the main development thrust but would still be versioned (to hold mockups and such). So I wanted to add a resources directory and an implementation directory, the implementation directory being the new root directory. How would you go about moving all of the previous directory structure into the implementation directory? A: You can do it pretty easily if you use some GUI for SVN. Personally I love TortoiseSVN for when I'm working in Windows. You just open up the "Repository Browser", right-click on some folder, and choose "Move...". Or, you have the option of doing it straight from within Windows Explorer, drag the files/folders you want to move with the RIGHT mouse button, when you drop them in their new location you'll get a menu, one of the options is "Move in SVN". A: Moves in subversion are done by removing the old files and adding the new ones, so there's nothing special to do. The series of 'svn mv' commands in a loop recommended in the other question should probably work just fine.
What is your preferred method for moving directory structures around in Subversion?
I have recently run into an issue where I wanted to add a folder to the directory structure of my project that would become the new 'root' directory for the previously housed files. I've been getting help in a related thread but I wanted to put out a more open ended question to see what a best practice might be. Essentially, my situation was that I was working on development and realized that I wanted to have a resources directory that would not be part of the main development thrust but would still be versioned (to hold mockups and such). So I wanted to add a resources directory and an implementation directory, the implementation directory being the new root directory. How would you go about moving all of the previous directory structure into the implementation directory?
[ "You can do it pretty easily if you use some GUI for SVN. Personally I love TortoiseSVN for when I'm working in Windows. You just open up the \"Repository Browser\", right-click on some folder, and choose \"Move...\". Or, you have the option of doing it straight from within Windows Explorer, drag the files/folders you want to move with the RIGHT mouse button, when you drop them in their new location you'll get a menu, one of the options is \"Move in SVN\".\n", "Moves in subversion are done by removing the old files and adding the new ones, so there's nothing special to do. The series of 'svn mv' commands in a loop recommended in the other question should probably work just fine.\n" ]
[ 8, 1 ]
[]
[]
[ "svn" ]
stackoverflow_0000087666_svn.txt
Q: Flash designer/coder collaboration best practices I've done several flash projects working as the ActionScripter with a designer doing all the pretty things and animation. When starting out I found quite a lot of information about ActionScript coding and flash design. Most of the information available seems to focus on one or the other. I didn't find any information about building flash projects in a way that lets the coder do their thing AND gives the designer freedom as well. Hopefully more experienced people can share, these are some of the things i discovered after a few projects Version control is a must (as always) but can be difficult to explain to designers No ActionScript in the flash .fla files, they are binary and as a coder you want to try to keep away as much as possible Model View Controller is the best way I've found to isolate visual design changes Try to build the views so that they use frame labels, this allows the designer to decide what actually happens What are your experiences? ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ A: I've been doing Flash for 9 years and I still find this a difficult thing to get right. There is a balance of power between designers and developers, which will inevitably tip one way or the other. If you work for a developer led studio, then you are lucky, as the designers will be instructed to make a design that fits your functionality. In Flex / MXML this is the only way to work. If, on the other hand, you work in a graphic design/creative/advertising studio, you will be instructed to build whatever the designer puts together in PhotoShop, whether or not it is feasible to build within the time. The key to getting around this is communication and education. Designers and design-focussed managers may not know what is involved in creating a particular piece of functionality, and if you explain to them why a particular thing is hard to do they might be persuaded to go and rethink their design. On the other hand, they may well think you're just a whiner! It never feels good when you have to tell someone "sorry, I can't really do that" when you know that you could make it work, given a few late nights! As well as the things you and others have already noted, like using FlashDevelop and external AS classes, here's some other things I recommend: Start with a site map / wireframe that both the developers and designers agree to. Load all your text from XML into dynamic text fields, and make sure your buttons etc are designed to expand to fit content Make sure your designers have some idea how to correctly cut-up graphics and lay them out in Flash. A developer shouldn't be messing about in PhotoShop when you're up against a deadline. Make sure you get all you graphics assets well before the deadline - inevitably there'll be things they've missed and things that need changing. Be firm and don't let your design team try to sneak in extra features at the last minute. Let the designers use the timeline for character animation etc, but for simple tweens use an ActionScript tweening engine. Hope these tips are some use! A: The way I currently work is that I (the developer) build the functionality using a dummy FLA file, using only external class files. When the designer(s) finish up the layout, they send me a FLA with all the imported assets, and linked buttons and MovieClips. I then attach my document class to the new FLA, and make sure all the Objects match my code. Overall, its a pretty simple transition. 
If an asset needs to be updated for whatever reason, the designers just send me the asset, and I update the FLA manually. A: separating the design from the code is the important thing to do - since I try to do projects as a series of modular components (stitched together a little, of course, since nothing ever fits exactly) I first make a sort of interactive wireframe. It's got placeholders for all UI elements, appropriately named and nested. That .fla can be passed off to a designer, who can add whatever they want, as long as they keep the names and nesting order - essentially like skinning an app. A: On our team everyone uses TortoiseSVN and a Trac instance per project. Designers are using the standard Flash designer to edit .FLAs and developers are using FlashDevelop to manage ActionScript files and debug the project. The tool-chain works like this: Developers program the behavior of each window by hand-editing MXML files (it's not as hard as it sounds) and developing the corresponding .AS files at the same time. Designers make graphics for skins and other UI elements that get (link) exported and store them in .FLAs alongside the code. Developers [Import()] the resources in the .AS files. This way everything gets into source control and designers don't even look at a line of ActionScript. Off course I'm oversimplifying the process but I hope you'll get the idea.
Flash designer/coder collaboration best practices
I've done several flash projects working as the ActionScripter with a designer doing all the pretty things and animation. When starting out I found quite a lot of information about ActionScript coding and flash design. Most of the information available seems to focus on one or the other. I didn't find any information about building flash projects in a way that lets the coder do their thing AND gives the designer freedom as well. Hopefully more experienced people can share, these are some of the things I discovered after a few projects Version control is a must (as always) but can be difficult to explain to designers No ActionScript in the flash .fla files, they are binary and as a coder you want to try to keep away as much as possible Model View Controller is the best way I've found to isolate visual design changes Try to build the views so that they use frame labels, this allows the designer to decide what actually happens What are your experiences?
[ "I've been doing Flash for 9 years and I still find this a difficult thing to get right. \nThere is a balance of power between designers and developers, which will inevitably tip one way or the other. \nIf you work for a developer led studio, then you are lucky, as the designers will be instructed to make a design that fits your functionality. In Flex / MXML this is the only way to work.\nIf, on the other hand, you work in a graphic design/creative/advertising studio, you will be instructed to build whatever the designer puts together in PhotoShop, whether or not it is feasible to build within the time.\nThe key to getting around this is communication and education. Designers and design-focussed managers may not know what is involved in creating a particular piece of functionality, and if you explain to them why a particular thing is hard to do they might be persuaded to go and rethink their design. On the other hand, they may well think you're just a whiner! It never feels good when you have to tell someone \"sorry, I can't really do that\" when you know that you could make it work, given a few late nights!\nAs well as the things you and others have already noted, like using FlashDevelop and external AS classes, here's some other things I recommend:\n\nStart with a site map / wireframe that both the developers and designers agree to.\nLoad all your text from XML into dynamic text fields, and make sure your buttons etc are designed to expand to fit content\nMake sure your designers have some idea how to correctly cut-up graphics and lay them out in Flash. A developer shouldn't be messing about in PhotoShop when you're up against a deadline.\nMake sure you get all you graphics assets well before the deadline - inevitably there'll be things they've missed and things that need changing.\nBe firm and don't let your design team try to sneak in extra features at the last minute.\nLet the designers use the timeline for character animation etc, but for simple tweens use an ActionScript tweening engine.\n\nHope these tips are some use!\n", "The way I currently work is that I (the developer) build the functionality using a dummy FLA file, using only external class files. When the designer(s) finish up the layout, they send me a FLA with all the imported assets, and linked buttons and MovieClips. I then attach my document class to the new FLA, and make sure all the Objects match my code. Overall, its a pretty simple transition.\nIf an asset needs to be updated for whatever reason, the designers just send me the asset, and I update the FLA manually.\n", "separating the design from the code is the important thing to do - since I try to do projects as a series of modular components (stitched together a little, of course, since nothing ever fits exactly) I first make a sort of interactive wireframe. It's got placeholders for all UI elements, appropriately named and nested.\nThat .fla can be passed off to a designer, who can add whatever they want, as long as they keep the names and nesting order - essentially like skinning an app.\n", "On our team everyone uses TortoiseSVN and a Trac instance per project. 
Designers are using the standard Flash designer to edit .FLAs and developers are using FlashDevelop to manage ActionScript files and debug the project.\nThe tool-chain works like this:\n\nDevelopers program the behavior of each window by hand-editing MXML files (it's not as hard as it sounds) and developing the corresponding .AS files at the same time.\nDesigners make graphics for skins and other UI elements that get (link) exported and store them in .FLAs alongside the code.\nDevelopers [Import()] the resources in the .AS files.\n\nThis way everything gets into source control and designers don't even look at a line of ActionScript. Of course I'm oversimplifying the process but I hope you'll get the idea.\n" ]
[ 4, 1, 1, 0 ]
[]
[]
[ "actionscript", "flash" ]
stackoverflow_0000025356_actionscript_flash.txt
Q: Self Testing Systems I had an idea I was mulling over with some colleagues. None of us knew whether or not it exists currently. The Basic Premise is to have a system that has 100% uptime but can become more efficient dynamically. Here is the scenario: * So we hash out a system quickly to a specified set of interfaces, it has zero optimizations, yet we are confident that it is 100% stable though (dubious, but for the sake of this scenario please play along) * We then profile the original classes, and start to program replacements for the bottlenecks. * The original and the replacement are initiated simultaneously and synchronized. * An original is allowed to run to completion: if a replacement hasn´t completed it is vetoed by the system as a replacement for the original. * A replacement must always return the same value as the original, for a specified number of times, and for a specific range of values, before it is adopted as a replacement for the original. * If exception occurs after a replacement is adopted, the system automatically tries the same operation with a class which was superseded by it. Have you seen a similar concept in practise? Critique Please ... Below are comments written after the initial question in regards to posts: * The system demonstrates a Darwinian approach to system evolution. * The original and replacement would run in parallel not in series. * Race-conditions are an inherent issue to multi-threaded apps and I acknowledge them. A: I believe this idea to be an interesting theoretical debate, but not very practical for the following reasons: To make sure the new version of the code works well, you need to have superb automatic tests, which is a goal that is very hard to achieve and one that many companies fail to develop. You can only go on with implementing the system after such automatic tests are in place. The whole point of this system is performance tuning, that is - a specific version of the code is replaced by a version that supersedes it in performance. For most applications today, performance is of minor importance. Meaning, the overall performance of most applications is adequate - just think about it, you probably rarely find yourself complaining that "this application is excruciatingly slow", instead you usually find yourself complaining on the lack of specific feature, stability issues, UI issues etc. Even when you do complain about slowness, it's usually an overall slowness of your system and not just a specific applications (there are exceptions, of course). For applications or modules where performance is a big issue, the way to improve them is usually to identify the bottlenecks, write a new version and test is independently of the system first, using some kind of benchmarking. Benchmarking the new version of the entire application might also be necessary of course, but in general I think this process would only take place a very small number of times (following the 20%-80% rule). Doing this process "manually" in these cases is probably easier and more cost-effective than the described system. What happens when you add features, fix non-performance related bugs etc.? You don't get any benefit from the system. Running the two versions in conjunction to compare their performance has far more problems than you might think - not only you might have race conditions, but if the input is not an appropriate benchmark, you might get the wrong result (e.g. 
if you get loads of small data packets and that is in 90% of the time the input is large data packets). Furthermore, it might just be impossible (for example, if the actual code changes the data, you can't run them in conjunction). The only "environment" where this sounds useful and actually "a must" is a "genetic" system that generates new versions of the code by itself, but that's a whole different story and not really widely applicable... A: A system that runs performance benchmarks while operating is going to be slower than one that doesn't. If the goal is to optimise speed, why wouldn't you benchmark independently and import the fastest routines once they are proven to be faster? And your idea of starting routines simultaneously could introduce race conditions. Also, if a goal is to ensure 100% uptime you would not want to introduce untested routines since they might generate uncatchable exceptions. Perhaps your ideas have merit as a harness for benchmarking rather than an operational system? A: Have I seen a similar concept in practice? No. But I'll propose an approach anyway. It seems like most of your objectives would be meet by some sort of super source control system, which could be implemented with CruiseControl. CruiseControl can run unit tests to ensure correctness of the new version. You'd have to write a CruiseControl builder pluggin that would execute the new version of your system against a series of existing benchmarks to ensure that the new version is an improvement. If the CruiseControl build loop passes, then the new version would be accepted. Such a process would take considerable effort to implement, but I think it feasible. The unit tests and benchmark builder would have to be pretty slick. A: I think an Inversion of Control Container like OSGi or Spring could do most of what you are talking about. (dynamic loading by name) You could build on top of their stuff. Then implement your code to divide work units into discrete modules / classes (strategy pattern) identify each module by unique name and associate a capability with it when a module is requested it is requested by capability and at random one of the modules with that capability is used. keep performance stats (get system tick before and after execution and store the result) if an exception occurs mark that module as do not use and log the exception. If the modules do their work by message passing you can store the message until the operation completes successfully and redo with another module if an exception occurs. A: For design ideas for high availability systems, check out Erlang. A: I don't think code will learn to be better, by itself. However, some runtime parameters can easily adjust onto optimal values, but that would be just regular programming, right? About the on-the-fly change, I've shared the wondering and would be building it on top of Lua, or similar dynamic language. One could have parts that are loaded, and if they are replaced, reloaded into use. No rocket science in that, either. If the "old code" is still running, it's perfectly all right, since unlike with DLL's, the file is needed only when reading it in, not while executing code that came from there. Usefulness? Naa...
Self Testing Systems
I had an idea I was mulling over with some colleagues. None of us knew whether or not it exists currently. The Basic Premise is to have a system that has 100% uptime but can become more efficient dynamically. Here is the scenario: * So we hash out a system quickly to a specified set of interfaces, it has zero optimizations, yet we are confident that it is 100% stable though (dubious, but for the sake of this scenario please play along) * We then profile the original classes, and start to program replacements for the bottlenecks. * The original and the replacement are initiated simultaneously and synchronized. * An original is allowed to run to completion: if a replacement hasn´t completed it is vetoed by the system as a replacement for the original. * A replacement must always return the same value as the original, for a specified number of times, and for a specific range of values, before it is adopted as a replacement for the original. * If exception occurs after a replacement is adopted, the system automatically tries the same operation with a class which was superseded by it. Have you seen a similar concept in practise? Critique Please ... Below are comments written after the initial question in regards to posts: * The system demonstrates a Darwinian approach to system evolution. * The original and replacement would run in parallel not in series. * Race-conditions are an inherent issue to multi-threaded apps and I acknowledge them.
[ "I believe this idea to be an interesting theoretical debate, but not very practical for the following reasons:\n\nTo make sure the new version of the code works well, you need to have superb automatic tests, which is a goal that is very hard to achieve and one that many companies fail to develop. You can only go on with implementing the system after such automatic tests are in place.\nThe whole point of this system is performance tuning, that is - a specific version of the code is replaced by a version that supersedes it in performance. For most applications today, performance is of minor importance. Meaning, the overall performance of most applications is adequate - just think about it, you probably rarely find yourself complaining that \"this application is excruciatingly slow\", instead you usually find yourself complaining on the lack of specific feature, stability issues, UI issues etc. Even when you do complain about slowness, it's usually an overall slowness of your system and not just a specific applications (there are exceptions, of course).\nFor applications or modules where performance is a big issue, the way to improve them is usually to identify the bottlenecks, write a new version and test is independently of the system first, using some kind of benchmarking. Benchmarking the new version of the entire application might also be necessary of course, but in general I think this process would only take place a very small number of times (following the 20%-80% rule). Doing this process \"manually\" in these cases is probably easier and more cost-effective than the described system.\nWhat happens when you add features, fix non-performance related bugs etc.? You don't get any benefit from the system.\nRunning the two versions in conjunction to compare their performance has far more problems than you might think - not only you might have race conditions, but if the input is not an appropriate benchmark, you might get the wrong result (e.g. if you get loads of small data packets and that is in 90% of the time the input is large data packets). Furthermore, it might just be impossible (for example, if the actual code changes the data, you can't run them in conjunction).\n\nThe only \"environment\" where this sounds useful and actually \"a must\" is a \"genetic\" system that generates new versions of the code by itself, but that's a whole different story and not really widely applicable...\n", "A system that runs performance benchmarks while operating is going to be slower than one that doesn't. If the goal is to optimise speed, why wouldn't you benchmark independently and import the fastest routines once they are proven to be faster? \nAnd your idea of starting routines simultaneously could introduce race conditions.\nAlso, if a goal is to ensure 100% uptime you would not want to introduce untested routines since they might generate uncatchable exceptions.\nPerhaps your ideas have merit as a harness for benchmarking rather than an operational system?\n", "Have I seen a similar concept in practice? No. But I'll propose an approach anyway.\nIt seems like most of your objectives would be meet by some sort of super source control system, which could be implemented with CruiseControl.\nCruiseControl can run unit tests to ensure correctness of the new version.\nYou'd have to write a CruiseControl builder pluggin that would execute the new version of your system against a series of existing benchmarks to ensure that the new version is an improvement. 
\nIf the CruiseControl build loop passes, then the new version would be accepted. Such a process would take considerable effort to implement, but I think it feasible. The unit tests and benchmark builder would have to be pretty slick. \n", "I think an Inversion of Control Container like OSGi or Spring could do most of what you are talking about. (dynamic loading by name)\nYou could build on top of their stuff. Then implement your code to \n\ndivide work units into discrete modules / classes (strategy pattern)\nidentify each module by unique name and associate a capability with it\nwhen a module is requested it is requested by capability and at random one of the modules with that capability is used.\nkeep performance stats (get system tick before and after execution and store the result)\nif an exception occurs mark that module as do not use and log the exception.\n\nIf the modules do their work by message passing you can store the message until the operation completes successfully and redo with another module if an exception occurs.\n", "For design ideas for high availability systems, check out Erlang. \n", "I don't think code will learn to be better, by itself. However, some runtime parameters can easily adjust onto optimal values, but that would be just regular programming, right?\nAbout the on-the-fly change, I've shared the wondering and would be building it on top of Lua, or similar dynamic language. One could have parts that are loaded, and if they are replaced, reloaded into use. No rocket science in that, either. If the \"old code\" is still running, it's perfectly all right, since unlike with DLL's, the file is needed only when reading it in, not while executing code that came from there.\nUsefulness? Naa...\n" ]
[ 3, 2, 2, 2, 1, 0 ]
[]
[]
[ "language_agnostic", "unit_testing" ]
stackoverflow_0000060478_language_agnostic_unit_testing.txt
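To make the strategy-pattern suggestions in the answers above concrete, here is a toy C# "champion/challenger" wrapper: the original implementation keeps serving results while a replacement is exercised on the same inputs, and the replacement is only promoted after agreeing a configurable number of times. It runs the candidate sequentially for brevity (the question asks for parallel execution), and all names and the threshold are illustrative assumptions, not an existing library.

    using System;

    class SelfTuningOperation<TIn, TOut> where TOut : IEquatable<TOut>
    {
        private Func<TIn, TOut> champion;           // currently trusted implementation
        private readonly Func<TIn, TOut> challenger;
        private int agreements;
        private const int RequiredAgreements = 100; // illustrative promotion threshold

        public SelfTuningOperation(Func<TIn, TOut> original, Func<TIn, TOut> replacement)
        {
            champion = original;
            challenger = replacement;
        }

        public TOut Invoke(TIn input)
        {
            TOut result = champion(input);          // the original always answers the caller

            if (challenger != null && agreements < RequiredAgreements)
            {
                try
                {
                    TOut candidate = challenger(input);
                    agreements = candidate.Equals(result) ? agreements + 1 : 0;
                    if (agreements >= RequiredAgreements)
                        champion = challenger;      // promote once it has proven itself
                }
                catch
                {
                    agreements = 0;                 // a throwing challenger is never promoted
                }
            }
            return result;
        }
    }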
Q: How do you prevent over complicated solutions or designs? Many times we find ourselves working on a problem, only to figure out the solution being created is far more complex than the problem requires. Are there controls, best practices, techniques, etc that help you control over complication in your workplace? A: Getting someone new to look at it. A: In my experience, designing for an overly general case tends to breed too much complexity. Engineering culture encourages designs that make fewer assumptions about the environment; this is usually a good thing, but some people take it too far. For example, it might be nice if your car design doesn't assume a specific gravitational pull, nobody is actually going to drive your car on the moon, and if they did, it wouldn't work, because there is no oxygen to make the fuel burn. The difficult part is that the guy who is developed the "works-on-any-planet" design is often regarded as clever, so you may have to work harder to argue that his design is too clever. Understanding trade-offs, so you can make the decision between good assumptions and bad assumptions, will go a long way into avoiding a needlessly complicated design. A: If its too hard to test, your design is too complicated. That's the first metric I use. A: Here are some ideas to get design more simpler: read some programming books and articles, and then apply them in your work and write code read lots of code (good and bad) written by other people (like Open Source projects) and learn to see what works and what does not build safety nets (unit tests) to enable experimentations with your code use version control to enable rollback, if those experimentations take wrong turn TDD (test driven development) and BDD (behaviour driven development) change your attitude, ask how you can make it so, that "it simply works" (convention over configuration could help there; or ask how Apple would do it) practice (like jazz players -- jam with code, try Code Kata) write same code multiple times, with different languages and after some time has passed learn new languages with new concepts (if you use static language, learn dynamic one; if you use procedural language, learn functional one; ...) [one language per year is about right] ask someone to review you code and actively ask how you can make your code simpler and more elegant (and then make it) get years under your belt by doing above things (time helps active mind) A: I create a design etc., and then I look at it and try and remove (agressively) everything that doesn't seem to be needed. If it turns out I need it later when I am polishing the design I add it back in. I do this over several iterations, refining as I go along. A: Read "Working Effectively With Legacy Code" by Michael C. Feathers. The point is, if you have code that works, and you need to change the design, nothing works better than making your code unit testable, and breaking your code into smaller pieces. A: Using Test Driven Development and following Robert C. Martin's Three Rules of TDD: You are not allowed to write any production code unless it is to make a failing unit test pass. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures. You are not allowed to write any more production code than is sufficient to pass the one failing unit test. In this way you are not likely to get much code that you don't need. 
You will always be focused on making one important thing work and won't ever get too far ahead of yourself in terms of complexity. A: Test first may help here, but it is not suitable for all situation. And it's not a panacea anyway. Start small is another great idea. Do you really need to stuff all 10 design patterns into this thing? Try first to do it "stupid way". Doesn't quite cut it? Okay, do it "slightly less stupid way". Etc. Get it reviewed. As someone else wrote, two pairs of eyes are better. Even better are two brains. Your mate may just see a room for simplification, or a problematic area you thought was fine just because you spend many hours hacking it. Use lean language. Languages such as Java, or sometimes C++ sometimes seem to encourage nasty, convoluted solutions. Simple things tend to span over multiple lines of code, and you just need to use 3 external libraries and a big framework to manage it all. Consider using Python, Ruby, etc. - if not for your project, then for some private use. It can change your mindset to favor simplicity, and to be assured that simplicity is possible. A: Reduce the amount of data you're working with by serialising the task into a series of smaller tasks. Most people can only hold half a dozen (plus or minus) conditions in their head while coding, so make that the unit of implementation. Design for all the tasks you need to accomplish, but then ruthlessly hack the design so that you never have to play with more than half a dozen paths though the module. This follows from Bendazo's post - simplify until it becomes easy. A: It is inevitable once you have been a programmer that this will happen. If you seriously have unestimated the effort or hit a problem where your solution just doesn't work then stop coding and get talking to your project manager. I always like to take the solutions with me to the meeting, problem is A, you can do x which will take 3 days or we can try y which will take 6 days. Don't make the choice yourself. A: Talk to other programmers every step of the way. The more eyes there are on the design, the more likely an overcomplicated aspect is revealed early, before it becomes too ossified in the codebase. Constantly ask yourself how you will use whatever you are currently working on. If the answer is that you're not sure, stop to rethink what you're doing. I've found it useful to jot down thoughts about how to potentially simplify something I'm currently working on. That way, once I actually have it working, it's easier to go back and refactor or redo as necessary instead of messing with something that's not even functional yet. A: This is a delicate balancing act: on the one hand you don't want something that takes too long to design and implement, on the other hand you don't want a hack that isn't complicated enough to deal with next week's problem, or even worse requires rewriting to adapt. A couple of techniques I find helpful: If something seems more complex than you would like then never sit down to implement it as soon as you have finished thinking about it. Find something else to do for the rest of the day. Numerous times I end up thinking of a different solution to an early part of the problem that removes a lot of the complexity later on. In a similar vein have someone else you can bounce ideas off. Make sure you can explain to them why the complexity is justified! If you are adding complexity because you think it will be justified in the future then try to establish when in the future you will use it. 
If you can't (realistically) imagine needing the complexity for a year or three then it probably isn't justifiable to pay for it now. A: I ask my customers why they need some feature. I try and get to the bottom of their request and identify the problem they are experiencing. This often lends itself to a simpler solution than I (or they) would think of. Of course, if you know your clients' work habits and what problems they have to tackle, you can understand their problems much better from the get-go. And if you "know them" know them, then you understand their speech better. So, develop a close working relationship with your users. It's step zero of engineering. A: Take time to name the concepts of the system well, and find names that are related, this makes the system more familiar. Don't be hesitant to rename concepts, the better the connection to the world you know, the better your brain can work with it. Ask for opinions from people who get their kicks from clean, simple solutions. Only implement concepts needed by the current project (a desire for future proofing or generic systems make your design bloated).
How do you prevent over-complicated solutions or designs?
Many times we find ourselves working on a problem, only to figure out that the solution being created is far more complex than the problem requires. Are there controls, best practices, techniques, etc. that help you control over-complication in your workplace?
[ "Getting someone new to look at it. \n", "In my experience, designing for an overly general case tends to breed too much complexity.\nEngineering culture encourages designs that make fewer assumptions about the environment; this is usually a good thing, but some people take it too far. For example, it might be nice if your car design doesn't assume a specific gravitational pull, nobody is actually going to drive your car on the moon, and if they did, it wouldn't work, because there is no oxygen to make the fuel burn.\nThe difficult part is that the guy who is developed the \"works-on-any-planet\" design is often regarded as clever, so you may have to work harder to argue that his design is too clever.\nUnderstanding trade-offs, so you can make the decision between good assumptions and bad assumptions, will go a long way into avoiding a needlessly complicated design.\n", "If its too hard to test, your design is too complicated. That's the first metric I use.\n", "Here are some ideas to get design more simpler:\n\nread some programming books and articles, and then apply them in your work and write code\nread lots of code (good and bad) written by other people (like Open Source projects) and learn to see what works and what does not\nbuild safety nets (unit tests) to enable experimentations with your code\nuse version control to enable rollback, if those experimentations take wrong turn\nTDD (test driven development) and BDD (behaviour driven development)\nchange your attitude, ask how you can make it so, that \"it simply works\" (convention over configuration could help there; or ask how Apple would do it)\npractice (like jazz players -- jam with code, try Code Kata)\nwrite same code multiple times, with different languages and after some time has passed\nlearn new languages with new concepts (if you use static language, learn dynamic one; if you use procedural language, learn functional one; ...) [one language per year is about right]\nask someone to review you code and actively ask how you can make your code simpler and more elegant (and then make it)\nget years under your belt by doing above things (time helps active mind)\n\n", "I create a design etc., and then I look at it and try and remove (agressively) everything that doesn't seem to be needed. If it turns out I need it later when I am polishing the design I add it back in. I do this over several iterations, refining as I go along.\n", "Read \"Working Effectively With Legacy Code\" by Michael C. Feathers.\nThe point is, if you have code that works, and you need to change the design, nothing works better than making your code unit testable, and breaking your code into smaller pieces.\n", "Using Test Driven Development and following Robert C. Martin's Three Rules of TDD:\n\nYou are not allowed to write any production code unless it is to make a failing unit test pass.\nYou are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.\nYou are not allowed to write any more production code than is sufficient to pass the one failing unit test.\n\nIn this way you are not likely to get much code that you don't need. You will always be focused on making one important thing work and won't ever get too far ahead of yourself in terms of complexity.\n", "Test first may help here, but it is not suitable for all situation. And it's not a panacea anyway.\nStart small is another great idea. Do you really need to stuff all 10 design patterns into this thing? Try first to do it \"stupid way\". 
Doesn't quite cut it? Okay, do it \"slightly less stupid way\". Etc.\nGet it reviewed. As someone else wrote, two pairs of eyes are better. Even better are two brains. Your mate may just see a room for simplification, or a problematic area you thought was fine just because you spend many hours hacking it.\nUse lean language. Languages such as Java, or sometimes C++ sometimes seem to encourage nasty, convoluted solutions. Simple things tend to span over multiple lines of code, and you just need to use 3 external libraries and a big framework to manage it all. Consider using Python, Ruby, etc. - if not for your project, then for some private use. It can change your mindset to favor simplicity, and to be assured that simplicity is possible.\n", "Reduce the amount of data you're working with by serialising the task into a series of smaller tasks. Most people can only hold half a dozen (plus or minus) conditions in their head while coding, so make that the unit of implementation. Design for all the tasks you need to accomplish, but then ruthlessly hack the design so that you never have to play with more than half a dozen paths though the module.\nThis follows from Bendazo's post - simplify until it becomes easy.\n", "It is inevitable once you have been a programmer that this will happen. If you seriously have unestimated the effort or hit a problem where your solution just doesn't work then stop coding and get talking to your project manager. I always like to take the solutions with me to the meeting, problem is A, you can do x which will take 3 days or we can try y which will take 6 days. Don't make the choice yourself.\n", "\nTalk to other programmers every step of the way. The more eyes there are on the design, the more likely an overcomplicated aspect is revealed early, before it becomes too ossified in the codebase.\nConstantly ask yourself how you will use whatever you are currently working on. If the answer is that you're not sure, stop to rethink what you're doing.\nI've found it useful to jot down thoughts about how to potentially simplify something I'm currently working on. That way, once I actually have it working, it's easier to go back and refactor or redo as necessary instead of messing with something that's not even functional yet.\n\n", "This is a delicate balancing act: on the one hand you don't want something that takes too long to design and implement, on the other hand you don't want a hack that isn't complicated enough to deal with next week's problem, or even worse requires rewriting to adapt.\nA couple of techniques I find helpful:\nIf something seems more complex than you would like then never sit down to implement it as soon as you have finished thinking about it. Find something else to do for the rest of the day. Numerous times I end up thinking of a different solution to an early part of the problem that removes a lot of the complexity later on.\nIn a similar vein have someone else you can bounce ideas off. Make sure you can explain to them why the complexity is justified!\nIf you are adding complexity because you think it will be justified in the future then try to establish when in the future you will use it. If you can't (realistically) imagine needing the complexity for a year or three then it probably isn't justifiable to pay for it now.\n", "I ask my customers why they need some feature. I try and get to the bottom of their request and identify the problem they are experiencing. 
This often lends itself to a simpler solution than I (or they) would think of.\nOf course, if you know your clients' work habits and what problems they have to tackle, you can understand their problems much better from the get-go. And if you \"know them\" know them, then you understand their speech better. So, develop a close working relationship with your users. It's step zero of engineering.\n", "Take time to name the concepts of the system well, and find names that are related, this makes the system more familiar. Don't be hesitant to rename concepts, the better the connection to the world you know, the better your brain can work with it.\nAsk for opinions from people who get their kicks from clean, simple solutions.\nOnly implement concepts needed by the current project (a desire for future proofing or generic systems make your design bloated).\n" ]
[ 14, 8, 7, 3, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "analysis", "complexity_theory" ]
stackoverflow_0000086308_analysis_complexity_theory.txt
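To make the three TDD rules quoted above concrete, here is a minimal NUnit-style sketch in C#; the Calculator class and the test name are invented purely for illustration, and the point is the order of the steps rather than the example itself.

    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        // Rules 1 and 2: write only enough of a test to fail -- this one fails
        // first by not even compiling, because Calculator does not exist yet.
        [Test]
        public void Add_ReturnsSumOfTwoNumbers()
        {
            Assert.AreEqual(5, new Calculator().Add(2, 3));
        }
    }

    // Rule 3: write only enough production code to make that one test pass --
    // no extra overloads, no configuration, no speculative generality.
    public class Calculator
    {
        public int Add(int a, int b) { return a + b; }
    }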
Q: How do I create a container file? I would like to create a file format for my app like Quake, OO, and MS Office 07 have. Basically a uncompressed zip folder, or tar file. I need this to be cross platform (mac and windows). Can I do something via command prompt and bash? A: If you want a single file that is portable to all platforms and which contain structured data, consider using sqlite. You'll get a full featured ACID compliant database that exists on disk as a single file. There are libraries you can link against to directly access the file, and there is a command line tool you can use as well. No matter what language you are using, most likely there is support for it. http://www.sqlite.org A: Zip is supported everywhere. If a container is all you need, than those are surely good options. A: Have a look at the open source 7Zip compression format. For your specific needs, you can use it in an "Archive" mode, zero compression but very fast. It provides a powerful SDK, LZMA, from the site: "LZMA is the default and general compression method of 7z format in the 7-Zip program. LZMA provides a high compression ratio and very fast decompression, so it is very suitable for embedded applications. For example, it can be used for ROM (firmware) compressing. The LZMA SDK provides the documentation, samples, header files, libraries, and tools you need to develop applications that use LZMA compression." A: SQLite is great. A single file, crossplatform, a tiny library, SQL access to data, transactions, the whole enchilada. you can use transactions to guarantee consistent return points in case of crashing. check uses for sqlite, they specifically advocate using it as a data model layer for desktop applications. also, there's a command-line tool to manually access the data. A: First thing you should ask yourself is, "Do I really need to make my own?" Depending on what you want to use it for, you are probably better off using a common format and some pre-made libraries which already handle one of those formats very well. Good places to start: http://www.destructor.de/libtar/index.htm (tar -- a the 'container' format) http://www.zlib.net/ (zlib -- a method of compressing data before or after you put it in the container) If you still really think you need to make your own, I would suggest studying something very simple first, like tar's format: http://en.wikipedia.org/wiki/Tar_(file_format) or http://schmidt.devlib.org/file-formats/tar-archive-file-format.html A: Instead of making a format, I'd just decide on a convention. One or more named files within the container have the metadata you need to access the rest of the files, and know what to do with them. The container itself, though, should just be some ubiquitous format, such as zip. No need to reinvent the wheel, here.
How do I create a container file?
I would like to create a file format for my app like Quake, OO, and MS Office 07 have. Basically an uncompressed zip folder, or tar file. I need this to be cross-platform (Mac and Windows). Can I do something via the command prompt and bash?
[ "If you want a single file that is portable to all platforms and which contain structured data, consider using sqlite. You'll get a full featured ACID compliant database that exists on disk as a single file.\nThere are libraries you can link against to directly access the file, and there is a command line tool you can use as well. No matter what language you are using, most likely there is support for it.\nhttp://www.sqlite.org\n", "Zip is supported everywhere. If a container is all you need, than those are surely good options.\n", "Have a look at the open source 7Zip compression format. For your specific needs, you can use it in an \"Archive\" mode, zero compression but very fast.\nIt provides a powerful SDK, LZMA, from the site:\n\"LZMA is the default and general compression method of 7z format in the 7-Zip program. LZMA provides a high compression ratio and very fast decompression, so it is very suitable for embedded applications. For example, it can be used for ROM (firmware) compressing.\nThe LZMA SDK provides the documentation, samples, header files, libraries, and tools you need to develop applications that use LZMA compression.\"\n", "SQLite is great.\nA single file, crossplatform, a tiny library, SQL access to data, transactions, the whole enchilada.\nyou can use transactions to guarantee consistent return points in case of crashing. check uses for sqlite, they specifically advocate using it as a data model layer for desktop applications.\nalso, there's a command-line tool to manually access the data.\n", "First thing you should ask yourself is, \"Do I really need to make my own?\"\nDepending on what you want to use it for, you are probably better off using a common format and some pre-made libraries which already handle one of those formats very well.\nGood places to start:\nhttp://www.destructor.de/libtar/index.htm (tar -- a the 'container' format)\nhttp://www.zlib.net/ (zlib -- a method of compressing data before or after you put it in the container)\nIf you still really think you need to make your own, I would suggest studying something very simple first, like tar's format:\nhttp://en.wikipedia.org/wiki/Tar_(file_format)\nor\nhttp://schmidt.devlib.org/file-formats/tar-archive-file-format.html\n", "Instead of making a format, I'd just decide on a convention. One or more named files within the container have the metadata you need to access the rest of the files, and know what to do with them. The container itself, though, should just be some ubiquitous format, such as zip. No need to reinvent the wheel, here.\n" ]
[ 4, 2, 2, 1, 0, 0 ]
[]
[]
[ "container_file", "file_format", "macos", "windows" ]
stackoverflow_0000079136_container_file_file_format_macos_windows.txt
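Since the question above explicitly asks about the command prompt and bash, here is one possible shell-level sketch of the "uncompressed zip as container" idea; the file and folder names are made up, and the only essential part is the store-only flag (-0 for Info-ZIP, -mx=0 for 7-Zip).

    # bash (Mac OS X ships Info-ZIP's zip/unzip):
    zip -0 -r MyDocument.mydoc content/     # -0 = store only, no compression
    unzip -l MyDocument.mydoc               # list the parts of the container

On Windows, which has no zip.exe out of the box, 7-Zip's command line does the same job:

    7z a -tzip -mx=0 MyDocument.mydoc content\

Renaming the extension from .zip to something application-specific (the Office 2007 / OpenOffice trick) costs nothing and keeps the file openable in any zip tool for debugging.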
Q: Suggestions for human editable data file format/parsing library For example, right now I have a roll-my-own solution that uses data files that include blocks like: PlayerCharacter Fighter Hitpoints 25 Strength 10 StartPosition (0, 0, 0) Art Model BigBuffGuy Footprint LargeFootprint end InventoryItem Sword InventoryItem Shield InventoryItem HealthPotion end human editable (w/ minimal junk characters, ideally) resilient to errors (fewest 'wow i can't parse anything useful anymore' style errors, and thus i've lost all of the data in the rest of the file) - but still able to identify and report them, of course. My example the only complete failure case is missing 'end's. nested structure style data array/list style data customizable foundation types fast Are there any well known solutions that meet/exceed these requirements? A: Yaml is a good solution and very close to what you have. Search for it. A: I second the YAML suggestion. It's extremely easy to edit, very forgiving of mistakes and widely supported (especially among the dynamic languages). A: I'd say the most common choices are: JSON (offical site) - very flexible, though the punctuation can take a bit for people to get used to INI - super simple to use, but a bit limited in data-types XML - pretty flexible, common, but way too verbose sometimes A: You could try JSON available at: http://www.json.org/ It was designed for javascript and web usage initially. But it's pretty clean, and supported in many languages. A: Lua was designed to be a programming language where the syntax lets you easily use it as a markup language as well, so that you include data files as if they were code. Many computer games use it for their scripting, such as World of Warcraft due to its speed and ease of use. However it's originally designed and maintained for the energy industry so there's a serious background. Scheme with its S-expressions is also a very nice but different-looking syntax for data. Finally, you've got XML that has the benefit of the most entry-level developers knowing it. You can also roll your own well-defined and efficient parser with a nice development suite such as ANTLR.
Suggestions for human editable data file format/parsing library
For example, right now I have a roll-my-own solution that uses data files that include blocks like: PlayerCharacter Fighter Hitpoints 25 Strength 10 StartPosition (0, 0, 0) Art Model BigBuffGuy Footprint LargeFootprint end InventoryItem Sword InventoryItem Shield InventoryItem HealthPotion end human editable (w/ minimal junk characters, ideally) resilient to errors (fewest 'wow i can't parse anything useful anymore' style errors, and thus i've lost all of the data in the rest of the file) - but still able to identify and report them, of course. In my example the only complete failure case is a missing 'end'. nested structure style data array/list style data customizable foundation types fast Are there any well known solutions that meet/exceed these requirements?
[ "Yaml is a good solution and very close to what you have. Search for it.\n", "I second the YAML suggestion. It's extremely easy to edit, very forgiving of mistakes and widely supported (especially among the dynamic languages).\n", "I'd say the most common choices are: \n\nJSON (offical site) - very flexible, though the punctuation can take a bit for people to get used to\nINI - super simple to use, but a bit limited in data-types\nXML - pretty flexible, common, but way too verbose sometimes\n\n", "You could try JSON available at: http://www.json.org/\nIt was designed for javascript and web usage initially. But it's pretty clean, and supported in many languages.\n", "Lua was designed to be a programming language where the syntax lets you easily use it as a markup language as well, so that you include data files as if they were code. Many computer games use it for their scripting, such as World of Warcraft due to its speed and ease of use. However it's originally designed and maintained for the energy industry so there's a serious background.\nScheme with its S-expressions is also a very nice but different-looking syntax for data. Finally, you've got XML that has the benefit of the most entry-level developers knowing it. You can also roll your own well-defined and efficient parser with a nice development suite such as ANTLR.\n" ]
[ 6, 1, 1, 0, 0 ]
[ "I would suggest JSON.\n\nJust as readable/editable as YAML\nIf you happen to use for Web then can be eval()'ed into JavaScript objects\nProbably as cross language as YAML\n\n" ]
[ -1 ]
[ "markup" ]
stackoverflow_0000087713_markup.txt
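For reference, the block from the question above maps quite naturally onto YAML, the most-upvoted suggestion; the layout below is just one hand-written rendering, not a fixed schema.

    PlayerCharacter:
      Fighter:
        Hitpoints: 25
        Strength: 10
        StartPosition: [0, 0, 0]
        Art:
          Model: BigBuffGuy
          Footprint: LargeFootprint
        Inventory: [Sword, Shield, HealthPotion]

Indentation mistakes are still parse errors, but most YAML libraries report a line and column, which covers the 'identify and report' requirement, and there is no 'end' keyword to forget.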
Q: NHIbernate: Difference between Restriction.In and Restriction.InG When creating a criteria in NHibernate I can use Restriction.In() or Restriction.InG() What is the difference between them? A: InG is the generic equivalent of In (for collections) The signatures of the methods are as follows (only the ICollection In overload is shown): In(string propertyName, ICollection values) vs. InG<T>(string propertyName, ICollection<T> values) Looking at NHibernate's source code (trunk) it seems that they both copy the collection to an object array and use that going forward, so I don't think there is a performance difference between them. I personally just use the In one most of the time - its easier to read. A: Restriction.In definately creates a subquery with whatever criteria you pass to the .In() method, but not sure what InG() does. never seen it.
NHIbernate: Difference between Restriction.In and Restriction.InG
When creating a criteria in NHibernate I can use Restriction.In() or Restriction.InG(). What is the difference between them?
[ "InG is the generic equivalent of In (for collections)\nThe signatures of the methods are as follows (only the ICollection In overload is shown):\nIn(string propertyName, ICollection values)\n\nvs.\nInG<T>(string propertyName, ICollection<T> values)\n\nLooking at NHibernate's source code (trunk) it seems that they both copy the collection to an object array and use that going forward, so I don't think there is a performance difference between them.\nI personally just use the In one most of the time - its easier to read.\n", "Restriction.In definately creates a subquery with whatever criteria you pass to the .In() method, but not sure what InG() does. never seen it.\n" ]
[ 11, 0 ]
[]
[]
[ "c#", "nhibernate", "orm" ]
stackoverflow_0000031424_c#_nhibernate_orm.txt
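To make the difference above concrete, here is a hedged sketch against the Criteria API (NHibernate 2.x naming, where the factory class lives in NHibernate.Criterion); the PrintJob entity, its Status property, and the open ISession are all assumptions made up for the example.

    using System.Collections;
    using System.Collections.Generic;
    using NHibernate;
    using NHibernate.Criterion;

    public class PrintJob { public virtual string Status { get; set; } }

    static class InVersusInG
    {
        static void Demo(ISession session)
        {
            // Non-generic In: any ICollection (or object[]) will do.
            IList jobs = session.CreateCriteria(typeof(PrintJob))
                .Add(Restrictions.In("Status", new object[] { "Queued", "Printing" }))
                .List();

            // Generic InG<T>: a typed collection, no boxing or casting at the
            // call site, and the compiler checks the element type for you.
            IList<string> statuses = new List<string> { "Queued", "Printing" };
            IList<PrintJob> typedJobs = session.CreateCriteria(typeof(PrintJob))
                .Add(Restrictions.InG("Status", statuses))
                .List<PrintJob>();
        }
    }

Functionally the two produce the same query; InG<T> mainly buys compile-time type checking, which matches the observation above that there is no performance difference between them.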
Q: How to determine if the user's browser can view PDF files What's the best way for determining whether the user's browser can view PDF files? Ideally, it shouldn't matter on the browser or the operating system. Is there a specific way of doing it in ASP.NET, or would the answer be just JavaScript? A: Neither, none, don't try. Re dawnerd: Plug-in detection is not the right answer. I do not have a PDF plugin installed in my browser (Firefox on Ubuntu), yet I am able to view PDF files using the operating system's document viewer (which is not Acrobat Reader). Today, any operating system that can run a web browser can view PDF files out of the box. If a specific system does not have a PDF viewer installed and the browser configured to use it, that likely means that either it's a hand-made install of Windows, a very trimmed down alternate operating system, or something really retro. It is reasonable to assume that in any of those situation the user will know what a PDF file is and either deliberately choose not to be able to view them or know how to install the required software. If I am deluding myself, I would love to have it explained to me in which way I am wrong. A: A quick google search found this. Useful for all kinds of plugins. A: There are users that choose not to open PDF's in the browser and disable the plugin (this allows the file to be opened in the native application external of the browser window). It is better to let the user know that software is required to open something (whether it be PDF or not) than try to detect whether the plugin is available. Another problem with detection is that what you need to look for changes from version to version (for example, see: "PDF.PdfCtrl.*" vs "AcroPDF.PDF.*" for the Adobe PDF viewer) and different browser implementations (the previously mentioned strings are used in IE for example, while Firefox uses a totally different manner of detection. Then we need to think of Opera and Safari and ???). Also, there are different vendors (think Foxit and Ghostscript, though I am not sure if they supply a plugin for the browser) where there may be differences in detecting the plugin. For a script written in 2008 and some more information about the caveats see Detecting plugins in Internet Explorer (and a few hints for all the others). A: After initially ignoring the advise on this page the architect went ahead with Acrobat detection, causing an inevitable support nightmare. As ddaa mentions not all the scenarios can be accurately captured with Plug-in detection. Some users, for example, may choose to view PDF files with FoxIt Reader rather than acrobat. Some user's browsers don't flag that they are Acrobat ready, and certainly not always in the same way. A better solution would have been to give the user a choice on how they'd like to view the relevant document. Personally, I don't like to have any website rely on a plug-in - it spoils the beauty of the web.
How to determine if the user's browser can view PDF files
What's the best way of determining whether the user's browser can view PDF files? Ideally, it shouldn't depend on the browser or the operating system. Is there a specific way of doing it in ASP.NET, or would the answer be just JavaScript?
[ "Neither, none, don't try.\nRe dawnerd: Plug-in detection is not the right answer. I do not have a PDF plugin installed in my browser (Firefox on Ubuntu), yet I am able to view PDF files using the operating system's document viewer (which is not Acrobat Reader).\nToday, any operating system that can run a web browser can view PDF files out of the box.\nIf a specific system does not have a PDF viewer installed and the browser configured to use it, that likely means that either it's a hand-made install of Windows, a very trimmed down alternate operating system, or something really retro.\nIt is reasonable to assume that in any of those situation the user will know what a PDF file is and either deliberately choose not to be able to view them or know how to install the required software.\nIf I am deluding myself, I would love to have it explained to me in which way I am wrong.\n", "A quick google search found this. Useful for all kinds of plugins.\n", "There are users that choose not to open PDF's in the browser and disable the plugin (this allows the file to be opened in the native application external of the browser window). It is better to let the user know that software is required to open something (whether it be PDF or not) than try to detect whether the plugin is available.\nAnother problem with detection is that what you need to look for changes from version to version (for example, see: \"PDF.PdfCtrl.*\" vs \"AcroPDF.PDF.*\" for the Adobe PDF viewer) and different browser implementations (the previously mentioned strings are used in IE for example, while Firefox uses a totally different manner of detection. Then we need to think of Opera and Safari and ???). Also, there are different vendors (think Foxit and Ghostscript, though I am not sure if they supply a plugin for the browser) where there may be differences in detecting the plugin.\nFor a script written in 2008 and some more information about the caveats see Detecting plugins in Internet Explorer (and a few hints for all the others).\n", "After initially ignoring the advise on this page the architect went ahead with Acrobat detection, causing an inevitable support nightmare.\nAs ddaa mentions not all the scenarios can be accurately captured with Plug-in detection. Some users, for example, may choose to view PDF files with FoxIt Reader rather than acrobat. Some user's browsers don't flag that they are Acrobat ready, and certainly not always in the same way.\nA better solution would have been to give the user a choice on how they'd like to view the relevant document. Personally, I don't like to have any website rely on a plug-in - it spoils the beauty of the web.\n" ]
[ 19, 3, 3, 2 ]
[]
[]
[ "asp.net", "browser", "javascript", "pdf" ]
stackoverflow_0000076179_asp.net_browser_javascript_pdf.txt
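For completeness, this is roughly what the plugin-sniffing approach looks like in JavaScript, using the ProgID strings mentioned above -- shown mainly to illustrate why the answers advise against relying on it, since an external viewer (the normal case on Linux, and common on locked-down Windows boxes) is invisible to both checks. Treat it as a sketch, not a recommendation.

    function hasPdfPlugin() {
        if (navigator.plugins && navigator.plugins.length) {          // Firefox, Safari, Opera
            for (var i = 0; i < navigator.plugins.length; i++) {
                if (/pdf/i.test(navigator.plugins[i].name)) { return true; }
            }
        } else if (window.ActiveXObject) {                            // IE, and Adobe-specific at that
            var progIds = ["AcroPDF.PDF", "PDF.PdfCtrl"];
            for (var j = 0; j < progIds.length; j++) {
                try { if (new ActiveXObject(progIds[j])) { return true; } } catch (e) { }
            }
        }
        return false;   // really means "no in-browser plugin found", not "cannot view PDFs"
    }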
Q: SVN checkout question I am about to move to SVN as my RCS of choice (after many years using CVS) and have a basic question... I have a number of shared projects - code that I want to use with lots of different projects. Is it possible to 'link' these shared folders to the projects that need them, so checking out a project will also checkout the shared code? For example, suppose my repository looks like this: root --project1 --project2 --shared --smtp When I checkout project1, I also want to checkout shared and smtp. Back in my CVS days I would of used a Unix symbolic link in one of the project folders, but as my new SVN repository won't necessarily be hosted on a Unix box, I can't do the same. A: SVN Externals are what you want to do. The SVN book explains it in great detail here. That's one thing I love about SVN, the wonderful documentation. A: You are looking for the "svn:externals" property. See this section of svnbook: Under the project that you want to use the shared project in, and then set a property on that directory named "svn:externals". This property contains the name of the directory which contains the external respository, and can have some other options so that you always get the same revision. Example (from svnbook, which is an EXCELLENT reference for svn questions): $ svn propget svn:externals calc third-party/sounds http://svn.example.com/repos/sounds third-party/skins -r148 http://svn.example.com/skinproj third-party/skins/toolkit -r21 http://svn.example.com/skin-maker in this example, third-party/sounds would be checked out from http://svn.example.com/repos/sounds. The -rNNN pins the checkout to a revision so that if you're doing more development on that, you can make sure your other projects don't randomly break. Generally instead of doing this revision thing, I external to a tag which holds a stable version. A: This is what svn "Externals" are for. A: Yes, have a look at SVN Externals. A: Yes, this mechanism is called "externals". See the book. A: SVN has a feature called "externals" which basically works the same way a symbolic UNIX link works (you point a certain directory that's one place in your repository to some other directory elsewhere in your repository). For further info on how to setup externals, see this: http://svnbook.red-bean.com/en/1.0/ch07s03.html Hope that helps.
SVN checkout question
I am about to move to SVN as my RCS of choice (after many years using CVS) and have a basic question... I have a number of shared projects - code that I want to use with lots of different projects. Is it possible to 'link' these shared folders to the projects that need them, so checking out a project will also check out the shared code? For example, suppose my repository looks like this: root --project1 --project2 --shared --smtp When I check out project1, I also want to check out shared and smtp. Back in my CVS days I would have used a Unix symbolic link in one of the project folders, but as my new SVN repository won't necessarily be hosted on a Unix box, I can't do the same.
[ "SVN Externals are what you want to do. The SVN book explains it in great detail here. That's one thing I love about SVN, the wonderful documentation.\n", "You are looking for the \"svn:externals\" property. See this section of svnbook: \nUnder the project that you want to use the shared project in, and then set a property on that directory named \"svn:externals\". This property contains the name of the directory which contains the external respository, and can have some other options so that you always get the same revision. \nExample (from svnbook, which is an EXCELLENT reference for svn questions): \n$ svn propget svn:externals calc\nthird-party/sounds http://svn.example.com/repos/sounds\nthird-party/skins -r148 http://svn.example.com/skinproj\nthird-party/skins/toolkit -r21 http://svn.example.com/skin-maker\n\nin this example, third-party/sounds would be checked out from http://svn.example.com/repos/sounds. The -rNNN pins the checkout to a revision so that if you're doing more development on that, you can make sure your other projects don't randomly break. Generally instead of doing this revision thing, I external to a tag which holds a stable version. \n", "This is what svn \"Externals\" are for.\n", "Yes, have a look at SVN Externals.\n", "Yes, this mechanism is called \"externals\". See the book.\n", "SVN has a feature called \"externals\" which basically works the same way a symbolic UNIX link works (you point a certain directory that's one place in your repository to some other directory elsewhere in your repository).\nFor further info on how to setup externals, see this:\nhttp://svnbook.red-bean.com/en/1.0/ch07s03.html\nHope that helps.\n" ]
[ 11, 2, 0, 0, 0, 0 ]
[]
[]
[ "svn" ]
stackoverflow_0000087849_svn.txt
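Applied to the repository layout in the question above, the property goes on each project that needs the shared code; the URLs are placeholders for wherever the repository really lives, and the format shown is the pre-1.5 "localdir [-rREV] URL" form used in the answers.

    $ svn checkout http://svn.example.com/repos/root/project1
    $ svn propedit svn:externals project1
        (property contents -- one "localdir URL" pair per line:)
        shared http://svn.example.com/repos/root/shared
        smtp   http://svn.example.com/repos/root/smtp
    $ svn commit -m "Pull shared and smtp in via svn:externals" project1

From then on, a plain checkout or update of project1 also fetches shared/ and smtp/ as nested working copies, which is the behaviour the old CVS symlink provided.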
Q: How to match URIs in text? How would one go about spotting URIs in a block of text? The idea is to turn such runs of texts into links. This is pretty simple to do if one only considered the http(s) and ftp(s) schemes; however, I am guessing the general problem (considering tel, mailto and other URI schemes) is much more complicated (if it is even possible). I would prefer a solution in C# if possible. Thank you. A: Regexs may prove a good starting point for this, though URIs and URLs are notoriously difficult to match with a single pattern. To illustrate, the simplest of patterns looks fairly complicated (in Perl 5 notation): \w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))* This would match http://example.com/foo/bar-baz and ftp://192.168.0.1/foo/file.txt but would cause problems for at least these: mailto:[email protected] (no match - no //, but present @) ftp://192.168.0.1.2 (match, but too many numbers, so it's not a valid URI) ftp://1000.120.0.1 (match, but the IP address needs numbers between 0 and 255, so it's not a valid URI) nonexistantscheme://obvious.false.positive http://www.google.com/search?q=uri+regular+expression (match, but query isn't I think this is a case of the 80:20 rule. If you want to catch most things, then I would do as suggested an find a decent regular expression if you can't write one yourself. If you're looking at text pulled from fairly controlled sources (e.g. machine generated), then this will the best course of action. If you absolutely positively have to catch every URI that you encounter, and you're looking at text from the wild, then I think I would look for any word with a colon in it e.g. \s(\w:\S+)\s. Once you have a suitable candidate for a URI, then pass it to the a real URI parser in the URI class of whatever library you're using. If you're interested in why it's so hard to write a URI pattern, the I guess it would be that the definition of a URI is done with a Type-2 grammar, while regular expressions can only parse languages from Type-3 grammars. A: Whether or not something is a URI is context-dependent. In general the only thing they always have in common is that they start "scheme_name:". The scheme name can be anything (subject to legal characters). But other strings also contain colons without being URIs. So you need to decide what schemes you're interested in. Generally you can get away with searching for "scheme_name:", followed by characters up to a space, for each scheme you care about. Unfortunately URIs can contain spaces, so if they're embedded in text they are potentially ambiguous. There's nothing you can do to resolve the ambiguity - the person who wrote the text would have to fix it. URIs can optionally be enclosed in <>. Most people don't do that, though, so recognising that format will only occasionally help. The Wikipedia article for URI lists the relevant RFCs. [Edit to add: using regular expressions to fully validate URIs is a nightmare - even if you somehow find or create one that's correct, it will be very large and difficult to comment and maintain. Fortunately, if all you're doing is highlighting links, you probably don't care about the odd false positive, so you don't need to validate. Just look for "http://", "mailto:\S*@", etc] A: For a lot of the protocols you could just search for "://" without the quotes. Not sure about the others though. 
A: Here is a code snippet with regular expressions for various needs: http://snipplr.com/view/6889/regular-expressions-for-uri-validationparsing/ A: That is not easy to do, if you want to also match "something.tld", because normal text will have many instances of that pattern, but if you want to match only URIs that begin with a scheme, you can try this regular expression (sorry, I don't know how to plug it in C#) (http|https|ftp|mailto|tel):\S+[/a-zA-Z0-9] You can add more schemes there, and it will match the scheme until the next whitespace character, taking into account that the last character is not invalid (for example as in the very usual string "http://www.example.com.") A: the URL Tool for Ubiquity does the following: findURLs: function(text) { var urls = []; var matches = text.match(/(\S+\.{1}[^\s\,\.\!]+)/g); if (matches) { for each (var match in matches) { urls.push(match); } } return urls; },
How to match URIs in text?
How would one go about spotting URIs in a block of text? The idea is to turn such runs of texts into links. This is pretty simple to do if one only considered the http(s) and ftp(s) schemes; however, I am guessing the general problem (considering tel, mailto and other URI schemes) is much more complicated (if it is even possible). I would prefer a solution in C# if possible. Thank you.
[ "Regexs may prove a good starting point for this, though URIs and URLs are notoriously difficult to match with a single pattern. \nTo illustrate, the simplest of patterns looks fairly complicated (in Perl 5 notation):\n\\w+:\\/{2}[\\d\\w-]+(\\.[\\d\\w-]+)*(?:(?:\\/[^\\s/]*))*\nThis would match \nhttp://example.com/foo/bar-baz\nand \nftp://192.168.0.1/foo/file.txt\nbut would cause problems for at least these:\n\nmailto:[email protected] (no match - no //, but present @)\nftp://192.168.0.1.2 (match, but too many numbers, so it's not a valid URI)\nftp://1000.120.0.1 (match, but the IP address needs numbers between 0 and 255, so it's not a valid URI)\nnonexistantscheme://obvious.false.positive\nhttp://www.google.com/search?q=uri+regular+expression (match, but query isn't \nI think this is a case of the 80:20 rule. If you want to catch most things, then I would do as suggested an find a decent regular expression if you can't write one yourself. \n\nIf you're looking at text pulled from fairly controlled sources (e.g. machine generated), then this will the best course of action.\nIf you absolutely positively have to catch every URI that you encounter, and you're looking at text from the wild, then I think I would look for any word with a colon in it e.g. \\s(\\w:\\S+)\\s. Once you have a suitable candidate for a URI, then pass it to the a real URI parser in the URI class of whatever library you're using.\nIf you're interested in why it's so hard to write a URI pattern, the I guess it would be that the definition of a URI is done with a Type-2 grammar, while regular expressions can only parse languages from Type-3 grammars.\n", "Whether or not something is a URI is context-dependent. In general the only thing they always have in common is that they start \"scheme_name:\". The scheme name can be anything (subject to legal characters). But other strings also contain colons without being URIs.\nSo you need to decide what schemes you're interested in. Generally you can get away with searching for \"scheme_name:\", followed by characters up to a space, for each scheme you care about. Unfortunately URIs can contain spaces, so if they're embedded in text they are potentially ambiguous. There's nothing you can do to resolve the ambiguity - the person who wrote the text would have to fix it. URIs can optionally be enclosed in <>. Most people don't do that, though, so recognising that format will only occasionally help.\nThe Wikipedia article for URI lists the relevant RFCs.\n[Edit to add: using regular expressions to fully validate URIs is a nightmare - even if you somehow find or create one that's correct, it will be very large and difficult to comment and maintain. Fortunately, if all you're doing is highlighting links, you probably don't care about the odd false positive, so you don't need to validate. Just look for \"http://\", \"mailto:\\S*@\", etc]\n", "For a lot of the protocols you could just search for \"://\" without the quotes. 
Not sure about the others though.\n", "Here is a code snippet with regular expressions for various needs:\nhttp://snipplr.com/view/6889/regular-expressions-for-uri-validationparsing/\n", "That is not easy to do, if you want to also match \"something.tld\", because normal text will have many instances of that pattern, but if you want to match only URIs that begin with a scheme, you can try this regular expression (sorry, I don't know how to plug it in C#)\n(http|https|ftp|mailto|tel):\\S+[/a-zA-Z0-9]\n\nYou can add more schemes there, and it will match the scheme until the next whitespace character, taking into account that the last character is not invalid (for example as in the very usual string \"http://www.example.com.\")\n", "the URL Tool for Ubiquity does the following:\nfindURLs: function(text) {\n var urls = [];\n var matches = text.match(/(\\S+\\.{1}[^\\s\\,\\.\\!]+)/g);\n if (matches) {\n for each (var match in matches) {\n urls.push(match);\n }\n }\n return urls;\n},\n\n" ]
[ 7, 1, 0, 0, 0, 0 ]
[ "The following perl regexp should pull do the trick. Does c# have perl regexps?\n/\\w+:\\/\\/[\\w][\\w\\.\\/]*/\n\n" ]
[ -1 ]
[ "textmatching", "uri" ]
stackoverflow_0000082398_textmatching_uri.txt
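Since the question asks for C#, here is a hedged sketch of the two-step approach suggested above: a deliberately loose pattern to find candidates, then System.Uri to decide whether each one is really well formed. The scheme list, the pattern, and the Linkify helper name are all assumptions to adjust as needed (and the matched text should be HTML-encoded in real use).

    using System;
    using System.Text.RegularExpressions;

    static class UriLinker
    {
        // Candidate finder only -- System.Uri does the real validation below.
        static readonly Regex Candidate = new Regex(
            @"\b(?:https?|ftps?|mailto|tel):[^\s<>""]+",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);

        public static string Linkify(string text)
        {
            return Candidate.Replace(text, delegate(Match m)
            {
                return Uri.IsWellFormedUriString(m.Value, UriKind.Absolute)
                    ? string.Format("<a href=\"{0}\">{0}</a>", m.Value)
                    : m.Value;   // looked like a URI but did not parse -- leave the text alone
            });
        }
    }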
Q: What is the equivalent of Oracle's REF CURSOR in Postgresql when using JDBC? In Oracle I can declare a reference cursor... TYPE t_spool IS REF CURSOR RETURN spool%ROWTYPE; ...and use it to pass a cursor as the return value... FUNCTION end_spool RETURN t_spool AS v_spool t_spool; BEGIN COMMIT; OPEN v_spool FOR SELECT * FROM spool WHERE key = g_spool_key ORDER BY seq; RETURN v_spool; END end_spool; ...and then capture it as a result set using JDBC... private Connection conn; private CallableStatement stmt; private OracleResultSet rset; [...clip...] stmt = conn.prepareCall("{ ? = call " + call + "}"); stmt.registerOutParameter(1, OracleTypes.CURSOR); stmt.execute(); rset = (OracleResultSet)stmt.getObject(1); What is the equivalent in Postgresql? A: Maybe this will help: http://jdbc.postgresql.org/documentation/83/callproc.html#callproc-resultset-setof I haven't really messed with that before :P
What is the equivalent of Oracle's REF CURSOR in Postgresql when using JDBC?
In Oracle I can declare a reference cursor... TYPE t_spool IS REF CURSOR RETURN spool%ROWTYPE; ...and use it to pass a cursor as the return value... FUNCTION end_spool RETURN t_spool AS v_spool t_spool; BEGIN COMMIT; OPEN v_spool FOR SELECT * FROM spool WHERE key = g_spool_key ORDER BY seq; RETURN v_spool; END end_spool; ...and then capture it as a result set using JDBC... private Connection conn; private CallableStatement stmt; private OracleResultSet rset; [...clip...] stmt = conn.prepareCall("{ ? = call " + call + "}"); stmt.registerOutParameter(1, OracleTypes.CURSOR); stmt.execute(); rset = (OracleResultSet)stmt.getObject(1); What is the equivalent in Postgresql?
[ "Maybe this will help: http://jdbc.postgresql.org/documentation/83/callproc.html#callproc-resultset-setof\nI haven't really messed with that before :P\n" ]
[ 4 ]
[]
[]
[ "java", "jdbc", "oracle", "plsql", "postgresql" ]
stackoverflow_0000087603_java_jdbc_oracle_plsql_postgresql.txt
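The pattern behind that link boils down to the sketch below; the function is the question's end_spool rewritten with the g_spool_key global as an ordinary parameter, and the two things that differ from the Oracle version are that autocommit must be off (a refcursor is only valid inside the transaction that opened it) and that the out parameter is registered as Types.OTHER and cast to a plain java.sql.ResultSet.

    -- PL/pgSQL side: the return type is simply refcursor
    CREATE OR REPLACE FUNCTION end_spool(p_spool_key integer) RETURNS refcursor AS $$
    DECLARE
        v_spool refcursor;
    BEGIN
        OPEN v_spool FOR SELECT * FROM spool WHERE key = p_spool_key ORDER BY seq;
        RETURN v_spool;
    END;
    $$ LANGUAGE plpgsql;

    // JDBC side (spoolKey is whatever identifies the spool run)
    conn.setAutoCommit(false);                        // required for refcursors
    CallableStatement stmt = conn.prepareCall("{ ? = call end_spool(?) }");
    stmt.registerOutParameter(1, java.sql.Types.OTHER);
    stmt.setInt(2, spoolKey);
    stmt.execute();
    ResultSet rset = (ResultSet) stmt.getObject(1);
    while (rset.next()) { /* ... */ }
    rset.close();
    stmt.close();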
Q: Avoiding double-thunking with C++/CLI properties I've read (in Nish Sivakumar's book C++/CLI In Action among other places) that you should use the __clrcall decorator on function calls to avoid double-thunking, in cases where you know that the method will never be called from unmanaged code. Nish also says that if the method signature contains any CLR types, then the JIT compiler will automatically add the __clrcall. What is not clear to me is if I need to include __clrcall when I create C++/CLI properties. In one sense, properties are only accessible from .NET languages, on the other hand the C++/CLI compiler (I think) just generates methods (e.g. ***_get() ) that are callable from both managed and unmanaged code. So do I need to use the __clrcall modifier on my properties, and if so, where does it go? On the get/set functions themselves? A: @Mike B - Thanks for the tip on ildasm - I didn't know about that tool. It appears that I misread/misunderstood Nish - the __clrcall modifier and the double-thunking problem it eliminates only apply to methods of NATIVE classes. All methods of Managed classes are __clrcall by default - which seems obvious in retrospect. Evidently Marcus Heege's book Expert C++/CLI is available as a free download, and it has a nice table on page 215 that summarizes the calling conventions.
Avoiding double-thunking with C++/CLI properties
I've read (in Nish Sivakumar's book C++/CLI In Action among other places) that you should use the __clrcall decorator on function calls to avoid double-thunking, in cases where you know that the method will never be called from unmanaged code. Nish also says that if the method signature contains any CLR types, then the JIT compiler will automatically add the __clrcall. What is not clear to me is if I need to include __clrcall when I create C++/CLI properties. In one sense, properties are only accessible from .NET languages, on the other hand the C++/CLI compiler (I think) just generates methods (e.g. ***_get() ) that are callable from both managed and unmanaged code. So do I need to use the __clrcall modifier on my properties, and if so, where does it go? On the get/set functions themselves?
[ "@Mike B - Thanks for the tip on ildasm - I didn't know about that tool.\nIt appears that I misread/misunderstood Nish - the __clrcall modifier and the double-thunking problem it eliminates only apply to methods of NATIVE classes. All methods of Managed classes are __clrcall by default - which seems obvious in retrospect.\nEvidently Marcus Heege's book Expert C++/CLI is available as a free download, and it has a nice table on page 215 that summarizes the calling conventions.\n" ]
[ 3 ]
[]
[]
[ ".net", "c++_cli", "properties" ]
stackoverflow_0000086977_.net_c++_cli_properties.txt
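A small sketch of where the modifier does and does not belong, following the conclusion above; Widget and NativeHelper are invented names and the file is assumed to be compiled with /clr.

    // Native class compiled with /clr: __clrcall on the member suppresses the
    // native entry point, so managed callers avoid the double thunk (and purely
    // native code can no longer call the method).
    class NativeHelper
    {
    public:
        int __clrcall Compute(int x) { return x * 2; }
    };

    // Managed (ref) class: the accessors are __clrcall already, so there is
    // nothing to add on the property or on its get/set functions.
    public ref class Widget
    {
        int length;
    public:
        property int Length
        {
            int get() { return length; }
            void set(int value) { length = value; }
        }
    };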
Q: Different versions of C++ libraries After compiling a simple C++ project using Visual Studio 2008 on vista, everything runs fine on the original vista machine and other vista computers. However, moving it over to an XP box results in an error message: "The application failed to start because the application configuration is incorrect". What do I have to do so my compiled EXE works on XP and Vista? I had this same problem a few months ago, and just fiddling with some settings on the project fixed it, but I don't remember which ones I changed. A: You need to install the Visual Studios 2008 runtime on the target computer: http://www.microsoft.com/downloads/details.aspx?FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf&displaylang=en Alternatively, you could also link the run time statically, in the project properties window go to: c++ -> Code Generation -> Runtime Library and select "multi-threaded /MT" A: You need to install the runtime redistributable files onto the machine you are trying to run the app on. The redistributable for 2008 is here. The redistributable for 2005 is here. They can be installed side-by-side, in case you need both. A: You probably need to distribute the VC runtime with your application. There are a variety of ways to do this. This article from the Microsoft Visual C++ Team best explains the different ways to distribute these dependencies if you are using Visual Studio 2005 or 2008. As stated in the article, though you can download the Redistributable installer package and simply launch that on the client machine, that is almost always not the optimal option. There are usually better ways to include the required DLLs such as including the merge module if you are distributing via Windows Setup or App-Local copy if you just want to distribute a zipped folder. Another option is to statically link against the runtime libraries, instead of distributing them with your application. This option is only suitable for standalone EXEs that do not load other DLLs. You also cannot do this with DLLs that are loaded by other applications. A: It is much the simplest to link to the runtime statically. c++ -> Code Generation -> Runtime Library and select "multi-threaded /MT" However, this does make your executable a couple hundred KByte larger. This might be a problem if you are installing a large number of small programs, since each will be burdened by its very own copy of the runtime. The answer is to create an installer. New project -> "setup and deployment" -> "setup project" Load the output from your application projects ( defined using the DLL version of the runtime ) into the installer project and build it. The dependency on the runtime DLL will be noticed, included in the installer package, and neatly and unobtrusively installed in the correct place on the target machine. A: Visual studio 2005 actually has two The one for the original release and the one for SP1
Different versions of C++ libraries
After compiling a simple C++ project using Visual Studio 2008 on vista, everything runs fine on the original vista machine and other vista computers. However, moving it over to an XP box results in an error message: "The application failed to start because the application configuration is incorrect". What do I have to do so my compiled EXE works on XP and Vista? I had this same problem a few months ago, and just fiddling with some settings on the project fixed it, but I don't remember which ones I changed.
[ "You need to install the Visual Studios 2008 runtime on the target computer:\n\nhttp://www.microsoft.com/downloads/details.aspx?FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf&displaylang=en\n\nAlternatively, you could also link the run time statically, in the project properties window go to:\n\nc++ -> Code Generation -> Runtime\n Library and select \"multi-threaded\n /MT\"\n\n", "You need to install the runtime redistributable files onto the machine you are trying to run the app on.\nThe redistributable for 2008 is here.\nThe redistributable for 2005 is here.\nThey can be installed side-by-side, in case you need both.\n", "You probably need to distribute the VC runtime with your application. There are a variety of ways to do this. This article from the Microsoft Visual C++ Team best explains the different ways to distribute these dependencies if you are using Visual Studio 2005 or 2008.\nAs stated in the article, though you can download the Redistributable installer package and simply launch that on the client machine, that is almost always not the optimal option. There are usually better ways to include the required DLLs such as including the merge module if you are distributing via Windows Setup or App-Local copy if you just want to distribute a zipped folder.\nAnother option is to statically link against the runtime libraries, instead of distributing them with your application. This option is only suitable for standalone EXEs that do not load other DLLs. You also cannot do this with DLLs that are loaded by other applications.\n", "It is much the simplest to link to the runtime statically.\nc++ -> Code Generation -> Runtime Library and select \"multi-threaded /MT\"\nHowever, this does make your executable a couple hundred KByte larger. This might be a problem if you are installing a large number of small programs, since each will be burdened by its very own copy of the runtime. The answer is to create an installer.\nNew project -> \"setup and deployment\" -> \"setup project\" \nLoad the output from your application projects ( defined using the DLL version of the runtime ) into the installer project and build it. The dependency on the runtime DLL will be noticed, included in the installer package, and neatly and unobtrusively installed in the correct place on the target machine.\n", "Visual studio 2005 actually has two\nThe one for the original release\nand the one for SP1\n" ]
[ 6, 1, 1, 0, 0 ]
[]
[]
[ "c++", "windows", "windows_vista", "windows_xp" ]
stackoverflow_0000087405_c++_windows_windows_vista_windows_xp.txt
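One quick way to see which of the two fixes above a particular build still needs is to inspect the EXE's imports from a Visual Studio 2008 command prompt (MyApp.exe is a placeholder):

    dumpbin /dependents MyApp.exe

A /MD build lists MSVCR90.DLL (plus MSVCP90.DLL when the C++ standard library is used), which is exactly what the XP machine is missing; installing the VC9 redistributable there, or rebuilding with /MT, makes the dependency -- and the error -- go away.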
Q: How do I use my own compiler with Nant? Nant seems very compiler-centric - which is guess is because it's considered a .NET development system. But I know it can be done! I've seen it. The platform we're building on has its own compiler and doesn't use 'cl.exe' for c++. We're building a C++ app on a different platform and would like to override with our own compiler. Can anyone point me at a way to do that or at least how to set up a target of my own that will use our target platform's compiler? A: Here is one I did for Delphi. Each 'arg' is a separate param with a value defined elsewhere. The target is called with the params set up before calling it. <target name="build.application"> <exec program="dcc32" basedir="${Delphi.Bin}" workingdir="${Application.Folder}" verbose="true"> <arg value="${Application.Compiler.Directive}" /> <arg value="-Q" /> <arg value="/B" /> <arg value="/E${Application.Output.Folder}" /> <arg value="/U${Application.Lib.Folder};${Application.Search.Folder}" /> <arg value="${Application.Folder}\${Delphi.Project}" /> </exec> </target> A: You need to write your own task. This is a nice reference. A: Initially, use the <exec> task to run an executable, passing in any required information as parameters and/or environment variables. For future use, you could also investigate writing your own task. I know with standard ant this is done with the <taskdef> task and a java class. I'm not sure of the Nant equivalent unfortunately. A: You could also use the <exec> task.
How do I use my own compiler with Nant?
Nant seems very compiler-centric - which I guess is because it's considered a .NET development system. But I know it can be done! I've seen it. The platform we're building on has its own compiler and doesn't use 'cl.exe' for C++. We're building a C++ app on a different platform and would like to override with our own compiler. Can anyone point me at a way to do that or at least how to set up a target of my own that will use our target platform's compiler?
[ "Here is one I did for Delphi. Each 'arg' is a separate param with a value defined elsewhere. The target is called with the params set up before calling it.\n<target name=\"build.application\">\n <exec program=\"dcc32\" basedir=\"${Delphi.Bin}\" workingdir=\"${Application.Folder}\" verbose=\"true\">\n <arg value=\"${Application.Compiler.Directive}\" />\n <arg value=\"-Q\" />\n <arg value=\"/B\" />\n <arg value=\"/E${Application.Output.Folder}\" />\n <arg value=\"/U${Application.Lib.Folder};${Application.Search.Folder}\" />\n <arg value=\"${Application.Folder}\\${Delphi.Project}\" />\n </exec>\n</target>\n\n", "You need to write your own task. This is a nice reference.\n", "Initially, use the <exec> task to run an executable, passing in any required information as parameters and/or environment variables.\nFor future use, you could also investigate writing your own task. I know with standard ant this is done with the <taskdef> task and a java class. I'm not sure of the Nant equivalent unfortunately.\n", "You could also use the <exec> task.\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ "build", "c++", "cross_platform", "makefile", "nant" ]
stackoverflow_0000087831_build_c++_cross_platform_makefile_nant.txt
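Adapted from the Delphi example above to the cross-compiler case in the question: an <exec>-based target, with the program name, switches, and properties all placeholders for whatever the platform's toolchain actually expects.

    <target name="build.firmware">
      <exec program="${Toolchain.Dir}\mycc.exe" workingdir="${Src.Dir}" verbose="true">
        <arg value="-O2" />
        <arg value="-o" />
        <arg value="${Out.Dir}\app.elf" />
        <arg value="${Src.Dir}\main.cpp" />
      </exec>
    </target>

If this grows beyond a couple of projects, wrapping the same call in a custom NAnt task (the approach in the reference linked above) keeps the build files tidier.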
Q: Copying Files over an Intermittent Network Connection I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls. Thanks! A: I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across. A: Try using BITS (Background Intelligent Transfer Service). It's the infrastructure that Windows Update uses, is accessible via the Win32 API, and is built specifically to address this. It's usually used for application updates, but should work well in any file moving situation. http://www.codeproject.com/KB/IP/bitsman.aspx A: I'm unclear as to what your actual problem is, so I'll throw out a few thoughts. Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPYFILERESTARTABLE Do you want verifiable copies? Sounds like you already have that by verifying hashes. Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help. Do you just want a fire and forget operation? I suppose shelling out to robocopy, or TeraCopy or something would work - but it seems a bit hacky to me. Do you want to know when the network comes back? IsNetworkAlive has your answer. Based on what I know so far, I think the following pseudo-code would be my approach: sourceFile = Compress("*.*"); destFile = "X:\files.zip"; int copyFlags = COPYFILEFAILIFEXISTS | COPYFILERESTARTABLE; while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) { do { // optionally, increment a failed counter to break out at some point Sleep(1000); while (!IsNetworkAlive(NETWORKALIVELAN)); } Compressing the files first saves you the tracking of which files you've successfully copied, and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side. A: I agree with Robocopy as a solution...thats why the utility is called "Robust File Copy" I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across. And by default, a million retries. That should be plenty for your intermittent connection. It also does restartable transfers and you can even throttle transfers with a gap between packets assuing you don't want to use all the bandwidth as other programs are using the same connection (/IPG switch)?. A: How about simply sending a hash after or before you send the file, and comparing that with the file you received? That should at least make sure you have a correct file. If you want to go all out you could do the same process, but for small parts of the file. Then when you have all pieces, join them on the receiving end. A: You could use Microsoft SyncToy (free). http://www.microsoft.com/Downloads/details.aspx?familyid=C26EFA36-98E0-4EE9-A7C5-98D0592D8C52&displaylang=en A: Hm, seems rsync does it, and does not need server/daemon/install I thought it does - just $ rsync src dst.
Copying Files over an Intermittent Network Connection
I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls. Thanks!
[ "I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across.\n", "Try using BITS (Background Intelligent Transfer Service). It's the infrastructure that Windows Update uses, is accessible via the Win32 API, and is built specifically to address this.\nIt's usually used for application updates, but should work well in any file moving situation.\nhttp://www.codeproject.com/KB/IP/bitsman.aspx\n", "I'm unclear as to what your actual problem is, so I'll throw out a few thoughts.\n\nDo you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPYFILERESTARTABLE\nDo you want verifiable copies? Sounds like you already have that by verifying hashes.\nDo you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help.\nDo you just want a fire and forget operation? I suppose shelling out to robocopy, or TeraCopy or something would work - but it seems a bit hacky to me.\nDo you want to know when the network comes back? IsNetworkAlive has your answer.\n\nBased on what I know so far, I think the following pseudo-code would be my approach:\nsourceFile = Compress(\"*.*\");\ndestFile = \"X:\\files.zip\";\n\nint copyFlags = COPYFILEFAILIFEXISTS | COPYFILERESTARTABLE;\nwhile (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) {\n do {\n // optionally, increment a failed counter to break out at some point\n Sleep(1000);\n while (!IsNetworkAlive(NETWORKALIVELAN));\n}\n\nCompressing the files first saves you the tracking of which files you've successfully copied, and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side.\n", "I agree with Robocopy as a solution...thats why the utility is called \"Robust File Copy\"\n\nI've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across.\n\nAnd by default, a million retries. That should be plenty for your intermittent connection. \nIt also does restartable transfers and you can even throttle transfers with a gap between packets assuing you don't want to use all the bandwidth as other programs are using the same connection (/IPG switch)?.\n", "How about simply sending a hash after or before you send the file, and comparing that with the file you received? That should at least make sure you have a correct file.\nIf you want to go all out you could do the same process, but for small parts of the file. Then when you have all pieces, join them on the receiving end.\n", "You could use Microsoft SyncToy (free).\nhttp://www.microsoft.com/Downloads/details.aspx?familyid=C26EFA36-98E0-4EE9-A7C5-98D0592D8C52&displaylang=en\n", "Hm, seems rsync does it, and does not need server/daemon/install I thought it does - just $ rsync src dst.\n" ]
[ 14, 5, 5, 2, 0, 0, 0 ]
[ "SMS if it's available works.\n" ]
[ -1 ]
[ "intermittent", "network_programming", "vb6", "windows", "wireless" ]
stackoverflow_0000018172_intermittent_network_programming_vb6_windows_wireless.txt
Q: Anyone have sample code for a UserControl with pager controls to be used in a GridView's PagerTemplate? I've got several Gridviews in my application in which I use a custom PagerTemplate. I'd like to turn this custom template into a UserControl so that I don't need to replicate the same logic in multiple pages. I'm pretty sure that such a thing is possible, but I'm unsure of how exactly to wire the UserControl to the Gridview's events, and what interfaces my control may need to implement. I'm using ASP 2.0 frameworks. Has anyone done something like this? And if so, do you have any sample code for your usercontrol? A: Dave Anderson, a co-worker of mine, wrote this server control that could help you get started. Note that we're targeting .NET 3.5. [AspNetHostingPermission( SecurityAction.Demand, Level = AspNetHostingPermissionLevel.Minimal), AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal), DefaultProperty("Text"), ToolboxData("<{0}:Pager runat=\"server\"> </{0}:Pager>"), Designer(typeof(ServerControls.Design.PagerDesigner)) ] public class Pager : WebControl, INamingContainer { #region Private Constants private const string Command_First = "First"; private const string Command_Prev = "Prev"; private const string Command_Next = "Next"; private const string Command_Last = "Last"; #endregion #region Private members private Control PageableNamingContainer; private PropertyInfo PageCountInfo; private PropertyInfo PageIndexInfo; private DropDownList ddlCurrentPage; private Label lblPageCount; private Button btnFirst; private Button btnPrevious; private Button btnNext; private Button btnLast; #endregion #region Private Properties private int PageCount { get { int Result; if (InsideDataPager) Result = (int)Math.Ceiling((decimal)(TotalRowCount / PageSize)) + 1; else Result = (int)PageCountInfo.GetValue(PageableNamingContainer, null); return Result; } } private int PageIndex { get { int Result; if (InsideDataPager) Result = (int)Math.Floor((decimal)(StartRowIndex / PageSize)); else Result = (int)PageIndexInfo.GetValue(PageableNamingContainer, null); return Result; } } private int StartRowIndex { get { if (InsideDataPager) return MyDataPager.StartRowIndex; else throw new Exception("DataPager functionality requires DataPager."); } } private int TotalRowCount { get { if (InsideDataPager) return MyDataPager.TotalRowCount; else throw new Exception("DataPager functionality requires DataPager."); } } private int PageSize { get { if (InsideDataPager) return MyDataPager.PageSize; else throw new Exception("DataPager functionality requires DataPager."); } } private bool InsideDataPager { get { return ViewState["InsideDataPager"] == null ? false : (bool)ViewState["InsideDataPager"]; } set { ViewState["InsideDataPager"] = value; } } #region DataPager-Specific properties private DataPager MyDataPager { get { if (InsideDataPager) return (DataPager)PageableNamingContainer; else throw new Exception("DataPager functionality requires DataPager."); } } private int PrevPageStartIndex { get { return StartRowIndex >= PageSize ? StartRowIndex - PageSize : 0; } } private int NextPageStartIndex { get { return StartRowIndex + PageSize >= TotalRowCount ? 
LastPageStartIndex : StartRowIndex + PageSize; } } private int LastPageStartIndex { get { return (PageCount-1) * PageSize; } } #endregion #endregion #region Public Properties [ Category("Behavior"), DefaultValue(""), Description("The stylesheet class to use for the buttons") ] public bool HideInactiveButtons { get; set; } [ Category("Behavior"), DefaultValue("true"), Description("Indicates whether the controls will invoke validation routines") ] public bool CausesValidation { get; set; } [ Category("Appearance"), DefaultValue(""), Description("The stylesheet class to use for the buttons") ] public string ButtonCssClass { get; set; } [ Category("Appearance"), DefaultValue("<<"), Description("The text to be shown on the button that navigates to the First page") ] public string FirstText { get; set; } [ Category("Appearance"), DefaultValue("<"), Description("The text to be shown on the button that navigates to the Previous page") ] public string PreviousText { get; set; } [ Category("Appearance"), DefaultValue(">"), Description("The text to be shown on the button that navigates to the Next page") ] public string NextText { get; set; } [ Category("Appearance"), DefaultValue(">>"), Description("The text to be shown on the button that navigates to the Last page") ] public string LastText { get; set; } #endregion #region Overridden properties public override ControlCollection Controls { get { EnsureChildControls(); return base.Controls; } } #endregion #region Overridden methods/events protected override void OnLoad(EventArgs e) { base.OnLoad(e); if (!GetPageInfo(NamingContainer)) throw new Exception("Unable to locate the Pageable Container."); } protected override void OnPreRender(EventArgs e) { base.OnPreRender(e); if (PageableNamingContainer != null) { EnsureChildControls(); ddlCurrentPage.Items.Clear(); for (int i = 0; i < PageCount; i++) ddlCurrentPage.Items.Add(new ListItem((i + 1).ToString(), (i + 1).ToString())); lblPageCount.Text = PageCount.ToString(); if (HideInactiveButtons) { btnFirst.Visible = btnPrevious.Visible = (PageIndex > 0); btnLast.Visible = btnNext.Visible = (PageIndex < (PageCount - 1)); } else { btnFirst.Enabled = btnPrevious.Enabled = (PageIndex > 0); btnLast.Enabled = btnNext.Enabled = (PageIndex < (PageCount - 1)); } ddlCurrentPage.SelectedIndex = PageIndex; } else ddlCurrentPage.SelectedIndex = 0; } protected override bool OnBubbleEvent(object source, EventArgs args) { // We handle all our events inside this class when // we are inside a DataPager return InsideDataPager; } #endregion #region Event delegate protected void PagerEvent(object sender, EventArgs e) { if (InsideDataPager) { int NewStartingIndex; if (sender.GetType() == typeof(Button)) { string arg = ((Button)sender).CommandArgument.ToString(); switch (arg) { case Command_Prev: NewStartingIndex = PrevPageStartIndex; break; case Command_Next: NewStartingIndex = NextPageStartIndex; break; case Command_Last: NewStartingIndex = LastPageStartIndex; break; case Command_First: default: NewStartingIndex = 0; break; } } else { NewStartingIndex = Math.Min(((DropDownList)sender).SelectedIndex * PageSize, LastPageStartIndex); } MyDataPager.SetPageProperties(NewStartingIndex, MyDataPager.MaximumRows, true); } else { CommandEventArgs ea = new CommandEventArgs("Page", ((DropDownList)sender).SelectedValue); RaiseBubbleEvent(this, ea); } } #endregion #region GetPageableContainer private bool GetPageInfo(Control namingContainer) { if (namingContainer == null || namingContainer.GetType() == typeof(Page)) throw new 
Exception(this.GetType().ToString() + " must be used in a pageable container like a GridView."); /* * NOTE: If we are inside a DataPager, this will be * our first-level NamingContainer, so there * will never be any reflection in that case. */ if (namingContainer.GetType() == typeof(DataPagerFieldItem)) { InsideDataPager = true; PageableNamingContainer = ((DataPagerFieldItem)namingContainer).Pager; return true; } PageCountInfo = namingContainer.GetType().GetProperty("PageCount"); PageIndexInfo = namingContainer.GetType().GetProperty("PageIndex"); if (PageCountInfo == null || PageIndexInfo == null) return GetPageInfo(namingContainer.NamingContainer); else { PageableNamingContainer = namingContainer; return true; } } #endregion #region Control generation protected override void CreateChildControls() { Controls.Clear(); Controls.Add(BuildControlTable()); } private Table BuildControlTable() { Table ControlTable = new Table(); ControlTable.CssClass = CssClass; TableRow tr = new TableRow(); TableCell td = new TableCell(); td.Text = "Page"; tr.Cells.Add(td); td = new TableCell(); ddlCurrentPage = new DropDownList(); ddlCurrentPage.ID = "ddlCurrentPage"; ddlCurrentPage.AutoPostBack = true; ddlCurrentPage.SelectedIndexChanged += PagerEvent; ddlCurrentPage.CausesValidation = CausesValidation; td.Controls.Add(ddlCurrentPage); tr.Cells.Add(td); td = new TableCell(); td.Text = "of"; tr.Cells.Add(td); td = new TableCell(); lblPageCount = new Label(); td.Controls.Add(lblPageCount); tr.Cells.Add(td); AddButton(tr, ref btnFirst, string.IsNullOrEmpty(FirstText) ? "<<" : FirstText, Command_First); AddButton(tr, ref btnPrevious, string.IsNullOrEmpty(PreviousText) ? "<" : PreviousText, Command_Prev); AddButton(tr, ref btnNext, string.IsNullOrEmpty(NextText) ? ">" : NextText, Command_Next); AddButton(tr, ref btnLast, string.IsNullOrEmpty(LastText) ? ">>" : LastText, Command_Last); ControlTable.Rows.Add(tr); return ControlTable; } private void AddButton(TableRow row, ref Button button, string text, string argument) { button = new Button(); button.Text = text; button.CssClass = ButtonCssClass; button.CommandName = "Page"; button.CommandArgument = argument; button.CausesValidation = CausesValidation; if (InsideDataPager) button.Click += PagerEvent; TableCell td = new TableCell(); td.Controls.Add(button); row.Cells.Add(td); } #endregion }
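To show how a control like the Pager above would be dropped into a GridView's PagerTemplate, here is a markup sketch; the tag prefix, namespace, assembly name, and event handler name are assumed for illustration, not part of the answer:

    <%@ Register TagPrefix="cc" Namespace="MyCompany.ServerControls" Assembly="MyCompany.ServerControls" %>

    <asp:GridView ID="GridView1" runat="server" AllowPaging="true" PageSize="20"
        OnPageIndexChanging="GridView1_PageIndexChanging">
        <PagerTemplate>
            <cc:Pager ID="GridPager" runat="server" HideInactiveButtons="true" ButtonCssClass="pagerButton" />
        </PagerTemplate>
    </asp:GridView>

Because the control bubbles a standard "Page" command when it is not sitting inside a DataPager, the GridView treats it like a click on its built-in pager buttons, so the usual PageIndexChanging handling (set PageIndex and rebind) still applies.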
Anyone have sample code for a UserControl with pager controls to be used in a GridView's PagerTemplate?
I've got several Gridviews in my application in which I use a custom PagerTemplate. I'd like to turn this custom template into a UserControl so that I don't need to replicate the same logic in multiple pages. I'm pretty sure that such a thing is possible, but I'm unsure of how exactly to wire the UserControl to the Gridview's events, and what interfaces my control may need to implement. I'm using ASP 2.0 frameworks. Has anyone done something like this? And if so, do you have any sample code for your usercontrol?
[ "Dave Anderson, a co-worker of mine, wrote this server control that could help you get started. Note that we're targeting .NET 3.5.\n[AspNetHostingPermission(\n SecurityAction.Demand,\n Level = AspNetHostingPermissionLevel.Minimal),\n AspNetHostingPermission(SecurityAction.InheritanceDemand,\n Level = AspNetHostingPermissionLevel.Minimal),\n DefaultProperty(\"Text\"),\n ToolboxData(\"<{0}:Pager runat=\\\"server\\\"> </{0}:Pager>\"),\n Designer(typeof(ServerControls.Design.PagerDesigner))\n]\npublic class Pager : WebControl, INamingContainer\n{\n #region Private Constants\n\n private const string Command_First = \"First\";\n private const string Command_Prev = \"Prev\";\n private const string Command_Next = \"Next\";\n private const string Command_Last = \"Last\";\n\n #endregion\n\n #region Private members\n\n private Control PageableNamingContainer;\n private PropertyInfo PageCountInfo;\n private PropertyInfo PageIndexInfo;\n private DropDownList ddlCurrentPage;\n private Label lblPageCount;\n private Button btnFirst;\n private Button btnPrevious;\n private Button btnNext;\n private Button btnLast;\n\n #endregion\n\n #region Private Properties\n\n private int PageCount\n {\n get\n {\n int Result;\n if (InsideDataPager)\n Result = (int)Math.Ceiling((decimal)(TotalRowCount / PageSize)) + 1;\n else\n Result = (int)PageCountInfo.GetValue(PageableNamingContainer, null);\n\n return Result;\n }\n }\n private int PageIndex\n {\n get\n {\n int Result;\n if (InsideDataPager)\n Result = (int)Math.Floor((decimal)(StartRowIndex / PageSize));\n else\n Result = (int)PageIndexInfo.GetValue(PageableNamingContainer, null);\n\n return Result;\n }\n }\n\n private int StartRowIndex\n {\n get\n {\n if (InsideDataPager)\n return MyDataPager.StartRowIndex;\n else\n throw new Exception(\"DataPager functionality requires DataPager.\");\n }\n }\n private int TotalRowCount\n {\n get\n {\n if (InsideDataPager)\n return MyDataPager.TotalRowCount;\n else\n throw new Exception(\"DataPager functionality requires DataPager.\");\n }\n }\n private int PageSize\n {\n get\n {\n if (InsideDataPager)\n return MyDataPager.PageSize;\n else\n throw new Exception(\"DataPager functionality requires DataPager.\");\n }\n }\n\n private bool InsideDataPager\n {\n get { return ViewState[\"InsideDataPager\"] == null ? false : (bool)ViewState[\"InsideDataPager\"]; }\n set { ViewState[\"InsideDataPager\"] = value; }\n }\n\n #region DataPager-Specific properties\n\n private DataPager MyDataPager\n {\n get\n {\n if (InsideDataPager)\n return (DataPager)PageableNamingContainer;\n else\n throw new Exception(\"DataPager functionality requires DataPager.\");\n }\n }\n private int PrevPageStartIndex\n {\n get { return StartRowIndex >= PageSize ? StartRowIndex - PageSize : 0; }\n }\n private int NextPageStartIndex\n {\n get { return StartRowIndex + PageSize >= TotalRowCount ? 
LastPageStartIndex : StartRowIndex + PageSize; }\n }\n private int LastPageStartIndex\n {\n get { return (PageCount-1) * PageSize; }\n }\n\n #endregion\n\n #endregion\n\n #region Public Properties\n\n [\n Category(\"Behavior\"),\n DefaultValue(\"\"),\n Description(\"The stylesheet class to use for the buttons\")\n ]\n public bool HideInactiveButtons { get; set; }\n\n [\n Category(\"Behavior\"),\n DefaultValue(\"true\"),\n Description(\"Indicates whether the controls will invoke validation routines\")\n ]\n public bool CausesValidation { get; set; }\n\n [\n Category(\"Appearance\"),\n DefaultValue(\"\"),\n Description(\"The stylesheet class to use for the buttons\")\n ]\n public string ButtonCssClass { get; set; }\n\n [\n Category(\"Appearance\"),\n DefaultValue(\"<<\"),\n Description(\"The text to be shown on the button that navigates to the First page\")\n ]\n public string FirstText { get; set; }\n [\n Category(\"Appearance\"),\n DefaultValue(\"<\"),\n Description(\"The text to be shown on the button that navigates to the Previous page\")\n ]\n public string PreviousText { get; set; }\n [\n Category(\"Appearance\"),\n DefaultValue(\">\"),\n Description(\"The text to be shown on the button that navigates to the Next page\")\n ]\n public string NextText { get; set; }\n [\n Category(\"Appearance\"),\n DefaultValue(\">>\"),\n Description(\"The text to be shown on the button that navigates to the Last page\")\n ]\n public string LastText { get; set; }\n\n #endregion\n\n #region Overridden properties\n\n public override ControlCollection Controls\n {\n get\n {\n EnsureChildControls();\n return base.Controls;\n }\n }\n\n #endregion\n\n #region Overridden methods/events\n\n protected override void OnLoad(EventArgs e)\n {\n base.OnLoad(e);\n if (!GetPageInfo(NamingContainer))\n throw new Exception(\"Unable to locate the Pageable Container.\");\n }\n\n protected override void OnPreRender(EventArgs e)\n {\n base.OnPreRender(e);\n if (PageableNamingContainer != null)\n {\n EnsureChildControls();\n\n ddlCurrentPage.Items.Clear();\n for (int i = 0; i < PageCount; i++)\n ddlCurrentPage.Items.Add(new ListItem((i + 1).ToString(), (i + 1).ToString()));\n\n lblPageCount.Text = PageCount.ToString();\n if (HideInactiveButtons)\n {\n btnFirst.Visible = btnPrevious.Visible = (PageIndex > 0);\n btnLast.Visible = btnNext.Visible = (PageIndex < (PageCount - 1));\n }\n else\n {\n btnFirst.Enabled = btnPrevious.Enabled = (PageIndex > 0);\n btnLast.Enabled = btnNext.Enabled = (PageIndex < (PageCount - 1));\n }\n ddlCurrentPage.SelectedIndex = PageIndex;\n }\n else\n ddlCurrentPage.SelectedIndex = 0;\n }\n\n protected override bool OnBubbleEvent(object source, EventArgs args)\n {\n // We handle all our events inside this class when\n // we are inside a DataPager\n return InsideDataPager;\n }\n\n #endregion\n\n #region Event delegate\n\n protected void PagerEvent(object sender, EventArgs e)\n {\n if (InsideDataPager)\n {\n int NewStartingIndex;\n\n if (sender.GetType() == typeof(Button))\n {\n string arg = ((Button)sender).CommandArgument.ToString();\n switch (arg)\n {\n case Command_Prev:\n NewStartingIndex = PrevPageStartIndex;\n break;\n case Command_Next:\n NewStartingIndex = NextPageStartIndex;\n break;\n case Command_Last:\n NewStartingIndex = LastPageStartIndex;\n break;\n case Command_First:\n default:\n NewStartingIndex = 0;\n break;\n }\n }\n else\n {\n NewStartingIndex = Math.Min(((DropDownList)sender).SelectedIndex * PageSize, LastPageStartIndex);\n }\n\n MyDataPager.SetPageProperties(NewStartingIndex, 
MyDataPager.MaximumRows, true);\n }\n else\n {\n CommandEventArgs ea = new CommandEventArgs(\"Page\", ((DropDownList)sender).SelectedValue);\n RaiseBubbleEvent(this, ea);\n }\n }\n\n #endregion\n\n #region GetPageableContainer\n\n private bool GetPageInfo(Control namingContainer)\n {\n if (namingContainer == null || namingContainer.GetType() == typeof(Page))\n throw new Exception(this.GetType().ToString() + \" must be used in a pageable container like a GridView.\");\n\n /* \n * NOTE: If we are inside a DataPager, this will be \n * our first-level NamingContainer, so there\n * will never be any reflection in that case.\n */\n if (namingContainer.GetType() == typeof(DataPagerFieldItem))\n {\n InsideDataPager = true;\n PageableNamingContainer = ((DataPagerFieldItem)namingContainer).Pager;\n return true;\n }\n\n PageCountInfo = namingContainer.GetType().GetProperty(\"PageCount\");\n PageIndexInfo = namingContainer.GetType().GetProperty(\"PageIndex\");\n if (PageCountInfo == null || PageIndexInfo == null)\n return GetPageInfo(namingContainer.NamingContainer);\n else\n {\n PageableNamingContainer = namingContainer;\n return true;\n }\n }\n\n #endregion\n\n #region Control generation\n\n protected override void CreateChildControls()\n {\n Controls.Clear();\n Controls.Add(BuildControlTable());\n }\n\n private Table BuildControlTable()\n {\n Table ControlTable = new Table();\n ControlTable.CssClass = CssClass;\n TableRow tr = new TableRow();\n TableCell td = new TableCell();\n\n td.Text = \"Page\";\n tr.Cells.Add(td);\n\n td = new TableCell();\n ddlCurrentPage = new DropDownList();\n ddlCurrentPage.ID = \"ddlCurrentPage\";\n ddlCurrentPage.AutoPostBack = true;\n ddlCurrentPage.SelectedIndexChanged += PagerEvent;\n ddlCurrentPage.CausesValidation = CausesValidation;\n td.Controls.Add(ddlCurrentPage);\n tr.Cells.Add(td);\n\n td = new TableCell();\n td.Text = \"of\";\n tr.Cells.Add(td);\n\n td = new TableCell();\n lblPageCount = new Label();\n td.Controls.Add(lblPageCount);\n tr.Cells.Add(td);\n\n AddButton(tr, ref btnFirst, string.IsNullOrEmpty(FirstText) ? \"<<\" : FirstText, Command_First);\n AddButton(tr, ref btnPrevious, string.IsNullOrEmpty(PreviousText) ? \"<\" : PreviousText, Command_Prev);\n AddButton(tr, ref btnNext, string.IsNullOrEmpty(NextText) ? \">\" : NextText, Command_Next);\n AddButton(tr, ref btnLast, string.IsNullOrEmpty(LastText) ? \">>\" : LastText, Command_Last);\n ControlTable.Rows.Add(tr);\n\n return ControlTable;\n }\n\n private void AddButton(TableRow row, ref Button button, string text, string argument)\n {\n button = new Button();\n button.Text = text;\n button.CssClass = ButtonCssClass;\n button.CommandName = \"Page\";\n button.CommandArgument = argument;\n button.CausesValidation = CausesValidation;\n if (InsideDataPager)\n button.Click += PagerEvent;\n TableCell td = new TableCell();\n td.Controls.Add(button);\n row.Cells.Add(td);\n }\n\n #endregion\n\n}\n\n" ]
[ 2 ]
[]
[]
[ "asp.net", "gridview", "pagertemplate", "user_controls" ]
stackoverflow_0000087853_asp.net_gridview_pagertemplate_user_controls.txt
Q: How do you deal with NULL values in columns of type boolean in MS Access? I was wondering if there is a better way to cope with MS-Access' inability to handle NULL for boolean-values other than changing the column data type to integer. A: I think you must use a number, and so, it seems, does Allen Browne, Access MVP. A: Not that I've found :( I haven't programmed Access in a while, but what I remember involves quite a lot of isNull checks. A: I think it depends on how you want your app/solution to interpret said NULLs in your data. Do you want to simply "ignore" them in a report... i.e. have them print out as blank spaces or newlines? In that case you can use the handy IsNull function along with the "immediate if" iif() in either the SQL builder or a column in the regular Access query designer as follows: IIF(IsNull(BooleanColumnName), NewLine/BlankSpace/Whatever, BooleanColumnName). On the other hand, if you want to consider the NULLs as "False" values, you had better update the column and just change them with something like: UPDATE table SET BooleanColumnName = FALSE WHERE BooleanColumnName IS NULL
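As a concrete sketch of the two approaches in the last answer, with table and column names invented purely for illustration:

    SELECT IIf(IsNull(IsApproved), "n/a", IsApproved) AS IsApprovedDisplay FROM Orders;

    UPDATE Orders SET IsApproved = False WHERE IsApproved IS NULL;

The first form only hides the NULLs at query time; the second rewrites them as False so the column no longer contains NULLs at all.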
How do you deal with NULL values in columns of type boolean in MS Access?
I was wondering if there is a better way to cope with MS-Access' inability to handle NULL for boolean-values other than change the column-data-type to integer.
[ "I think you must use a number, and so, it seems does Allen Browne, Access MVP.\n", "Not that I've found :( I haven't programmed Access in awhile, but what I remember involves quite a lot of isNull checks.\n", "I think it depends on how you want your app/solution to interpret said NULLs in your data.Do you want to simply \"ignore\" them in a report... i.e. have them print out as blank spaces or newlines? In that case you can use the handy IsNull function along with the \"immediate if\" iif() in either the SQL builder or a column in the regular Access query designer as follows:\nIIF(IsNull(BooleanColumnName), NewLine/BlankSpace/Whatever, BooleanColumnName)On the other hand, if you want to consider the NULLs as \"False\" values, you had better update the column and just change them with something like:Update table SET BooleanColumnName = FALSE WHERE BooleanColumnName IS NULL \n" ]
[ 2, 0, 0 ]
[]
[]
[ "boolean", "database", "ms_access", "null", "odbc" ]
stackoverflow_0000087712_boolean_database_ms_access_null_odbc.txt
Q: How to use stored procedures within a DTS data transformation task? I have a DTS package with a data transformation task (data pump). I’d like to source the data with the results of a stored procedure that takes parameters, but DTS won’t preview the result set and can’t define the columns in the data transformation task. Has anyone gotten this to work? Caveat: The stored procedure uses two temp tables (and cleans them up, of course) A: Enter some valid values for the stored procedure parameters so it runs and returns some data (or even no data, you just need the columns). Then you should be able to do the mapping/etc.. Then do a disconnected edit and change to the actual parameter values (I assume you are getting them from a global variable). DECLARE @param1 DataType1 DECLARE @param2 DataType2 SET @param1 = global variable SET @param2 = global variable (I forget exact syntax) --EXEC procedure @param1, @param2 EXEC dbo.proc value1, value2 Basically you run it like this so the procedure returns results. Do the mapping, then in disconnected edit comment out the second EXEC and uncomment the first EXEC and it should work. Basically you just need to make the procedure run and spit out results. Even if you get no rows back, it will still map the columns correctly. I don't have access to our production system (or even database) to create dts packages. So I create them in a dummy database and replace the stored procedure with something that returns the same columns that the production app would run, but no rows of data. Then after the mapping is done I move it to the production box with the real procedure and it works. This works great if you keep track of the database via scripts. You can just run the script to build an empty shell procedure and when done run the script to put back the true procedure. A: You would need to actually load them into a table, then you can use a SQL task to move it from that table into the perm location if you must make a translation. however, I have found that if working with a stored procedure to source the data, it is almost just as fast and easy to move it to its destination at the same time! A: Nope, I could only stored procedures with DTS by having them save the state in scrap tables.
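A sketch of the "hard-coded values first" trick from the first answer; the procedure and parameter names here are invented:

    DECLARE @StartDate datetime
    DECLARE @RegionId int
    SET @StartDate = '2008-01-01'   -- placeholder values so the data pump can discover the columns
    SET @RegionId = 1
    EXEC dbo.usp_GetExtractData @StartDate, @RegionId

Once the column mapping is saved, switch the two SET lines back to the global-variable assignments in a disconnected edit, as described above. Because the procedure populates temp tables, it can also help to add SET NOCOUNT ON inside it so intermediate "rows affected" messages do not confuse DTS about which result set to use.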
How to use stored procedures within a DTS data transformation task?
I have a DTS package with a data transformation task (data pump). I’d like to source the data with the results of a stored procedure that takes parameters, but DTS won’t preview the result set and can’t define the columns in the data transformation task. Has anyone gotten this to work? Caveat: The stored procedure uses two temp tables (and cleans them up, of course)
[ "Enter some valid values for the stored procedure parameters so it runs and returns some data (or even no data, you just need the columns). Then you should be able to do the mapping/etc.. Then do a disconnected edit and change to the actual parameter values (I assume you are getting them from a global variable).\nDECLARE @param1 DataType1 \nDECLARE @param2 DataType2\nSET @param1 = global variable \nSET @param2 = global variable (I forget exact syntax) \n\n--EXEC procedure @param1, @param2 \nEXEC dbo.proc value1, value2\n\nBasically you run it like this so the procedure returns results. Do the mapping, then in disconnected edit comment out the second EXEC and uncomment the first EXEC and it should work.\nBasically you just need to make the procedure run and spit out results. Even if you get no rows back, it will still map the columns correctly. I don't have access to our production system (or even database) to create dts packages. So I create them in a dummy database and replace the stored procedure with something that returns the same columns that the production app would run, but no rows of data. Then after the mapping is done I move it to the production box with the real procedure and it works. This works great if you keep track of the database via scripts. You can just run the script to build an empty shell procedure and when done run the script to put back the true procedure.\n", "You would need to actually load them into a table, then you can use a SQL task to move it from that table into the perm location if you must make a translation.\nhowever, I have found that if working with a stored procedure to source the data, it is almost just as fast and easy to move it to its destination at the same time!\n", "Nope, I could only stored procedures with DTS by having them save the state in scrap tables.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "dts", "sql_server" ]
stackoverflow_0000087647_dts_sql_server.txt
Q: Connecting delegate classes in Objective-C I've got two controls in my Interface Builder file, and each of those controls I've created a separate delegate class for in code (Control1Delegate and Control2Delegate). I created two "Objects" in interface builder, made them of that type, and connected the controls to them as delegates. The delegates work just fine. My problem is, I need to share information from one delegate to the other delegate, and I'm not sure how. What is the best way to do this? Combine the two delegates into one class, or somehow access a third class that they can both read? Since I'm not actually initializing the class anywhere in my code, I'm not sure how to get a reference to the actual instance of it (if there is an actual instance of it), or even access the "main" class that the project came with. A: You can add outlets from either delegate to the other delegate. There are two ways to add an outlet to an object in IB (assuming you're using Xcode/IB version 3.0 or later: If you have not generated the code for your delegate classes yet, select the desired delegate, then open the "Object Identity" tab in the IB inspector. Add a "Class outlet" of type NSObject. You should then be able to set this new outlet to the other delegate. Of course you will have to generate the code for your delegate class and add the generated source files to your Xcode project before you can load the nib. If you've already generated the code for the delegate class (or added an NSObject to your NIB and set its Class to an existing class in your Xcode project), add an instance variable to the delegate class: IBOutlet id outletToOtherDelegate; As long as your Xcode project is open (as indicated by the green bubble in the lower-left of your NIB window), IB will automatically detect the new outlet and allow you to assign it to the other delegate object in your NIB. Cocoa automatically connects these outlets at NIB load time. Once awakeFromNib is called on instances of your delegate objects, you may assume that all the other objects in the NIB have been instantiated and all outlets have been connected. You should not assume an order on calls to awakeFromNib, however. A: I think you can create outlets on each one and cross-bind them so that they each have the same data all the time. If there's one model object they need to share, that's pretty tidy. I don't actually know how to do this; I think I saw it in an iPhone tutorial one time! A: I don't have my Mac in front of me currently since I'm at work, but would it be possible to bind an instance of one delegate to a member of the other delegate? This would be similar to binding an NSArrayController to a member of another controller class, for example. However, depending on what the delegate classes are doing, if the tasks are similar I would probably just combine them into once class. That would eliminate the problem altogether.
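A minimal sketch of the cross-delegate outlet described in the first answer, using the class names from the question; the setSharedValue: method is hypothetical and stands in for whatever information actually needs to cross over:

    // Control1Delegate.h
    #import <Cocoa/Cocoa.h>
    @class Control2Delegate;

    @interface Control1Delegate : NSObject
    {
        IBOutlet Control2Delegate *otherDelegate;   // wire this to the Control2Delegate object in IB
    }
    - (void)pushSelection:(NSString *)value;
    @end

    // Control1Delegate.m
    #import "Control1Delegate.h"
    #import "Control2Delegate.h"

    @implementation Control1Delegate
    - (void)pushSelection:(NSString *)value
    {
        [otherDelegate setSharedValue:value];       // assumed accessor on Control2Delegate
    }
    @end

Since Cocoa fills the outlet in at nib-load time, it is safe to use from awakeFromNib onward, as the answer notes.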
Connecting delegate classes in Objective-C
I've got two controls in my Interface Builder file, and for each of those controls I've created a separate delegate class in code (Control1Delegate and Control2Delegate). I created two "Objects" in Interface Builder, made them of that type, and connected the controls to them as delegates. The delegates work just fine. My problem is, I need to share information from one delegate to the other delegate, and I'm not sure how. What is the best way to do this? Combine the two delegates into one class, or somehow access a third class that they can both read? Since I'm not actually initializing the class anywhere in my code, I'm not sure how to get a reference to the actual instance of it (if there is an actual instance of it), or even access the "main" class that the project came with.
[ "You can add outlets from either delegate to the other delegate. There are two ways to add an outlet to an object in IB (assuming you're using Xcode/IB version 3.0 or later:\n\nIf you have not generated the code for your delegate classes yet, select the desired delegate, then open the \"Object Identity\" tab in the IB inspector. Add a \"Class outlet\" of type NSObject. You should then be able to set this new outlet to the other delegate. Of course you will have to generate the code for your delegate class and add the generated source files to your Xcode project before you can load the nib.\nIf you've already generated the code for the delegate class (or added an NSObject to your NIB and set its Class to an existing class in your Xcode project), add an instance variable to the delegate class:\nIBOutlet id outletToOtherDelegate;\nAs long as your Xcode project is open (as indicated by the green bubble in the lower-left of your NIB window), IB will automatically detect the new outlet and allow you to assign it to the other delegate object in your NIB.\n\nCocoa automatically connects these outlets at NIB load time. Once awakeFromNib is called on instances of your delegate objects, you may assume that all the other objects in the NIB have been instantiated and all outlets have been connected. You should not assume an order on calls to awakeFromNib, however.\n", "I think you can create outlets on each one and cross-bind them so that they each have the same data all the time. If there's one model object they need to share, that's pretty tidy. I don't actually know how to do this; I think I saw it in an iPhone tutorial one time!\n", "I don't have my Mac in front of me currently since I'm at work, but would it be possible to bind an instance of one delegate to a member of the other delegate? This would be similar to binding an NSArrayController to a member of another controller class, for example.\nHowever, depending on what the delegate classes are doing, if the tasks are similar I would probably just combine them into once class. That would eliminate the problem altogether.\n" ]
[ 8, 1, 1 ]
[]
[]
[ "cocoa", "interface_builder", "objective_c" ]
stackoverflow_0000086797_cocoa_interface_builder_objective_c.txt
Q: Enabling single sign-on between Desktop Application and Website We have a client/server application with a rich client front end (in .Net) and also an administration portal (Asp.Net). Currently users have to sign on in both the rich client and on the website. We'd like to enable them to sign into the rich client, but not have to sign on to the website if they launch it from within the client. How can we do that? Going the other way is less important, but would be nice if possible: signing on to the website, then not having to sign into the rich client. A: A possible solution is: sign-in to the rich client a random token is generated by the server and stored against the signed-in user rich client gets that token from the server that token is used in the url pointing to the website going to that url (using a link or a button from the rich client) will auto-login the user and reset the token A: simple token solutions suffer from a design flaw: you will have some sort of secret that can be reverse-engineered if wanted. this can be avoided entirely if done correctly. I would propose a challenge-response algorithm. *) user logs in to rich client with pw *) from salt+password a sha256 or similar hash is calculated. (Hash A) *) user clicks "go to website" link. *) an http request is started (such as a webservice call), user fetches "get ticket number" for user account. this ticket will be valid for a short time (some minutes) *) client calculates hash(HashA+ticket) HashB *) finally - browser is pointed to www.example.org/?username=donkey&key=99754106633f94d350db34d548d6091a *) server checks if its calculated hash is the same as the one submitted. advantages of this method: -server never knows password, only salted hash. -even the hash is never transmitted through the wire -> no replay attacks. A: How about OAuth? An open protocol to allow secure API authorization in a simple and standard method from desktop and web applications. A: Make sure there is a timeout on the token, say 2-5 minutes, to ensure it is authentic. A: You could add a token, to identify them, to the URL which opens the site. You'll have to add some security to the token: TTL, a hash, a salt
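A rough C# sketch of the token hand-off in the first answer, seen from the rich client side; SsoTokenService is an invented stand-in for whatever server call stores the one-time token, and the portal URL is a placeholder:

    // rich client (.NET), after the user has signed in
    private static void OpenPortalSignedIn(string userName)
    {
        string token = Guid.NewGuid().ToString("N");           // single-use value; a crypto RNG would be better in production
        SsoTokenService.Store(userName, token,
            DateTime.UtcNow.AddMinutes(2));                    // hypothetical server call, short expiry as suggested above
        string url = "https://admin.example.com/SsoLogin.aspx?user="
            + Uri.EscapeDataString(userName) + "&token=" + token;
        System.Diagnostics.Process.Start(url);                 // hands the URL to the default browser
    }

On the web side, the SsoLogin page would look up the token, check that it has not expired or already been used, sign the user in (for example with FormsAuthentication.SetAuthCookie), and then delete the token.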
Enabling single sign-on between Desktop Application and Website
We have a client/server application with a rich client front end (in .Net) and also an administration portal (Asp.Net). Currently users have to sign on in both the rich client and on the website. We'd like to enable them to sign into the rich client, but not have to sign on to the website if they launch it from within the client. How can we do that? Going the other way is less important, but would be nice if possible: signing on to the website, then not having to sign into the rich client.
[ "A possible solution is:\n\nsign-in to the rich client\na random token is generated by the server and stored againsed the signed-in user\nrich client gets that token from the server\nthat token is used in the url pointing to the website\ngoing to that url (using a link or a button from the rich client) will auto-login the user and reset the token\n\n", "simple token solutions suffer from a design flaw: you will have some sort of secret that can be reverse-engineered if wanted. this can be avoided entirely if dome correct.\ni would propose a challenge-response algorithm.\n*) user loggs in to rich client with pw\n*) from salt+password a sha256 or similar hash is calculated. (Hash A)\n*) user klicks \"go to website\" link.\n*) a http response is started (such as as a webservice), user fetches \"get ticket number\" for user account. this ticket will be valid for a short time (some minutes)\n*) client calculates hash(HashA+ticket) HashB\n*) finally - browser is pointed to www.example.org/?username=donkey&key=99754106633f94d350db34d548d6091a\n*) server checks if his calculated hash is the same as the one submitted.\nadvantages of this method:\n-server never knows password, only salted hash.\n-even the hash is never transmitted through the wire -> no replay attacks.\n", "How about OAuth?\n\nAn open protocol to allow secure API\n authorization in a simple and\n standard method from desktop and web\n applications.\n\n", "Make sure there is a timeout on the token, say 2-5 minutes, to ensure it is authentic.\n", "You could add a token, to identify them, to the URL which opens the site.\nYou'll have to add some security to the token: TTL, a hash, a salt\n" ]
[ 5, 2, 2, 1, 0 ]
[]
[]
[ ".net", "asp.net" ]
stackoverflow_0000081423_.net_asp.net.txt
Q: How to insert a row into a dataset using SSIS? I'm trying to create an SSIS package that takes data from an XML data source and for each row inserts another row with some preset values. Any ideas? I'm thinking I could use a DataReader source to generate the preset values by doing the following: SELECT 'foo' as 'attribute1', 'bar' as 'attribute2' The question is, how would I insert one row of this type for every row in the XML data source? A: I'm not sure if I understand the question... My assumption is that you have n number of records coming into SSIS from your data source, and you want your output to have n * 2 records. In order to do this, you can do the following: multicast to create multiple copies of your input data derived column transforms to set the "preset" values on the copies sort merge Am I on the right track w/ what you're trying to accomplish? A: I've never tried it, but it looks like you might be able to use a Derived Column transformation to do it: set the expression for attribute1 to "foo" and the expression for attribute2 to "bar". You'd then transform the original data source, then only use the derived columns in your destination. If you still need the original source, you can Multicast it to create a duplicate. At least I think this will work, based on the documentation. YMMV. A: I would probably switch to using a Script Task and place your logic in there. You may still be able leverage the File Reading and other objects in SSIS to save some code.
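If the Script route from the last answer is taken, an asynchronous Script Component in the data flow could emit the extra preset row per input row roughly like this; it is a sketch only, since the buffer and column names depend entirely on how the component's inputs and outputs are configured:

    ' SSIS 2005 Script Component (transformation with an asynchronous output)
    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        ' pass the original row through
        Output0Buffer.AddRow()
        Output0Buffer.attribute1 = Row.attribute1
        Output0Buffer.attribute2 = Row.attribute2

        ' add the companion row with the preset values
        Output0Buffer.AddRow()
        Output0Buffer.attribute1 = "foo"
        Output0Buffer.attribute2 = "bar"
    End Sub

The multicast + derived column + merge approach in the first answer gets the same doubling with no code at all, so the script is only worth it when the preset row needs more logic than constants.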
How to insert a row into a dataset using SSIS?
I'm trying to create an SSIS package that takes data from an XML data source and for each row inserts another row with some preset values. Any ideas? I'm thinking I could use a DataReader source to generate the preset values by doing the following: SELECT 'foo' as 'attribute1', 'bar' as 'attribute2' The question is, how would I insert one row of this type for every row in the XML data source?
[ "I'm not sure if I understand the question... My assumption is that you have n number of records coming into SSIS from your data source, and you want your output to have n * 2 records.\nIn order to do this, you can do the following:\n\nmulticast to create multiple copies of your input data\nderived column transforms to set the \"preset\" values on the copies\nsort\nmerge\n\nAm I on the right track w/ what you're trying to accomplish?\n", "I've never tried it, but it looks like you might be able to use a Derived Column transformation to do it: set the expression for attribute1 to \"foo\" and the expression for attribute2 to \"bar\".\nYou'd then transform the original data source, then only use the derived columns in your destination. If you still need the original source, you can Multicast it to create a duplicate.\nAt least I think this will work, based on the documentation. YMMV.\n", "I would probably switch to using a Script Task and place your logic in there. You may still be able leverage the File Reading and other objects in SSIS to save some code.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "sql_server", "ssis" ]
stackoverflow_0000051139_sql_server_ssis.txt
Q: How do you guarantee the ASPNET user gets assigned the correct default directory rights? I seem to make this mistake every time I set up a new development box. Is there a way to make sure you don't have to manually assign rights for the ASPNET user? I usually install .Net then IIS, then Visual Studio but it seems I still have to manually assign rights to the ASPNET user to get everything running correctly. Is my install order wrong? A: Install IIS, then .NET. The .NET installation will automatically register the needed things with IIS. If you install .NET first, run this: %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i to run the registration parts, and %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -ga userA to set up the security rights for userA A: If you install first IIS and then .Net, it'll be OK. In your scenario - use Aspnet_regiis.exe -qa user (not available for .Net < 2.0)
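If some rights still have to be granted by hand (for example on a data or upload folder), the era-appropriate way to script it is cacls; the path and permission level below are only an example:

    cacls "C:\Inetpub\wwwroot\MyApp\App_Data" /E /T /G %COMPUTERNAME%\ASPNET:C

/E edits the existing ACL instead of replacing it, /T recurses into subfolders, and :C grants Change (read/write) to the local ASPNET account. On Windows Server 2003 the worker process runs as NETWORK SERVICE rather than ASPNET, so substitute that account there.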
How do you guarantee the ASPNET user gets assigned the correct default directory rights?
I seem to make this mistake every time I set up a new development box. Is there a way to make sure you don't have to manually assign rights for the ASPNET user? I usually install .Net then IIS, then Visual Studio but it seems I still have to manually assign rights to the ASPNET user to get everything running correctly. Is my install order wrong?
[ "Install IIS, then .NET. The .NET installation will automatically register the needed things with IIS.\nIf you install .NET first, run this:\n%windir%\\Microsoft.NET\\Framework\\v2.0.50727\\aspnet_regiis.exe -i\n\nto run the registration parts, and \n%windir%\\Microsoft.NET\\Framework\\v2.0.50727\\aspnet_regiis.exe -ga userA\n\nto set up the security rights for userA\n", "If you install first IIS and then .Net, it'll be OK.\nIn your scenario - use Aspnet_regiis.exe -qa user (not available for .Net < 2.0)\n" ]
[ 2, 1 ]
[]
[]
[ ".net", "asp.net", "visual_studio" ]
stackoverflow_0000088094_.net_asp.net_visual_studio.txt
Q: How can I use NAnt to compile WPF controls I have a WPF project and I'm trying to setup a NAnt build script for it. The problem is that when it tries to compile the WPF controls, the .g.cs files are not being generated as they are when building from within Visual Studio. I'm using the csc build task. From my reading it seems that when Visual Studio builds, it performs a pre-build step that generates the .g.cs files. Is it possible to do this via NAnt? I found this post about WPF, .g.cs and baml: http://stuff.seans.com/2008/07/13/hello-wpf-world-part-2-why-xaml/ Any ideas? A: You might want to try using the msbuild task
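For reference, a hedged NAnt fragment for that route; the project name and MSBuild path are placeholders, and the v3.5 MSBuild is used because it understands the WPF targets that run the XAML markup compiler:

    <target name="compile">
        <!-- MSBuild, not csc, runs the markup-compile pass that produces the .g.cs/BAML files -->
        <exec program="C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe">
            <arg value="MyWpfApp.csproj" />
            <arg value="/t:Build" />
            <arg value="/p:Configuration=Release" />
        </exec>
    </target>

The <msbuild> task the answer refers to (shipped with NAntContrib) wraps the same call, if taking that dependency is acceptable.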
How can I use NAnt to compile WPF controls
I have a WPF project and I'm trying to setup a NAnt build script for it. The problem is that when it tries to compile the WPF controls, the .g.cs files are not being generated as they are when building from within Visual Studio. I'm using the csc build task. From my reading it seems that when Visual Studio builds, it performs a pre-build step that generates the .g.cs files. Is it possible to do this via NAnt? I found this post about WPF, .g.cs and baml: http://stuff.seans.com/2008/07/13/hello-wpf-world-part-2-why-xaml/ Any ideas?
[ "You might want to try using the msbuild task\n" ]
[ 0 ]
[]
[]
[ "baml", "nant", "wpf" ]
stackoverflow_0000088121_baml_nant_wpf.txt
Q: How can I post a Cocoa "sheet" on another program's window? Using the Apple OS X Cocoa framework, how can I post a sheet (slide-down modal dialog) on the window of another process? Edit: Clarified a bit: My application is a Finder extension to do Subversion version control (http://scplugin.tigris.org/). Part of my application is a plug-in (a Contextual Menu Item for Finder); the bulk of my application, however, is in a separate daemon proces. For several reasons, we've chosen to put virtually all the code into the daemon; the plug-in only defines the menu itself, and Apple-Events over to the Daemon. Sometimes, the daemon needs to prompt the user for further information. It can toss a window on-screen for this, but that's disruptive (randomly positioned), and it seems to me the work flow here is legitimately modal, for example "select a file, pick 'commit' from the menu, provide commit comments, do the operation." Interprocess cooperation (such as passing a reference of some kind) is acceptable: both processes are mine, but I want to avoid binding the sheet's code into the primary process. A: Really, it sounds like you're trying to have your inter-process communication happen at the view level, which isn't really how Cocoa generally works. Things will be much easier if you separate your layers a bit more than that. Why don't you want to put the sheet code into the other process? It's view code, and view code is inherently process-specific. The right thing to do here is probably to add somewhat generic modal-sheet support to your plugin code, and an IPC call that your daemon can make to summon that code. Trying to ship view objects over to the remote process is going to be nightmarish if you can make it work at all. You're fighting the frameworks with this approach. A: You can't add a sheet to a window in another process, because you have at most only the most restricted access to the windows in the other process. A: Please don't do this. Make the interaction nonmodal if at all possible. Especially in something like a commit, it's much nicer to be able to browse around your files while you're writing commit comments. OS X does have window groups, but I don't think they can (easily) span applications. A: Another thing to consider is that in OS X it's possible to have many Finder windows open on the same folder (unlike in OS 9). Even if you did have sufficient privileges/APIs to add a sheet to a Finder window, it's not like the modality of that window would prevent the user from being able to continue working with the files. (My personal opinion as a long-time Mac user is that this kind of interaction would drive me right up the wall.)
How can I post a Cocoa "sheet" on another program's window?
Using the Apple OS X Cocoa framework, how can I post a sheet (slide-down modal dialog) on the window of another process? Edit: Clarified a bit: My application is a Finder extension to do Subversion version control (http://scplugin.tigris.org/). Part of my application is a plug-in (a Contextual Menu Item for Finder); the bulk of my application, however, is in a separate daemon process. For several reasons, we've chosen to put virtually all the code into the daemon; the plug-in only defines the menu itself, and Apple-Events over to the Daemon. Sometimes, the daemon needs to prompt the user for further information. It can toss a window on-screen for this, but that's disruptive (randomly positioned), and it seems to me the workflow here is legitimately modal, for example "select a file, pick 'commit' from the menu, provide commit comments, do the operation." Interprocess cooperation (such as passing a reference of some kind) is acceptable: both processes are mine, but I want to avoid binding the sheet's code into the primary process.
[ "Really, it sounds like you're trying to have your inter-process communication happen at the view level, which isn't really how Cocoa generally works. Things will be much easier if you separate your layers a bit more than that.\nWhy don't you want to put the sheet code into the other process? It's view code, and view code is inherently process-specific. The right thing to do here is probably to add somewhat generic modal-sheet support to your plugin code, and an IPC call that your daemon can make to summon that code. Trying to ship view objects over to the remote process is going to be nightmarish if you can make it work at all.\nYou're fighting the frameworks with this approach.\n", "You can't add a sheet to a window in another process, because you have at most only the most restricted access to the windows in the other process.\n", "Please don't do this. Make the interaction nonmodal if at all possible. Especially in something like a commit, it's much nicer to be able to browse around your files while you're writing commit comments.\nOS X does have window groups, but I don't think they can (easily) span applications.\n", "Another thing to consider is that in OS X it's possible to have many Finder windows open on the same folder (unlike in OS 9). Even if you did have sufficient privileges/APIs to add a sheet to a Finder window, it's not like the modality of that window would prevent the user from being able to continue working with the files.\n(My personal opinion as a long-time Mac user is that this kind of interaction would drive me right up the wall.)\n" ]
[ 5, 2, 1, 1 ]
[]
[]
[ "cocoa", "daemon", "interaction", "interprocess", "modal_dialog" ]
stackoverflow_0000065343_cocoa_daemon_interaction_interprocess_modal_dialog.txt
Q: Using icons licensed under GPL or LGPL in a closed source commercial software? Is there a risk of legal trouble if you include GPL or LGPL licensed icons in a closed source software? Would it force it to become open source just to include the icon? Does it matter if the icon is compiled as a resource? Are the creative common licensed icons safe to use if you follow the attribution rules specified by the license? A: For GPL, yes. Any GPL Code/Content that's compiled into your Application or the Package will make it GPL. (Edit: What could be safe is if the Icon is a separate file and is used. That could be a grey area, as you are not using GPL Code to access it. But any attempt to embed it will force your program to GPL, it's one of the most restrictive licenses out there) LGPL is fine: Any modification to LGPL Content has to be released under LGPL, but using the Code/Content is safe. Addition: Like LGPL, CreativeCommons usually only affects the Content you're using. So if you're using a CC Icon and modify it, you will have to give out the modified item under CreativeCommons, but your Application is not affected. Just mind the "Non-Commercial" Clause if it exists. A: It's a tricky area, at a minimum you should probably arrange for the icons to be loaded at run time so that they can be replaced with other versions, this is at least the spirit of the GPL. An article discusses this at http://www.linux.com/feature/119212
Using icons licensed under GPL or LGPL in a closed source commercial software?
Is there a risk of legal trouble if you include GPL or LGPL licensed icons in a closed source software? Would it force it to become open source just to include the icon? Does it matter if the icon is compiled as a resource? Are the creative common licensed icons safe to use if you follow the attribution rules specified by the license?
[ "For GPL, yes. Any GPL Code/Content that's compiled into your Application or the Package will make it GPL. (Edit: What could be safe is if the Icon is a separate file and is used. That could be a grey area, as you are not using GPL Code to access it. But any attempt to embed it will force your program to GPL, it's one of the most restrictive licenses out there)\nLGPL is fine: Any modification to LGPL Content has to be released under LGPL, but using the Code/Content is safe.\nAddition: Like LGPL, CreativeCommons usually only affects the Content you're using. So if you're using a CC Icon and modify it, you will have to give out the modified item under CreativeCommons, but your Application is not affected. Just mind the \"Non-Commercial\" Clause if it exists.\n", "It's a tricky area, at a minimum you should probably arrange for the icons to be loaded at run time so that they can be replaced with other versions, this is at least the spirit of the GPL. \nAn article discusses this at http://www.linux.com/feature/119212\n" ]
[ 38, 2 ]
[]
[]
[ "gpl", "icons", "lgpl" ]
stackoverflow_0000047989_gpl_icons_lgpl.txt
Q: passing or reading .net cookie in php page Hi I am trying to find a way to read the cookie that i generated in .net web application to read that on the php page because i want the users to login once but they should be able to view .net and php pages ,until the cookie expires user should not need to login in again , but both .net and php web applications are on different servers , help me with this issue please , thanks A: You mention that : but both .net and php web applications are on different servers Are both applications running under the same domain name? (ie: www.mydomain.com) or are they on different domains? If they're on the same domain, then you can do what you're trying to do in PHP by using the $_COOKIE variable. Just get the cookie's value by $myCookie = $_COOKIE["cookie_name"]; Then you can do whatever you want with the value of $myCookie. But if they're on different domains (ie: foo.mydomain.com and bar.mydomain.com), you cannot access the cookie from both sites. The web browser will only send a cookie to pages on the domain that set the cookie. However, if you originally set the cookie with only the top-level domain (mydomain.com), then sub-domains (anything.mydomain.com) should be able to read the cookie. A: Are the two servers on the machine within the same domain? if so you should set the cookie scope to the domain rather than the FQDN; then both machines will be able to read them; Response.Cookies["domain"].Domain = "contoso.com"; would allow contoso.com, www.contoso.com, hotnakedhamsters.contoso.com etc to access it. A: any cookie given to a browser will be readable by server processing the request --- they're language agnostic. try $_COOKIE in PHP A: As long as the site is the same (i.e. www.example.com) then cookies are platform agnostic. As Todd Kennedy said try the super global $_COOKIE. If you site is different though you won't be able to read the cookies, they are supposed to be site specific and prevent this type of cross site access.
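To make the accepted approach concrete: this only works if both sites share a parent domain, and the cookie name below is invented.

ASP.NET (C#) side, scoping the cookie to the shared parent domain:

    HttpCookie sso = new HttpCookie("sso_session", sessionToken);   // name and value are placeholders
    sso.Domain = "example.com";               // parent domain, so sub-sites on both servers can read it
    sso.Expires = DateTime.Now.AddHours(1);
    Response.Cookies.Add(sso);

PHP side, reading the same cookie:

    <?php
    $token = isset($_COOKIE['sso_session']) ? $_COOKIE['sso_session'] : null;
    if ($token === null) {
        // no shared cookie - fall back to the normal login page
    }
    ?>

If the two applications sit on completely unrelated domains, the browser will not send the cookie to both, as the answers point out, and some other hand-off (for example a short-lived token in the URL) would be needed instead.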
passing or reading .net cookie in php page
Hi, I am trying to find a way to read a cookie that I generated in a .NET web application from a PHP page. I want the users to log in once and still be able to view both the .NET and PHP pages; until the cookie expires, the user should not need to log in again. However, the .NET and PHP web applications are on different servers. Please help me with this issue. Thanks.
[ "You mention that :\n\nbut both .net and php web applications are on different servers\n\nAre both applications running under the same domain name? (ie: www.mydomain.com) or are they on different domains?\nIf they're on the same domain, then you can do what you're trying to do in PHP by using the $_COOKIE variable. Just get the cookie's value by \n$myCookie = $_COOKIE[\"cookie_name\"];\n\nThen you can do whatever you want with the value of $myCookie. \nBut if they're on different domains (ie: foo.mydomain.com and bar.mydomain.com), you cannot access the cookie from both sites. The web browser will only send a cookie to pages on the domain that set the cookie. However, if you originally set the cookie with only the top-level domain (mydomain.com), then sub-domains (anything.mydomain.com) should be able to read the cookie.\n", "Are the two servers on the machine within the same domain? if so you should set the cookie scope to the domain rather than the FQDN; then both machines will be able to read them;\nResponse.Cookies[\"domain\"].Domain = \"contoso.com\";\n\nwould allow contoso.com, www.contoso.com, hotnakedhamsters.contoso.com etc to access it.\n", "any cookie given to a browser will be readable by server processing the request --- they're language agnostic.\ntry $_COOKIE in PHP \n", "As long as the site is the same (i.e. www.example.com) then cookies are platform agnostic. As Todd Kennedy said try the super global $_COOKIE. If you site is different though you won't be able to read the cookies, they are supposed to be site specific and prevent this type of cross site access.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ ".net", "asp.net", "cookies", "php" ]
stackoverflow_0000087818_.net_asp.net_cookies_php.txt
Q: How do I correctly access static member classes? I have two classes, and want to include a static instance of one class inside the other and access the static fields from the second class via the first. This is so I can have non-identical instances with the same name. Class A { public static package1.Foo foo; } Class B { public static package2.Foo foo; } //package1 Foo { public final static int bar = 1; } // package2 Foo { public final static int bar = 2; } // usage assertEquals(A.foo.bar, 1); assertEquals(B.foo.bar, 2); This works, but I get a warning "The static field Foo.bar shoudl be accessed in a static way". Can someone explain why this is and offer a "correct" implementation. I realize I could access the static instances directly, but if you have a long package hierarchy, that gets ugly: assertEquals(net.FooCorp.divisions.A.package.Foo.bar, 1); assertEquals(net.FooCorp.divisions.B.package.Foo.bar, 2); A: You should use: Foo.bar And not: A.foo.bar That's what the warning means. The reason is that bar isn't a member of an instance of Foo. Rather, bar is global, on the class Foo. The compiler wants you to reference it globally rather than pretending it's a member of the instance. A: There is no sense in putting these two static variables in these to classes as long as you only need to access static members. The compiler expects you to access them trough class name prefixes like: package1.Foo.bar package2.Foo.bar A: Once you created the object in: public static package1.Foo foo; it isn't being accessed in a Static way. You will have to use the class name and, of course, the full package name to address the class since they have the same name on different packages A: I agree with others that you're probably thinking about this the wrong way. With that out of the way, this may work for you if you are only accessing static members: public class A { public static class Foo extends package1.Foo {} } public class B { public static class Foo extends package2.Foo {} } A: It's true that a Foo instance has access to Foo's static fields, but think about the word "static". It means "statically bound", at least in this case. Since A.foo is of type Foo, "A.foo.bar" is not going to ask the object for "bar", it's going to go straight to the class. That means that even if a subclass has a static field called "bar", and foo is an instance of that subclass, it's going to get Foo.bar, not FooSubclass.bar. Therefore it's a better idea to reference it by the class name, since if you try to take advantage of inheritance you'll shoot yourself in the foot.
How do I correctly access static member classes?
I have two classes, and want to include a static instance of one class inside the other and access the static fields from the second class via the first. This is so I can have non-identical instances with the same name. class A { public static package1.Foo foo; } class B { public static package2.Foo foo; } //package1 class Foo { public final static int bar = 1; } // package2 class Foo { public final static int bar = 2; } // usage assertEquals(A.foo.bar, 1); assertEquals(B.foo.bar, 2); This works, but I get a warning "The static field Foo.bar should be accessed in a static way". Can someone explain why this is and offer a "correct" implementation? I realize I could access the static instances directly, but if you have a long package hierarchy, that gets ugly: assertEquals(net.FooCorp.divisions.A.package.Foo.bar, 1); assertEquals(net.FooCorp.divisions.B.package.Foo.bar, 2);
[ "You should use:\nFoo.bar\n\nAnd not:\nA.foo.bar\n\nThat's what the warning means.\nThe reason is that bar isn't a member of an instance of Foo. Rather, bar is global, on the class Foo. The compiler wants you to reference it globally rather than pretending it's a member of the instance.\n", "There is no sense in putting these two static variables in these to classes as long as you only need to access static members.\nThe compiler expects you to access them trough class name prefixes like:\npackage1.Foo.bar\npackage2.Foo.bar\n\n", "Once you created the object in: \npublic static package1.Foo foo;\n\nit isn't being accessed in a Static way. You will have to use the class name and, of course, the full package name to address the class since they have the same name on different packages\n", "I agree with others that you're probably thinking about this the wrong way. With that out of the way, this may work for you if you are only accessing static members:\npublic class A {\n public static class Foo extends package1.Foo {}\n}\npublic class B {\n public static class Foo extends package2.Foo {}\n}\n\n", "It's true that a Foo instance has access to Foo's static fields, but think about the word \"static\". It means \"statically bound\", at least in this case. Since A.foo is of type Foo, \"A.foo.bar\" is not going to ask the object for \"bar\", it's going to go straight to the class. That means that even if a subclass has a static field called \"bar\", and foo is an instance of that subclass, it's going to get Foo.bar, not FooSubclass.bar. Therefore it's a better idea to reference it by the class name, since if you try to take advantage of inheritance you'll shoot yourself in the foot.\n" ]
[ 10, 2, 2, 1, 0 ]
[]
[]
[ "java", "static" ]
stackoverflow_0000086607_java_static.txt
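A minimal sketch of the warning-free pattern the answers describe: declare the constants in each package's Foo and reference them through the class name rather than through a field of type Foo. Package and member names follow the question; everything else is illustrative.

// package1/Foo.java
package package1;
public class Foo {
    public static final int bar = 1;
}

// package2/Foo.java
package package2;
public class Foo {
    public static final int bar = 2;
}

// Usage.java: qualify the class, no holder fields needed. Only one of the
// two Foo classes can be imported per file; the other stays fully qualified.
import package1.Foo;

public class Usage {
    public static void main(String[] args) {
        System.out.println(Foo.bar);             // 1, via the import
        System.out.println(package2.Foo.bar);    // 2, fully qualified
    }
}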
Q: Setting Environment Variables for Mercurial Hook I am trying to call a shell script that sets a bunch of environment variables on our server from a mercurial hook. The shell script gets called fine when a new changegroup comes in, but the environment variables aren't carrying over past the call to the shell script. My hgrc file on the respository looks like this: [hooks] changegroup = shell_script changegroup.env = env I can see the output of the shell script, and then the output of the env command, but the env command doesn't include the new environment variables set by the shell script. I have verified that the shell script works fine when run by itself but when run in the context of the mercurial hook it does not properly set the environment. A: Shell scripts can't modify their enviroment. http://tldp.org/LDP/abs/html/gotchas.html A script may not export variables back to its parent process, the shell, or to the environment. Just as we learned in biology, a child process can inherit from a parent, but not vice versa $ cat > eg.sh export FOO="bar"; ^D $ bash eg.sh $ echo $FOO; $ also, the problem is greater, as you have multiple calls of bash bash 1 -> hg -> bash 2 ( shell script ) -> bash 3 ( env call ) it would be like thinking I could set a variable in one php script and then magically get it with another simply by running one after the other.
Setting Environment Variables for Mercurial Hook
I am trying to call a shell script that sets a bunch of environment variables on our server from a Mercurial hook. The shell script gets called fine when a new changegroup comes in, but the environment variables aren't carrying over past the call to the shell script. My hgrc file on the repository looks like this: [hooks] changegroup = shell_script changegroup.env = env I can see the output of the shell script, and then the output of the env command, but the env command doesn't include the new environment variables set by the shell script. I have verified that the shell script works fine when run by itself, but when run in the context of the Mercurial hook it does not properly set the environment.
[ "Shell scripts can't modify their enviroment. \nhttp://tldp.org/LDP/abs/html/gotchas.html\n\nA script may not export variables back to its parent process, the shell, or to the environment. Just as we learned in biology, a child process can inherit from a parent, but not vice versa\n\n$ cat > eg.sh \nexport FOO=\"bar\";\n^D\n$ bash eg.sh \n$ echo $FOO; \n\n$\n\nalso, the problem is greater, as you have multiple calls of bash \nbash 1 -> hg -> bash 2 ( shell script ) \n -> bash 3 ( env call )\n\nit would be like thinking I could set a variable in one php script and then magically get it with another simply by running one after the other. \n" ]
[ 2 ]
[]
[]
[ "mercurial", "mercurial_hook", "python", "shell" ]
stackoverflow_0000088194_mercurial_mercurial_hook_python_shell.txt
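Since an external hook cannot export variables back into its parent, one workaround (paths and variable names here are hypothetical) is to source the settings script and run whatever needs those variables inside the same shell invocation:

# .hg/hgrc on the repository
[hooks]
changegroup = sh -c '. /path/to/set_env.sh && /path/to/deploy.sh'

# /path/to/set_env.sh is sourced, not executed, so its exports are
# visible to deploy.sh, which runs in the same shell
export APP_HOME=/srv/app
export APP_ENV=production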
Q: C#: Import/Export Settings into/from a File What's the best way to import/export app internal settings into a file from within an app? I have the Settings.settings file, winform UI tied to the settings file, and I want to import/export settings, similar to Visual Studio Import/Export Settings feature. A: If you are using the Settings.settings file, it's saving to the config file. By calling YourNamespace.Properties.Settings.Save() after updating your settings, they will be saved to the config files. However, I have no idea what you mean by "multiple sets of settings." If the settings are user settings, each user will have its own set of settings. If you are having multiple sets of settings for a single user, you probably should not use the .settings files; instead you'll want to use a database. A: You can use DataSet, which you bind to the form. And you can save/restore it. A: You could just use sections, or are you breaking out to other files for a specific reason? A: A tried and tested way I have used is to design a settings container class. This container class can have sub-classes for different types of setting categories. It works well since you reference your "settings" via property name and therefore if something changes in future, you will get compile time errors. It is also expandible, since you can always create new settings by adding more properties to your individual setting classes and assign default values to the private variable of a property that will be used should that specific setting not exist in an older version of your application. Once the new container is saved, the new settings will be persisted as well. Another advantage is the obvious human / computer readability of XML which is nice for settings. To save, serialize the container object to XML data, then write the data to file. To load, read the data from file and deserialize back into your settings container class. To serialize via standard C# code: public static string SerializeToXMLString(object ObjectToSerialize) MemoryStream mem = new MemoryStream(); System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(ObjectToSerialize.GetType()); ser.Serialize(mem,ObjectToSerialize); ASCIIEncoding ascii = new ASCIIEncoding(); return ascii.GetString(mem.ToArray()); To deserialize via standard C# code: public static object DeSerializeFromXMLString(System.Type TypeToDeserialize, string xmlString) byte[] bytes = System.Text.Encoding.UTF8.GetBytes(xmlString); MemoryStream mem = new MemoryStream(bytes); System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(TypeToDeserialize); return ser.Deserialize(mem); Once last nice thing about a serializable settings class is because it is an object, you can use IntelliSense to quickly navigate to a particular setting. Note: After you instantiated your settings container class, you should make it a static property of another static managing class (you can call it SettingsManager if you want) This managing class allows you to access your settings from anywhere in your application (since its static) and you can also have static functions to handle the loading and saving of the class.
C#: Import/Export Settings into/from a File
What's the best way to import/export app internal settings into a file from within an app? I have the Settings.settings file, winform UI tied to the settings file, and I want to import/export settings, similar to Visual Studio Import/Export Settings feature.
[ "If you are using the Settings.settings file, it's saving to the config file. By calling YourNamespace.Properties.Settings.Save() after updating your settings, they will be saved to the config files.\nHowever, I have no idea what you mean by \"multiple sets of settings.\" If the settings are user settings, each user will have its own set of settings. If you are having multiple sets of settings for a single user, you probably should not use the .settings files; instead you'll want to use a database.\n", "You can use DataSet, which you bind to the form. And you can save/restore it.\n", "You could just use sections, or are you breaking out to other files for a specific reason?\n", "A tried and tested way I have used is to design a settings container class. \nThis container class can have sub-classes for different types of setting categories.\nIt works well since you reference your \"settings\" via property name and therefore if something changes in future, you will get compile time errors. It is also expandible, since you can always create new settings by adding more properties to your individual setting classes and assign default values to the private variable of a property that will be used should that specific setting not exist in an older version of your application. Once the new container is saved, the new settings will be persisted as well.\nAnother advantage is the obvious human / computer readability of XML which is nice for settings. \nTo save, serialize the container object to XML data, then write the data to file. To load, read the data from file and deserialize back into your settings container class.\nTo serialize via standard C# code:\npublic static string SerializeToXMLString(object ObjectToSerialize)\nMemoryStream mem = new MemoryStream(); \nSystem.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(ObjectToSerialize.GetType());\nser.Serialize(mem,ObjectToSerialize); \nASCIIEncoding ascii = new ASCIIEncoding();\nreturn ascii.GetString(mem.ToArray());\n\nTo deserialize via standard C# code:\npublic static object DeSerializeFromXMLString(System.Type TypeToDeserialize, string xmlString)\nbyte[] bytes = System.Text.Encoding.UTF8.GetBytes(xmlString);\nMemoryStream mem = new MemoryStream(bytes); \nSystem.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(TypeToDeserialize);\nreturn ser.Deserialize(mem);\n\nOnce last nice thing about a serializable settings class is because it is an object, you can use IntelliSense to quickly navigate to a particular setting.\nNote: After you instantiated your settings container class, you should make it a static property of another static managing class (you can call it SettingsManager if you want)\nThis managing class allows you to access your settings from anywhere in your application (since its static) and you can also have static functions to handle the loading and saving of the class.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "visual_studio", "visual_studio_2005" ]
stackoverflow_0000088030_.net_c#_visual_studio_visual_studio_2005.txt
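As pasted, the serializer methods in the last answer are missing their braces; here is a small self-contained sketch of the same import/export idea, using XmlSerializer directly on a settings container. The AppSettings class and its properties are invented for illustration.

using System.IO;
using System.Xml.Serialization;

public class AppSettings
{
    private string theme = "Light";
    private int cacheSizeMb = 64;

    public string Theme { get { return theme; } set { theme = value; } }
    public int CacheSizeMb { get { return cacheSizeMb; } set { cacheSizeMb = value; } }
}

public static class SettingsManager
{
    private static readonly XmlSerializer serializer = new XmlSerializer(typeof(AppSettings));

    // Export: write the container to the file the user picked in the UI.
    public static void Export(AppSettings settings, string path)
    {
        using (FileStream stream = File.Create(path))
        {
            serializer.Serialize(stream, settings);
        }
    }

    // Import: read the file back into a container instance.
    public static AppSettings Import(string path)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            return (AppSettings)serializer.Deserialize(stream);
        }
    }
}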
Q: Will a VS2008 setup project update Net 3.5 SP1? I just started using the WPF WebBrowser that is included in Net 3.5 SP1. I built my setup project (which I have been using prior to moving to 3.5 SP1) and installed it on a test machine but the WebBrowser was not available. What must I do to be sure that the setup.exe/msi combination checks for and installs SP1? A: Open the properties of the Setup Project, then click on the Prerequesites button. Then check the prerequisites to install. Then you can define how the user gets the pre-reqs. Here is a link to framework version information and an excerpt from Scott Hanselman's blog: Online/Download Experience The best way to get a user with reasonable Internet connectivity up on the 3.5 SP1 .NET Framework is with the 2.7 Meg "bootstrapper." This will detect what they need and only download what they need. The worst-case scenario for a x86 machine is around 60 megs, as seen in the table above. What's the "Client Profile?" The Client Profile is an even smaller install option for .NET 3.5 SP1 on XP. It's small 277k bootstrapper. When it's run on a Windows XP SP2 machines with no .NET Framework installed, it will download a 28 meg payload and give you a client-specific subset of .NET 3.5. If the Client Profile bootstrapper is run on a machine with any version of .NET on it, it'll act the same as the 3.5 SP1 web installer and detect what it needs to download, then go get it. There's more details in the Client Profile Deployment Guide. http://www.hanselman.com/blog/CommentView.aspx?guid=af453d70-64b3-417e-9492-d115f929195d A: On my way to answering my own question. Double-clicking on the Microsoft .net Framework in the Detected dependencies one can choose the version. Now the question is which is appropriate, 3.5.30729 or 3.5 SP1 Client? EDIT: 3.5.30729 works. Any ideas of the difference between the two? EDIT: Double-clicking on the .net Framework above shows .NET Framework as a Launch condition. This is where I changed the version. (I'd add a screenshot, but I don't have one at a URL, only on my desktop. A: In the setup project, add some launch conditions. This page shows you how exactly: http://jelle.druyts.net/2005/04/09/CheckingForNET11ServicePack1InAnMSI.aspx
Will a VS2008 setup project update Net 3.5 SP1?
I just started using the WPF WebBrowser that is included in Net 3.5 SP1. I built my setup project (which I have been using prior to moving to 3.5 SP1) and installed it on a test machine but the WebBrowser was not available. What must I do to be sure that the setup.exe/msi combination checks for and installs SP1?
[ "Open the properties of the Setup Project, then click on the Prerequesites button. Then check the prerequisites to install.\n\nThen you can define how the user gets the pre-reqs.\nHere is a link to framework version information and an excerpt from Scott Hanselman's blog:\n\nOnline/Download Experience\n The best way to get a user with reasonable Internet connectivity up on the 3.5 SP1 .NET Framework is with the 2.7 Meg \"bootstrapper.\" This will detect what they need and only download what they need. The worst-case scenario for a x86 machine is around 60 megs, as seen in the table above. \nWhat's the \"Client Profile?\"\n The Client Profile is an even smaller install option for .NET 3.5 SP1 on XP. It's small 277k bootstrapper. When it's run on a Windows XP SP2 machines with no .NET Framework installed, it will download a 28 meg payload and give you a client-specific subset of .NET 3.5. If the Client Profile bootstrapper is run on a machine with any version of .NET on it, it'll act the same as the 3.5 SP1 web installer and detect what it needs to download, then go get it. There's more details in the Client Profile Deployment Guide. \n\nhttp://www.hanselman.com/blog/CommentView.aspx?guid=af453d70-64b3-417e-9492-d115f929195d\n", "On my way to answering my own question. Double-clicking on the Microsoft .net Framework in the Detected dependencies one can choose the version. \nNow the question is which is appropriate, 3.5.30729 or 3.5 SP1 Client?\nEDIT: 3.5.30729 works. Any ideas of the difference between the two?\nEDIT: Double-clicking on the .net Framework above shows .NET Framework as a Launch condition. This is where I changed the version. (I'd add a screenshot, but I don't have one at a URL, only on my desktop.\n", "In the setup project, add some launch conditions. This page shows you how exactly:\nhttp://jelle.druyts.net/2005/04/09/CheckingForNET11ServicePack1InAnMSI.aspx\n" ]
[ 3, 0, 0 ]
[]
[]
[ ".net_3.5", "setup_project", "visual_studio_2008" ]
stackoverflow_0000088136_.net_3.5_setup_project_visual_studio_2008.txt
Q: C#: Is Implicit Arraylist assignment possible? I'd like to populate an arraylist by specifying a list of values just like I would an integer array, but am unsure of how to do so without repeated calls to the "add" method. For example, I want to assign { 1, 2, 3, "string1", "string2" } to an arraylist. I know for other arrays you can make the assignment like: int[] IntArray = {1,2,3}; Is there a similar way to do this for an arraylist? I tried the addrange method but the curly brace method doesn't implement the ICollection interface. A: Depending on the version of C# you are using, you have different options. C# 3.0 has collection initializers, detail at Scott Gu's Blog Here is an example of your problem. ArrayList list = new ArrayList {1,2,3}; And if you are initializing a collection object, most have constructors that take similar components to AddRange, although again as you mentioned this may not be an option. A: Array list has ctor which accepts ICollection, which is implemented by the Array class. object[] myArray = new object[] {1,2,3,"string1","string2"}; ArrayList myArrayList = new ArrayList(myArray); A: (kind of answering my own question but...) The closest thing I've found to what I want is to make use of the ArrayList.Adapter method: object[] values = { 1, 2, 3, "string1", "string2" }; ArrayList AL = new ArrayList(); AL = ArrayList.Adapter(values); //or during intialization ArrayList AL2 = ArrayList.Adapter(values); This is sufficient for what I need, but I was hoping it could be done in one line without creating the temporary array as someone else had suggested. A: Your comments imply you chose ArrayList because it was the first component you found. Assuming you are simply looking for a list of integers, this is probably the best way of doing that. List<int> list = new List<int>{1,2,3}; And if you are using C# 2.0 (Which has generics, but not collection initializers). List<int> list = new List<int>(new int[] {1, 2, 3}); Although the int[] format may not be correct in older versions, you may have to specify the number of items in the array. A: I assume you're not using C# 3.0, which has collection initializers. If you're not bothered about the overhead of creating a temp array, you could do it like this in 1.1/2.0: ArrayList list = new ArrayList(new object[] { 1, 2, 3, "string1", "string2"});
C#: Is Implicit Arraylist assignment possible?
I'd like to populate an arraylist by specifying a list of values just like I would an integer array, but am unsure of how to do so without repeated calls to the "add" method. For example, I want to assign { 1, 2, 3, "string1", "string2" } to an arraylist. I know for other arrays you can make the assignment like: int[] IntArray = {1,2,3}; Is there a similar way to do this for an arraylist? I tried the addrange method but the curly brace method doesn't implement the ICollection interface.
[ "Depending on the version of C# you are using, you have different options.\nC# 3.0 has collection initializers, detail at Scott Gu's Blog\nHere is an example of your problem.\nArrayList list = new ArrayList {1,2,3};\n\nAnd if you are initializing a collection object, most have constructors that take similar components to AddRange, although again as you mentioned this may not be an option.\n", "Array list has ctor which accepts ICollection, which is implemented by the Array class.\nobject[] myArray = new object[] {1,2,3,\"string1\",\"string2\"};\nArrayList myArrayList = new ArrayList(myArray);\n\n", "(kind of answering my own question but...)\nThe closest thing I've found to what I want is to make use of the ArrayList.Adapter method:\nobject[] values = { 1, 2, 3, \"string1\", \"string2\" };\nArrayList AL = new ArrayList();\nAL = ArrayList.Adapter(values);\n\n//or during intialization\nArrayList AL2 = ArrayList.Adapter(values);\n\nThis is sufficient for what I need, but I was hoping it could be done in one line without creating the temporary array as someone else had suggested.\n", "Your comments imply you chose ArrayList because it was the first component you found.\nAssuming you are simply looking for a list of integers, this is probably the best way of doing that.\nList<int> list = new List<int>{1,2,3};\n\nAnd if you are using C# 2.0 (Which has generics, but not collection initializers).\nList<int> list = new List<int>(new int[] {1, 2, 3});\n\nAlthough the int[] format may not be correct in older versions, you may have to specify the number of items in the array.\n", "I assume you're not using C# 3.0, which has collection initializers. If you're not bothered about the overhead of creating a temp array, you could do it like this in 1.1/2.0:\nArrayList list = new ArrayList(new object[] { 1, 2, 3, \"string1\", \"string2\"});\n\n" ]
[ 13, 8, 1, 1, 0 ]
[]
[]
[ "arraylist", "c#" ]
stackoverflow_0000087970_arraylist_c#.txt
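For quick comparison, the two one-liners from the answers side by side; the first needs C# 3.0, the second works on 1.1/2.0 because arrays implement ICollection:

using System.Collections;

// C# 3.0 collection initializer
ArrayList a = new ArrayList { 1, 2, 3, "string1", "string2" };

// C# 1.1/2.0: hand an object[] to the constructor
ArrayList b = new ArrayList(new object[] { 1, 2, 3, "string1", "string2" });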
Q: Finding differences between versions of a Java class file I am working with a large Java web application from a commercial vendor. I've received a patch from the vendor in the form of a new .class file that is supposed to resolve an issue we're having with the software. In the past, applying patches from this vendor have caused new and completely unrelated problems to arise, so I want to understand the change being made even before applying it to a test instance. I've got the two .class files side by side, the one extracted from the currently running version and the updated one from the vendor. JAD and JReversePro both decompile and disassemble (respectively) the two versions to the same output. However, the .class files are different sizes and I see differences in the output of od -x, so they're definitely not identical. What other steps could I take to determine the difference between the two files? Conclusion: Thanks for the great responses. Since javap -c output is also identical for the two class files, I am going to conclude that Davr's right and the vendor sent me a placebo. While I'm accepting Davr's answer for that reason, it was Chris Marshall and John Meagher who turned me on to javap, so thanks to all three of you. A: It's possible that they just compiled it with a new version of the java compiler, or with different optimization settings etc, so that the functionality is the same, and the code is the same, but the output bytecode is slightly different. A: If you are looking for API level differences the javap tool can be a big help. It will output the method signatures and those can be output to a plain text files and compared using normal diff tools. A: You could try using a diff tool (such as SourceGear's free DiffMerge tool) on the decompiled sources. That should pick up the file differences, although it will likely pick up "insignificant" differences, for example if variables have been named differently in the two versions. http://www.sourcegear.com/diffmerge/ A: You can use javap (in $JDK_HOME/bin) to decompile java .class files. It will tell you (for example) the class file version among other things
Finding differences between versions of a Java class file
I am working with a large Java web application from a commercial vendor. I've received a patch from the vendor in the form of a new .class file that is supposed to resolve an issue we're having with the software. In the past, applying patches from this vendor have caused new and completely unrelated problems to arise, so I want to understand the change being made even before applying it to a test instance. I've got the two .class files side by side, the one extracted from the currently running version and the updated one from the vendor. JAD and JReversePro both decompile and disassemble (respectively) the two versions to the same output. However, the .class files are different sizes and I see differences in the output of od -x, so they're definitely not identical. What other steps could I take to determine the difference between the two files? Conclusion: Thanks for the great responses. Since javap -c output is also identical for the two class files, I am going to conclude that Davr's right and the vendor sent me a placebo. While I'm accepting Davr's answer for that reason, it was Chris Marshall and John Meagher who turned me on to javap, so thanks to all three of you.
[ "It's possible that they just compiled it with a new version of the java compiler, or with different optimization settings etc, so that the functionality is the same, and the code is the same, but the output bytecode is slightly different.\n", "If you are looking for API level differences the javap tool can be a big help. It will output the method signatures and those can be output to a plain text files and compared using normal diff tools. \n", "You could try using a diff tool (such as SourceGear's free DiffMerge tool) on the decompiled sources. That should pick up the file differences, although it will likely pick up \"insignificant\" differences, for example if variables have been named differently in the two versions.\nhttp://www.sourcegear.com/diffmerge/\n", "You can use javap (in $JDK_HOME/bin) to decompile java .class files. It will tell you (for example) the class file version among other things\n" ]
[ 7, 6, 1, 1 ]
[]
[]
[ "decompiling", "java" ]
stackoverflow_0000088216_decompiling_java.txt
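A concrete version of the javap workflow from the answers; the directory layout and class name are placeholders. The -p flag includes private members and -c includes the bytecode, so if the diff comes back empty, the executable code and member signatures match and the size difference is likely down to metadata such as the compiler version or debug info.

javap -classpath ./current -c -p com.vendor.app.Widget > current.txt
javap -classpath ./patched -c -p com.vendor.app.Widget > patched.txt
diff -u current.txt patched.txt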
Q: How do I turn a ColdFusion page into a PDF download? I would like to turn the HTML generated by my CFM page into a PDF, and have the user prompted with the standard "Save As" prompt when navigating to my page. A: You should use the cfdocument tag (with format="PDF") to generate the PDF by placing it around the page you are generating. You'll want to specify a filename attribute, otherwise the document will just stream right to your browser. After you have saved the content as a PDF, use cfheader and cfcontent in combination to output the PDF as an attachment ("Save As") and add the file to the response stream. I also added deletefile="Yes" on the cfcontent tag to keep the file system clean of the files. <cfdocument format="PDF" filename="file.pdf" overwrite="Yes"> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title>Hello World</title> </head> <body> Hello World </body> </html> </cfdocument> <cfheader name="Content-Disposition" value="attachment;filename=file.pdf"> <cfcontent type="application/octet-stream" file="#expandPath('.')#\file.pdf" deletefile="Yes"> As an aside: I'm just using file.pdf for the filename in the example below, but you might want to use some random or session generated string for the filename to avoid problems resulting from race conditions. A: If you want to avoid storing the PDF at all, using cfdocument without a filename will send the pdf (of flashpaper) directly to the browser without using the cfheader and cfcontent. Caveat: Like with using cfheader/cfcontent, you need to do this before the cache gets flushed to the browser, since it's basically doing the same thing without having to store the file. To get the content, I would probably use cfsavecontent wrapped around the same calls/includes/etc. that generate the page, with two major exceptions. cfdocument seems to have issues with external stylesheets, so using an include to put the styles directly into the document is probably a good idea. You can try using an @import instead -- it works for some people. Also, I'd be careful about relative links to images, as they can sometimes break. A: The <cfdocument> approach is the sanctioned way to get it done, however it does not offer everything possible in the way of manipulating existing PDF documents. I had a project where I needed to generate coupons based using a pre-designed, print-resolution PDF template. <cfdocument> would have let me approximate the output, but only with bitmap images embedded in HTML. True, I could fake print-resolution by making a large image and scaling it in HTML, but the original was a nice, clean, vector-image file and I wanted to use that instead. I ended up using a copy of <cfx_pdf> to get the job done. (Developer's Site, CF Tag Store) It's a CF wrapper around a Java PDF library that lets you manipulate existing PDF documents, including filling out PDF forms, setting permissions, merging files, drawing vector graphics, tables, and text, use custom fonts, etc, etc. If you are willing to work with it, you can get some pretty spectacular results. The one drawback is that it appears the developer has left this product out to pasture for a long time. The developer site is still copyright 2003 and doesn't mention anything past ColdFusion MX 6.1. I ended up having to break some of the encrypted templates in order to fix a couple of bugs and make it work as I needed it to. Nontheless, it is a powerful tool. 
A: I'm not that familiar with ColdFusion, but what you need to do is set the Content-Type of the page when the user requests it to be application/octet-stream. This will prompt them for a download every time. Hope this helps!
How do I turn a ColdFusion page into a PDF download?
I would like to turn the HTML generated by my CFM page into a PDF, and have the user prompted with the standard "Save As" prompt when navigating to my page.
[ "You should use the cfdocument tag (with format=\"PDF\") to generate the PDF by placing it around the page you are generating. You'll want to specify a filename attribute, otherwise the document will just stream right to your browser.\nAfter you have saved the content as a PDF, use cfheader and cfcontent in combination to output the PDF as an attachment (\"Save As\") and add the file to the response stream. I also added deletefile=\"Yes\" on the cfcontent tag to keep the file system clean of the files.\n<cfdocument format=\"PDF\" filename=\"file.pdf\" overwrite=\"Yes\">\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n<html>\n<head>\n <title>Hello World</title>\n</head>\n<body>\n Hello World\n</body>\n</html>\n</cfdocument>\n<cfheader name=\"Content-Disposition\" value=\"attachment;filename=file.pdf\">\n<cfcontent type=\"application/octet-stream\" file=\"#expandPath('.')#\\file.pdf\" deletefile=\"Yes\">\n\nAs an aside: I'm just using file.pdf for the filename in the example below, but you might want to use some random or session generated string for the filename to avoid problems resulting from race conditions.\n", "If you want to avoid storing the PDF at all, using cfdocument without a filename will send the pdf (of flashpaper) directly to the browser without using the cfheader and cfcontent.\nCaveat: Like with using cfheader/cfcontent, you need to do this before the cache gets flushed to the browser, since it's basically doing the same thing without having to store the file.\nTo get the content, I would probably use cfsavecontent wrapped around the same calls/includes/etc. that generate the page, with two major exceptions. cfdocument seems to have issues with external stylesheets, so using an include to put the styles directly into the document is probably a good idea. You can try using an @import instead -- it works for some people. Also, I'd be careful about relative links to images, as they can sometimes break. \n", "The <cfdocument> approach is the sanctioned way to get it done, however it does not offer everything possible in the way of manipulating existing PDF documents. I had a project where I needed to generate coupons based using a pre-designed, print-resolution PDF template. <cfdocument> would have let me approximate the output, but only with bitmap images embedded in HTML. True, I could fake print-resolution by making a large image and scaling it in HTML, but the original was a nice, clean, vector-image file and I wanted to use that instead. \nI ended up using a copy of <cfx_pdf> to get the job done. (Developer's Site, CF Tag Store) It's a CF wrapper around a Java PDF library that lets you manipulate existing PDF documents, including filling out PDF forms, setting permissions, merging files, drawing vector graphics, tables, and text, use custom fonts, etc, etc. If you are willing to work with it, you can get some pretty spectacular results. \nThe one drawback is that it appears the developer has left this product out to pasture for a long time. The developer site is still copyright 2003 and doesn't mention anything past ColdFusion MX 6.1. I ended up having to break some of the encrypted templates in order to fix a couple of bugs and make it work as I needed it to. Nontheless, it is a powerful tool.\n", "I'm not that familiar with ColdFusion, but what you need to do is set the Content-Type of the page when the user requests it to be application/octet-stream. This will prompt them for a download every time. \nHope this helps!\n" ]
[ 19, 4, 3, 1 ]
[]
[]
[ "coldfusion", "pdf" ]
stackoverflow_0000073964_coldfusion_pdf.txt
Q: Time Synchronization Ubuntu Server Under Parallels I've installed Ubuntu Server (8.04) into Parallels and found that the system time/clock ran fast to the extent that it would gain hours over time. A: What about using an NTP service to keep it sync'd? A: You could just want to install ntpd, it works well enough on real servers, should also do the trick on virtual ones. Another possibility is to check if Parallels has a configuration option like "Sync guest clock to host clock".
Time Synchronization Ubuntu Server Under Parallels
I've installed Ubuntu Server (8.04) into Parallels and found that the system time/clock ran fast to the extent that it would gain hours over time.
[ "What about using an NTP service to keep it sync'd?\n", "You could just want to install ntpd, it works well enough on real servers, should also do the trick on virtual ones. Another possibility is to check if Parallels has a configuration option like \"Sync guest clock to host clock\".\n" ]
[ 2, 0 ]
[]
[]
[ "parallels", "ubuntu", "virtual_machine" ]
stackoverflow_0000088289_parallels_ubuntu_virtual_machine.txt
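On Ubuntu 8.04 the ntpd suggestion boils down to the following; the pool servers in the default config are fine for most setups:

sudo apt-get install ntp       # installs ntpd with a default pool.ntp.org config
ntpq -p                        # after a few minutes, verify peers are being polled
sudo ntpdate pool.ntp.org      # optional one-shot sync (ntpdate package; stop ntpd first)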
Q: Make git-svn work on Slackware 12.1 It is obviosly some Perl extensions. Perl version is 5.8.8. I found Error.pm, but now I'm looking for Core.pm. While we're at it: how do you guys search for those modules. I tried Google, but that didn't help much. Thanks. And finally, after I built everything, running: ./Build install gives me: Running make install-lib /bin/ginstall -c -d /usr/lib/perl5/site_perl/5.8.8/i486-linux-thread-multi/Alien/SVN --prefix=/usr /bin/ginstall: unrecognized option `--prefix=/usr' Try `/bin/ginstall --help' for more information. make: *** [install-fsmod-lib] Error 1 installing libs failed at inc/My/SVN/Builder.pm line 165. Looks like Slackware's 'ginstall' really does not have that option. I think I'm going to Google a little bit now, to see how to get around this. A: Base class package "Module::Build" is empty. (Perhaps you need to 'use' the module which defines that package first.) at inc/My/SVN/Builder.pm line 5 BEGIN failed--compilation aborted at inc/My/SVN/Builder.pm line 5. Compilation failed in require at Build.PL line 6. BEGIN failed--compilation aborted at Build.PL line 6. is a (rather poor) way of asking you to install Module::Build. Once you do that, it's perl Build.PL ./Build ./Build test ./Build install A: how do you guys search for those modules http://search.cpan.org/ A: now I'm looking for Core.pm That’s SVN::Core, which is a bit of a problem. Try installing Alien::SVN from CPAN. That worked for me on my freshly installed Slackware 12.0 on my laptop, but I have yet to get it to install on my workstation. A: It should be compatible. The CPAN Tester's matrix shows no failures for Perl 5.8.8 on any platform. Per the README, you can install it by doing: perl Makefile.pl make make test make install A: https://metacpan.org/ is your first port of call for Perl modules. A: I'm guessing you're running on Slackware so the cpan command is what you want to be using to install any Perl modules. It will pull in all dependencies for you. If you're running it for the first time it will have to do some cofiguration, but newer versions of cpan will ask if you want it to automatically configure it. $ sudo cpan cpan> install Alien::SVN Additionally, if there's a package management application for Slackware, you should try that first to install new Perl modules. A: What do you mean by "does not seem to be compatible"? Can you post the error message? If the latest version does not work, you can select an older version in the "other releases" drop down and download that. Edit: to those reading this, the author updated the question, so my answer seems a bit out of left field :) A: The place to search is http://search.cpan.org. I have my browser (Firefox) set up so that I can type "cpan foo" in the address bar and it will search CPAN for modules matching "foo." You can do this with either a keyword bookmark or by assigning a keyword to a search plugin.
Make git-svn work on Slackware 12.1
It is obviously missing some Perl extensions. Perl version is 5.8.8. I found Error.pm, but now I'm looking for Core.pm. While we're at it: how do you guys search for those modules? I tried Google, but that didn't help much. Thanks. And finally, after I built everything, running: ./Build install gives me: Running make install-lib /bin/ginstall -c -d /usr/lib/perl5/site_perl/5.8.8/i486-linux-thread-multi/Alien/SVN --prefix=/usr /bin/ginstall: unrecognized option `--prefix=/usr' Try `/bin/ginstall --help' for more information. make: *** [install-fsmod-lib] Error 1 installing libs failed at inc/My/SVN/Builder.pm line 165. Looks like Slackware's 'ginstall' really does not have that option. I think I'm going to Google a little bit now, to see how to get around this.
[ "Base class package \"Module::Build\" is empty.\n (Perhaps you need to 'use' the module which defines that package first.)\n at inc/My/SVN/Builder.pm line 5\nBEGIN failed--compilation aborted at inc/My/SVN/Builder.pm line 5.\nCompilation failed in require at Build.PL line 6.\nBEGIN failed--compilation aborted at Build.PL line 6.\n\nis a (rather poor) way of asking you to install Module::Build.\nOnce you do that, it's\nperl Build.PL\n./Build\n./Build test\n./Build install\n\n", "\nhow do you guys search for those modules\n\nhttp://search.cpan.org/\n", "\nnow I'm looking for Core.pm\n\nThat’s SVN::Core, which is a bit of a problem. Try installing Alien::SVN from CPAN. That worked for me on my freshly installed Slackware 12.0 on my laptop, but I have yet to get it to install on my workstation.\n", "It should be compatible. The CPAN Tester's matrix shows no failures for Perl 5.8.8 on any platform.\nPer the README, you can install it by doing:\nperl Makefile.pl\nmake\nmake test\nmake install\n\n", "https://metacpan.org/ is your first port of call for Perl modules.\n", "I'm guessing you're running on Slackware so the cpan command is what you want to be using to install any Perl modules. It will pull in all dependencies for you. If you're running it for the first time it will have to do some cofiguration, but newer versions of cpan will ask if you want it to automatically configure it.\n$ sudo cpan\ncpan> install Alien::SVN\nAdditionally, if there's a package management application for Slackware, you should try that first to install new Perl modules.\n", "What do you mean by \"does not seem to be compatible\"? Can you post the error message?\nIf the latest version does not work, you can select an older version in the \"other releases\" drop down and download that.\nEdit: to those reading this, the author updated the question, so my answer seems a bit out of left field :)\n", "The place to search is http://search.cpan.org.\nI have my browser (Firefox) set up so that I can type \"cpan foo\" in the address bar and it will search CPAN for modules matching \"foo.\" You can do this with either a keyword bookmark or by assigning a keyword to a search plugin.\n" ]
[ 3, 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "git_svn", "perl", "slackware" ]
stackoverflow_0000087224_git_svn_perl_slackware.txt
Q: Page index is not working Help me ..my page index is not working in visual studio. my page load is as follows: protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); } } protected void CustomerView_PageIndexChanging(object sender, System.Web.UI.WebControls.GridViewPageEventArgs e) { int newPageNumber = e.NewPageIndex + 1; CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); } what am i doing wrong my page index in not working. A: Try this. I think you have to set the GridView's PageIndex property manually. protected void CustomerView_PageIndexChanging(object sender, System.Web.UI.WebControls.GridViewPageEventArgs e) { CustomerView.PageIndex = e.NewPageIndex; CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); }
Page index is not working
Help me: my page index is not working in Visual Studio. My page load is as follows: protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); } } protected void CustomerView_PageIndexChanging(object sender, System.Web.UI.WebControls.GridViewPageEventArgs e) { int newPageNumber = e.NewPageIndex + 1; CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); } What am I doing wrong? My page index is not working.
[ "Try this. I think you have to set the GridView's PageIndex property manually.\nprotected void CustomerView_PageIndexChanging(object sender, System.Web.UI.WebControls.GridViewPageEventArgs e)\n{\n CustomerView.PageIndex = e.NewPageIndex;\n CustomerView.DataSource = Customer.GetAll();\n CustomerView.DataBind();\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#" ]
stackoverflow_0000088231_c#.txt
Q: Sockets and Processes in Java In Java, what would the best way be to have a constantly listening port open, and still send upon receipt of a packet. I am not particularly savvy with network programming at the moment, so the tutorials I have found on the net aren't particularly helpful. Would it make sense to have the listening socket as a serversocket and run it in a separate thread to the socket I'm using to send data to the server? In a loosely related question. Does anyone know if programming simply for java, in netbeans then exporting it for use on a blackberry (using a plugin) the sockets would still work ? A: If you can afford the threading, try this (keep in mind I've left out some details like exception handling and playing nice with threads). You may want to look into SocketChannels and/or NIO async sockets / selectors. This should get you started. boolean finished = false; int port = 10000; ServerSocket server = new ServerSocket(port); while (!finished) { // This will block until a connection is made Socket s = server.accept(); // Spawn off some thread (or use a thread pool) to handle this socket // Server will continue to listen } A: As for connecting to a Blackberry, this is problematic since in most cases the Blackberry won't have a public IP address and will instead be behind a WAP gateway or wireless provider access point server. RIM provides the Mobile Data Server (MDS) to get around this and provide "Push" data which uses ServerSocket semantics on the Blackberry. The MDS is available with the Blackberry Enterprise Server (BES) and the Unite Server. Once set up data can be sent to a particular unit via the MDS using the HTTP protocol. There is an excellent description of the Push protocol here with LAMP source code. The parameter PORT=7874 in pushout.pl connects to the Blackberry Browser Push server socket. By changing that parameter the payload can be sent to an arbitrary port where your own ServerSocket is accepting connections. A: If your socket code has to run on a BlackBerry, you cannot using standard Java sockets. You have to use the J2ME Connector.open API for creating both types of sockets (those that initiate connections from the BlackBerry, and those that listen for connections/pushes on the BlackBerry). Have a look at the examples that come with RIM's JDE. A: I'd need to go back to the basics for this one too. I'd recommend O'Reilly's excellent Java in a Nutshell that includes code examples for just such a case (available online as well). See Chapter 7 for a pretty good overview of the decisions you'd want to make early on.
Sockets and Processes in Java
In Java, what would the best way be to have a constantly listening port open, and still send upon receipt of a packet? I am not particularly savvy with network programming at the moment, so the tutorials I have found on the net aren't particularly helpful. Would it make sense to have the listening socket as a ServerSocket and run it in a separate thread to the socket I'm using to send data to the server? On a loosely related note: does anyone know if, when programming plain Java in NetBeans and then exporting it for use on a BlackBerry (using a plugin), the sockets would still work?
[ "If you can afford the threading, try this (keep in mind I've left out some details like exception handling and playing nice with threads). You may want to look into SocketChannels and/or NIO async sockets / selectors. This should get you started.\nboolean finished = false;\nint port = 10000;\nServerSocket server = new ServerSocket(port);\n\nwhile (!finished) {\n // This will block until a connection is made\n Socket s = server.accept();\n // Spawn off some thread (or use a thread pool) to handle this socket\n // Server will continue to listen\n}\n\n", "As for connecting to a Blackberry, this is problematic since in most cases the Blackberry won't have a public IP address and will instead be behind a WAP gateway or wireless provider access point server. RIM provides the Mobile Data Server (MDS) to get around this and provide \"Push\" data which uses ServerSocket semantics on the Blackberry. The MDS is available with the Blackberry Enterprise Server (BES) and the Unite Server.\nOnce set up data can be sent to a particular unit via the MDS using the HTTP protocol. There is an excellent description of the Push protocol here with LAMP source code. The parameter PORT=7874 in pushout.pl connects to the Blackberry Browser Push server socket. By changing that parameter the payload can be sent to an arbitrary port where your own ServerSocket is accepting connections. \n", "If your socket code has to run on a BlackBerry, you cannot using standard Java sockets. You have to use the J2ME Connector.open API for creating both types of sockets (those that initiate connections from the BlackBerry, and those that listen for connections/pushes on the BlackBerry). Have a look at the examples that come with RIM's JDE.\n", "I'd need to go back to the basics for this one too. I'd recommend O'Reilly's excellent Java in a Nutshell that includes code examples for just such a case (available online as well). See Chapter 7 for a pretty good overview of the decisions you'd want to make early on.\n" ]
[ 12, 2, 2, 1 ]
[]
[]
[ "blackberry", "java", "networking", "sockets" ]
stackoverflow_0000045623_blackberry_java_networking_sockets.txt
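Filling out the first answer's skeleton a little (port number, message format and class names are arbitrary): the ServerSocket lives in its own thread, each accepted connection gets a worker, and the rest of the program stays free to open its own outgoing Socket. This is desktop Java only; as the other answers note, on a BlackBerry you would use the J2ME Connector.open API instead.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class Listener implements Runnable {
    private final ServerSocket server;

    public Listener(int port) throws IOException {
        server = new ServerSocket(port);   // bound before the thread starts
    }

    public void run() {
        while (true) {
            try {
                final Socket s = server.accept();       // blocks until a client connects
                new Thread(new Runnable() {
                    public void run() { handle(s); }
                }).start();                              // one worker per connection
            } catch (IOException e) {
                e.printStackTrace();
                return;
            }
        }
    }

    private void handle(Socket s) {
        try {
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            PrintWriter out = new PrintWriter(s.getOutputStream(), true);
            out.println("got: " + in.readLine());        // echo one line back
            s.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        new Thread(new Listener(10000)).start();         // listen in the background

        // ...meanwhile the rest of the program can send on its own socket:
        Socket out = new Socket("localhost", 10000);
        new PrintWriter(out.getOutputStream(), true).println("hello");
        System.out.println(new BufferedReader(
                new InputStreamReader(out.getInputStream())).readLine());
        out.close();                                     // listener keeps running; stop the process to end the sketch
    }
}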
Q: Xml or Sqlite, When to drop Xml for a Database? I really like Xml for saving data, but when does sqlite/database become the better option? eg, when the xml has more than x items or is greater than y MB? I am coding an rss reader and I believe I made the wrong choice in using xml over a sqlite database to store a cache of all the feeds items. There are some feeds which have an xml file of ~1mb after a month, another has over 700 items, while most only have ~30 items and are ~50kb in size after a several months. I currently have no plans to implement a cap because I like to be able to search through everything. So, my questions are: When is the overhead of sqlite/databases justified over using xml? Are the few large xml files justification enough for the database when there are a lot of small ones, though even the small ones will grow over time? (a long long time) updated (more info) Every time a feed is selected in the GUI I reload all the items from that feeds xml file. I also need to modify the read/unread status which seems really hacky when I loop through all nodes in the xml to find the item and then set it to read/unread. A: Man do I have experience with this. I work on a project where we originally stored all of our data using XML, then moved to SQLite. There are many pros and cons to each technology, but it was performance that caused the switchover. Here is what we observed. For small databases (a few meg or smaller), XML was much faster, and easier to deal with. Our data was naturally in a tree format, which made XML much more attractive, and XPath allowed us to do many queries in one simple line rather than having to walk down an ancestry tree. We were programming in a Win32 environment, and used the standard Microsoft DOM library. We would load all the data into memory, parse it into a DOM tree and search, add, modify on the in memory copy. We would periodically save the data, and needed to rotate copies in case the machine crashed in the middle of a write. We also needed to build up some "indexes" by hand using C++ tree maps. This, of course would be trivial to do with SQL. Note that the size of the data on the filesystem was a factor of 2-4 smaller than the "in memory" DOM tree. By the time the data got to 10M-100M size, we started to have real problems. Interestingly enough, at all data sizes, XML processing was much faster than SQLite turned out to be (because it was in memory, not on the hard drive)! The problem was actually twofold- first, loadup time really started to get long. We would need to wait a minute or so before the data was in memory and the maps were built. Of course once loaded the program was very fast. The second problem was that all of this memory was tied up all the time. Systems with only a few hundred meg would be unresponsive in other apps even though we ran very fast. We actually looking into using a filesystem based XML database. There are a couple open sourced versions XML databases, we tried them. I have never tried to use a commercial XML database, so I can't comment on them. Unfortunately, we could never get the XML databases to work well at all. Even the act of populating the database with hundreds of meg of XML took hours.... Perhaps we were using it incorrectly. Another problem was that these databases were pretty heavyweight. They required Java and had full client server architecture. We gave up on this idea. We found SQLite then. It solved our problems, but at a price. 
When we initially plugged SQLite in, the memory and load time problems were gone. Unfortunately, since all processing was now done on the harddrive, the background processing load went way up. While earlier we never even noticed the CPU load, now the processor usage was way up. We needed to optimize the code, and still needed to keep some data in memory. We also needed to rewrite many simple XPath queries as complicated multiquery algorithms. So here is a summary of what we learned. For tree data, XML is much easier to query and modify using XPath. For small datasets (less than 10M), XML blew away SQLite in performance. For large datasets (greater than 10M-100M), XML load time and memory usage became a big problem, to the point that some computers become unusable. We couldn't get any opensource XML database to fix the problems associated with large datasets. SQLite doesn't have the memory problems of XML DOM, but it is generally slower in processing the data (it is on the hard drive, not in memory). (note- SQLite tables can be stored in memory, perhaps this would make it as fast.... We didn't try this because we wanted to get the data out of memory.) Storing and querying tree data in a table is not enjoyable. However, managing transactions and indexing partially makes up for it. A: I basically agree with Mitchel, that this can be highly specific depending on what are you going to do with XML and SQLite. For your case (cache), it seems to me that using SQLite (or other embedded databases) makes more sense. First I don't really think that SQLite will need more overhead than XML. And I mean both development time overhead and runtime overhead. Only problem is that you have a dependence on SQLite library. But since you would need some library for XML anyway it doesn't matter (I assume project is in C/C++). Advantages of SQLite over XML: everything in one file, performance loss is lower than XML as cache gets bigger, you can keep feed metadata separate from cache itself (other table), but accessible in the same way, SQL is probably easier to work with than XPath for most people. Disadvantages of SQLite: can be problematic with multiple processes accessing same database (probably not your case), you should know at least basic SQL. Unless there will be hundreds of thousands of items in cache, I don't think you will need to optimize it much, maybe in some way it can be more dangerous from security standpoint (SQL injection). On the other hand, you are not coding web app, so this should not happen. Other things are on par for both solutions probably. To sum it up, answers to your questions respectively: You will not know, unless you test your specific application with both back ends. Otherwise it's always just a guess. Basic support for both caches should not be a problem to code. Then benchmark and compare. Because of the way XML files are organized, SQLite searches should always be faster (barring some corner cases where it doesn't matter anyway because it's blazingly fast). Speeding up searches in XML would require index database anyway, in your case that would mean having cache for cache, not a particularly good idea. But with SQLite you can have indexing as part of database. A: Don't forget that you have a great database at your fingertips: the filesystem! Lots of programmers forget that a decent directory-file structure is/has: It's fast as hell It's portable It has a tiny runtime footprint People are talking about splitting up XML files into multiple XML files... 
I would consider splitting your XML into multiple directories and multiple plaintext files. Give it a go. It's refreshingly fast. A: Use XML for data that the application should know - configuration, logging and what not. Use databases(oracle, SQL server etc) for data that the user interacts with directly or indirectly - real data Use SQLite if the user data is more of a serialized collection - like huge list of files and their content or collection of email items etc. SQLite is good at that. Depends on the kind and the size of the data. A: I wouldn't use XML for storing RSS items. A feed reader makes constant updates as it receives data. With XML, you need to load the data from file first, parse it, then store it for easy search/retrieval/update. Sounds like a database... Also, what happens if your application crashes? if you use XML, what state is the data in the XML file versus the data in memory. At least with SQLite you get atomicity, so you are assured that your application will start with the same state as when the last database write was made. A: XML is best used as an interchange format when you need to move data from your application to somewhere else or share information between applications. A database should be the preferred method of storage for almost any size application. A: When should XML be used for data persistence instead of a database? Almost never. XML is a data transport language. It is slow to parse and awkward to query. Parse the XML (don't shred it!) and convert the resulting data into domain objects. Then persist the domain objects. A major advantage of a database for persistence is SQL which means unstructured queries and access to common tools and optimization techniques. A: I have made the switch to SQLite and I feel much better knowing it's in a database. There are a lot of other benefits from this: Adding new items is really simple Sorting by multiple columns Removing duplicates with a unique index I've created 2 views, one for unread items and one for all items, not sure if this is the best use of views, but I really wanted to try using them. I also benchmarked the xml vs sqlite using the StopWatch class, and the sqlite is faster, although it could just be that my way of parsing xml files wasn't the fastest method. Small # items and size (25 items, 30kb) ~1.5 ms sqlite ~8.0 ms xml Large # of items (700 items, 350kb) ~20 ms sqlite ~25 ms xml Large file size (850 items, 1024kb) ~45 ms sqlite ~60 ms xml A: To me it really depends on what you are doing with them, how many users/processes need access to them at the same time etc. I work with large XML files all the time, but they are single process, import style items, that multi-user, or performance are not really needs. SO really it is a balance. A: If any time you will need to scale, use databases. A: XML is good for storing data which is not completely structured and you typically want to exchange it with another application. I prefer to use a SQL database for data. XML is error prone as you can cause subtle errors due to typos or ommissions in the data itself. Some open source application frameworks use too many xml files for configuration, data, etc. I prefer to have it in SQL. Since you ask for a rule of thumb, I would say that use XML based application data, configuration, etc if you are going to set it up once and not access/search it much. For active searches and updations, its best to go with SQL. 
For example, a web server stores application data in a XML file and you dont really need to perform complex search, update the file. The web server starts, reads the xml file and thats that. So XML is perfect here. Suppose you use a framework like Struts. You need to use XML and the action configurations dont change much once the application is developed and deployed. So again, the XML file is a good way. Now if your Struts developed application allows extensive searches and updations, deletions, then SQL is the optimal way. Offcourse, you will surely meet one or two developers in your organisation who will chant XML or SQL only and proclaim XML or SQL as the only way to go. Beware of such folks and do what 'feels' right for your application. Dont just follow a 'technology religion'. Think of things like how often you need to update the data, how often you need to search the data. Then you will have your answer on what to use - XML or SQL. A: I agree with @Bradley. XML is very slow and not particularly useful as a storage format. Why bother? Will you be editing the data by hand using a text editor? If so, XML still isn't a very convenient format compared to something like YAML. With something like SQlite, queries are easier to write, and there's a well defined API for getting your data in and out. XML is fine if you need to send data around between programs. But in the name of efficiency, you should probably produce the XML at sending time, and parse it into "real data" at receive time. All the above means that your question about "when the overhead of a database is justified" is kind of moot. XML has a way higher overhead, all the time, than SQlite does. (Full-on databases like MSSQL are heavier, especially in administrative overhead, but that's a totally different question.) A: XML can be stored as text and as a binary file format. If your primary goal is to let a computer read / write a file format effeciently you should work with a binary file format. Databases are an easy to use way of storing and maintaining data. They are not the fastest way to store data that is a binary file format. What can speed things up is using an in memory database / database type. Sqlite has this option. And this sounds like the best way to do it for you. A: My opinion is that you should use SQLite (or another appropriate embedded database) anytime you don't need a pure-text file format. Note, this is a pretty big exception. There are a lot of scenarios that require, or are benefited by, pure-text file formats. As far as overhead goes, SQLite compiles to something like 250 k with normal flags. Many XML parsing libraries are larger than SQLite. You get no concurrency gains using XML. The SQLite binary file format is going to support much more efficient writes (largely because you can't append to the end of a well-formatted XML file). And even reading data, most of which I assume is fairly random access, is going to be faster using SQLite. And to top it all off, you get access to the benefits of SQL like transactions and indexes. Edit: Forgot to mention. One benefit of SQLite (as opposed to many databases) is that it allows any type in any row in any column. Basically, with SQLite you get the same freedom you have with XML in terms of datatypes. This also means that you don't have to worry about putting limits on text columns. 
A: You should note that many large Relational DBs (Oracle and SQLServer) have XML datatypes to store data within a database and use XPath within the SQL statement to gain access to that data. Also, there are native XML databases which work very much like SQLite in the sense they are one binary file holding a collection of documents (which could roughly be a table) then you can either XPath/XQuery on a single document or the whole collection. So with an XML database you can do things like store the days data as a separate XML document in the collection... so you just need to use that one document when your dealing with the data for today. But write an XQuery to figure out historical data on the collection of documents for that person. Slick. I've used Berkeley XMLDB (now backed by Oracle). There are others if you search google for "Native XML Database". I've not seen a performance problem with storing/retrieving data in this manner. XQuery is a different beast (but well worth learning), however you may be able to just use the XPaths you currently use with slight modifications. A: A database is great as part of your program. If quering the data is part of your business logic. XML is best as a file format, especially if you data format is: 1, Hierarchal 2, Likely to change in the future in ways you can't guess 3, The data is going to live longer than the program A: I say it's not a matter of data size, but of data type. If your data is structured, use a relational database. If your data is semi-structured, use XML or - if the data amounts really grow too large - an XML database. A: If your searching go with a db. You could split the xml files up into directories to ease seeking, but the managerial overhead easily gets quite heavy. You also get a lot more than just performance with a sql db...
Xml or Sqlite, When to drop Xml for a Database?
I really like Xml for saving data, but when does sqlite/database become the better option? E.g., when the xml has more than x items or is greater than y MB? I am coding an rss reader and I believe I made the wrong choice in using xml over a sqlite database to store a cache of all the feeds' items. There are some feeds which have an xml file of ~1mb after a month, another has over 700 items, while most only have ~30 items and are ~50kb in size after several months. I currently have no plans to implement a cap because I like to be able to search through everything. So, my questions are: When is the overhead of sqlite/databases justified over using xml? Are the few large xml files justification enough for the database when there are a lot of small ones, though even the small ones will grow over time? (a long long time) updated (more info) Every time a feed is selected in the GUI I reload all the items from that feed's xml file. I also need to modify the read/unread status, which seems really hacky when I loop through all nodes in the xml to find the item and then set it to read/unread.
[ "Man do I have experience with this. I work on a project where we originally stored all of our data using XML, then moved to SQLite. There are many pros and cons to each technology, but it was performance that caused the switchover. Here is what we observed.\nFor small databases (a few meg or smaller), XML was much faster, and easier to deal with. Our data was naturally in a tree format, which made XML much more attractive, and XPath allowed us to do many queries in one simple line rather than having to walk down an ancestry tree.\nWe were programming in a Win32 environment, and used the standard Microsoft DOM library. We would load all the data into memory, parse it into a DOM tree and search, add, modify on the in memory copy. We would periodically save the data, and needed to rotate copies in case the machine crashed in the middle of a write.\nWe also needed to build up some \"indexes\" by hand using C++ tree maps. This, of course would be trivial to do with SQL.\nNote that the size of the data on the filesystem was a factor of 2-4 smaller than the \"in memory\" DOM tree.\nBy the time the data got to 10M-100M size, we started to have real problems. Interestingly enough, at all data sizes, XML processing was much faster than SQLite turned out to be (because it was in memory, not on the hard drive)! The problem was actually twofold- first, loadup time really started to get long. We would need to wait a minute or so before the data was in memory and the maps were built. Of course once loaded the program was very fast. The second problem was that all of this memory was tied up all the time. Systems with only a few hundred meg would be unresponsive in other apps even though we ran very fast.\nWe actually looking into using a filesystem based XML database. There are a couple open sourced versions XML databases, we tried them. I have never tried to use a commercial XML database, so I can't comment on them. Unfortunately, we could never get the XML databases to work well at all. Even the act of populating the database with hundreds of meg of XML took hours.... Perhaps we were using it incorrectly. Another problem was that these databases were pretty heavyweight. They required Java and had full client server architecture. We gave up on this idea.\nWe found SQLite then. It solved our problems, but at a price. When we initially plugged SQLite in, the memory and load time problems were gone. Unfortunately, since all processing was now done on the harddrive, the background processing load went way up. While earlier we never even noticed the CPU load, now the processor usage was way up. We needed to optimize the code, and still needed to keep some data in memory. We also needed to rewrite many simple XPath queries as complicated multiquery algorithms.\nSo here is a summary of what we learned.\n\nFor tree data, XML is much easier to query and modify using XPath.\n\nFor small datasets (less than 10M), XML blew away SQLite in performance.\n\nFor large datasets (greater than 10M-100M), XML load time and memory usage became a big problem, to the point that some computers become unusable.\n\nWe couldn't get any opensource XML database to fix the problems associated with large datasets.\n\nSQLite doesn't have the memory problems of XML DOM, but it is generally slower in processing the data (it is on the hard drive, not in memory). (note- SQLite tables can be stored in memory, perhaps this would make it as fast.... 
We didn't try this because we wanted to get the data out of memory.)\n\nStoring and querying tree data in a table is not enjoyable. However, managing transactions and indexing partially makes up for it.\n\n\n", "I basically agree with Mitchel, that this can be highly specific depending on what are you going to do with XML and SQLite. For your case (cache), it seems to me that using SQLite (or other embedded databases) makes more sense.\nFirst I don't really think that SQLite will need more overhead than XML. And I mean both development time overhead and runtime overhead. Only problem is that you have a dependence on SQLite library. But since you would need some library for XML anyway it doesn't matter (I assume project is in C/C++).\nAdvantages of SQLite over XML:\n\neverything in one file,\nperformance loss is lower than XML as cache gets bigger,\nyou can keep feed metadata separate from cache itself (other table), but accessible in the same way,\nSQL is probably easier to work with than XPath for most people.\n\nDisadvantages of SQLite:\n\ncan be problematic with multiple processes accessing same database (probably not your case),\nyou should know at least basic SQL. Unless there will be hundreds of thousands of items in cache, I don't think you will need to optimize it much,\nmaybe in some way it can be more dangerous from security standpoint (SQL injection). On the other hand, you are not coding web app, so this should not happen.\n\nOther things are on par for both solutions probably.\nTo sum it up, answers to your questions respectively:\n\nYou will not know, unless you test your specific application with both back ends. Otherwise it's always just a guess. Basic support for both caches should not be a problem to code. Then benchmark and compare.\n\nBecause of the way XML files are organized, SQLite searches should always be faster (barring some corner cases where it doesn't matter anyway because it's blazingly fast). Speeding up searches in XML would require index database anyway, in your case that would mean having cache for cache, not a particularly good idea. But with SQLite you can have indexing as part of database.\n\n\n", "Don't forget that you have a great database at your fingertips: the filesystem!\nLots of programmers forget that a decent directory-file structure is/has:\n\nIt's fast as hell\nIt's portable\nIt has a tiny runtime footprint\n\nPeople are talking about splitting up XML files into multiple XML files... I would consider splitting your XML into multiple directories and multiple plaintext files. \nGive it a go. It's refreshingly fast.\n", "\nUse XML for data that the\napplication should know -\nconfiguration, logging and what not.\nUse databases(oracle, SQL server etc) for data that the user\ninteracts with directly or\nindirectly - real data\nUse SQLite if the user data is more\nof a serialized collection - like\nhuge list of files and their content\nor collection of email items etc.\nSQLite is good at that.\n\nDepends on the kind and the size of the data.\n", "I wouldn't use XML for storing RSS items. A feed reader makes constant updates as it receives data.\nWith XML, you need to load the data from file first, parse it, then store it for easy search/retrieval/update. Sounds like a database...\nAlso, what happens if your application crashes? if you use XML, what state is the data in the XML file versus the data in memory. 
At least with SQLite you get atomicity, so you are assured that your application will start with the same state as when the last database write was made.\n", "XML is best used as an interchange format when you need to move data from your application to somewhere else or share information between applications. A database should be the preferred method of storage for almost any size application. \n", "When should XML be used for data persistence instead of a database? Almost never. XML is a data transport language. It is slow to parse and awkward to query. Parse the XML (don't shred it!) and convert the resulting data into domain objects. Then persist the domain objects. A major advantage of a database for persistence is SQL which means unstructured queries and access to common tools and optimization techniques.\n", "I have made the switch to SQLite and I feel much better knowing it's in a database. \nThere are a lot of other benefits from this: \n\nAdding new items is really simple\nSorting by multiple columns\nRemoving duplicates with a unique index\n\nI've created 2 views, one for unread items and one for all items, not sure if this is the best use of views, but I really wanted to try using them.\nI also benchmarked the xml vs sqlite using the StopWatch class, and the sqlite is faster, although it could just be that my way of parsing xml files wasn't the fastest method.\n\nSmall # items and size (25 items, 30kb)\n\n~1.5 ms sqlite\n~8.0 ms xml\n\nLarge # of items (700 items, 350kb)\n\n~20 ms sqlite\n~25 ms xml\n\nLarge file size (850 items, 1024kb)\n\n~45 ms sqlite\n~60 ms xml\n\n\n", "To me it really depends on what you are doing with them, how many users/processes need access to them at the same time etc.\nI work with large XML files all the time, but they are single process, import style items, that multi-user, or performance are not really needs.\nSO really it is a balance.\n", "If any time you will need to scale, use databases.\n", "XML is good for storing data which is not completely structured and you typically want to exchange it with another application. I prefer to use a SQL database for data. XML is error prone as you can cause subtle errors due to typos or ommissions in the data itself. Some open source application frameworks use too many xml files for configuration, data, etc. I prefer to have it in SQL.\nSince you ask for a rule of thumb, I would say that use XML based application data, configuration, etc if you are going to set it up once and not access/search it much. For active searches and updations, its best to go with SQL.\nFor example, a web server stores application data in a XML file and you dont really need to perform complex search, update the file. The web server starts, reads the xml file and thats that. So XML is perfect here. Suppose you use a framework like Struts. You need to use XML and the action configurations dont change much once the application is developed and deployed. So again, the XML file is a good way. Now if your Struts developed application allows extensive searches and updations, deletions, then SQL is the optimal way.\nOffcourse, you will surely meet one or two developers in your organisation who will chant XML or SQL only and proclaim XML or SQL as the only way to go. Beware of such folks and do what 'feels' right for your application. Dont just follow a 'technology religion'.\nThink of things like how often you need to update the data, how often you need to search the data. Then you will have your answer on what to use - XML or SQL. 
\n", "I agree with @Bradley.\nXML is very slow and not particularly useful as a storage format. Why bother? Will you be editing the data by hand using a text editor? If so, XML still isn't a very convenient format compared to something like YAML. With something like SQlite, queries are easier to write, and there's a well defined API for getting your data in and out.\nXML is fine if you need to send data around between programs. But in the name of efficiency, you should probably produce the XML at sending time, and parse it into \"real data\" at receive time.\nAll the above means that your question about \"when the overhead of a database is justified\" is kind of moot. XML has a way higher overhead, all the time, than SQlite does. (Full-on databases like MSSQL are heavier, especially in administrative overhead, but that's a totally different question.)\n", "XML can be stored as text and as a binary file format. \nIf your primary goal is to let a computer read / write a file format effeciently you should work with a binary file format. \nDatabases are an easy to use way of storing and maintaining data. \nThey are not the fastest way to store data that is a binary file format. \nWhat can speed things up is using an in memory database / database type. Sqlite has this option. \nAnd this sounds like the best way to do it for you. \n", "My opinion is that you should use SQLite (or another appropriate embedded database) anytime you don't need a pure-text file format. Note, this is a pretty big exception. There are a lot of scenarios that require, or are benefited by, pure-text file formats.\nAs far as overhead goes, SQLite compiles to something like 250 k with normal flags. Many XML parsing libraries are larger than SQLite. You get no concurrency gains using XML. The SQLite binary file format is going to support much more efficient writes (largely because you can't append to the end of a well-formatted XML file). And even reading data, most of which I assume is fairly random access, is going to be faster using SQLite.\nAnd to top it all off, you get access to the benefits of SQL like transactions and indexes.\nEdit: Forgot to mention. One benefit of SQLite (as opposed to many databases) is that it allows any type in any row in any column. Basically, with SQLite you get the same freedom you have with XML in terms of datatypes. This also means that you don't have to worry about putting limits on text columns.\n", "You should note that many large Relational DBs (Oracle and SQLServer) have XML datatypes to store data within a database and use XPath within the SQL statement to gain access to that data.\nAlso, there are native XML databases which work very much like SQLite in the sense they are one binary file holding a collection of documents (which could roughly be a table) then you can either XPath/XQuery on a single document or the whole collection. So with an XML database you can do things like store the days data as a separate XML document in the collection... so you just need to use that one document when your dealing with the data for today. But write an XQuery to figure out historical data on the collection of documents for that person. Slick.\nI've used Berkeley XMLDB (now backed by Oracle). There are others if you search google for \"Native XML Database\". I've not seen a performance problem with storing/retrieving data in this manner. 
\nXQuery is a different beast (but well worth learning), however you may be able to just use the XPaths you currently use with slight modifications.\n", "A database is great as part of your program. If quering the data is part of your business logic.\nXML is best as a file format, especially if you data format is:\n1, Hierarchal \n2, Likely to change in the future in ways you can't guess \n3, The data is going to live longer than the program\n", "I say it's not a matter of data size, but of data type. If your data is structured, use a relational database. If your data is semi-structured, use XML or - if the data amounts really grow too large - an XML database.\n", "If your searching go with a db. You could split the xml files up into directories to ease seeking, but the managerial overhead easily gets quite heavy. You also get a lot more than just performance with a sql db... \n" ]
[ 43, 23, 16, 7, 5, 5, 5, 5, 3, 2, 2, 1, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "database", "xml" ]
stackoverflow_0000077726_database_xml.txt
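The record above weighs XML against SQLite for exactly the workload an RSS cache produces: frequent small inserts, read/unread toggles, and searches across everything. As a concrete illustration of the SQLite side of that trade-off, here is a minimal Python/sqlite3 sketch of such a cache; the table name, column names, and helper functions are assumptions made for the example, not anything taken from the original posts.

# Minimal sketch of a SQLite-backed feed cache (illustrative names only).
import sqlite3

conn = sqlite3.connect("feed_cache.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS items (
    feed     TEXT NOT NULL,
    guid     TEXT NOT NULL,
    title    TEXT,
    body     TEXT,
    pub_date TEXT,
    is_read  INTEGER NOT NULL DEFAULT 0
);
-- The unique index gives de-duplication for free, one of the benefits
-- mentioned in the answers above.
CREATE UNIQUE INDEX IF NOT EXISTS ux_items_feed_guid ON items (feed, guid);
CREATE INDEX IF NOT EXISTS ix_items_feed_read ON items (feed, is_read);
""")

def cache_item(feed, guid, title, body, pub_date):
    # INSERT OR IGNORE silently skips items that are already cached.
    conn.execute(
        "INSERT OR IGNORE INTO items (feed, guid, title, body, pub_date) "
        "VALUES (?, ?, ?, ?, ?)",
        (feed, guid, title, body, pub_date),
    )
    conn.commit()

def mark_read(feed, guid, read=True):
    # One indexed UPDATE replaces the "loop through all XML nodes" step.
    conn.execute(
        "UPDATE items SET is_read = ? WHERE feed = ? AND guid = ?",
        (1 if read else 0, feed, guid),
    )
    conn.commit()

def unread(feed):
    cur = conn.execute(
        "SELECT guid, title FROM items WHERE feed = ? AND is_read = 0 "
        "ORDER BY pub_date DESC",
        (feed,),
    )
    return cur.fetchall()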
Q: In ColdFusion 8, can you declare a function as private using cfscript? Normally you create a function using cfscript like: <cfscript> function foo() { return "bar"; } </cfscript> Is there a way to declare this as a private function, available only to other methods inside the same cfc? I know you can do it with tags: <cffunction name="foo" access="private"> <cfreturn "bar"> </cffunction> But I don't want to have to rewrite this large function thats already written in cfscript. A: Not in ColdFusion 8. It was added in CF9, though. You don't need to rewrite the whole function, you can do this: <cffunction name="foo" returntype="string" output="false" access="private"> <cfscript> return "bar"; </cfscript> </cffunction> If you have access to CF9, the new syntax is: private string function foo() output="false" { return "bar"; }
In ColdFusion 8, can you declare a function as private using cfscript?
Normally you create a function using cfscript like: <cfscript> function foo() { return "bar"; } </cfscript> Is there a way to declare this as a private function, available only to other methods inside the same cfc? I know you can do it with tags: <cffunction name="foo" access="private"> <cfreturn "bar"> </cffunction> But I don't want to have to rewrite this large function that's already written in cfscript.
[ "Not in ColdFusion 8. It was added in CF9, though.\nYou don't need to rewrite the whole function, you can do this:\n<cffunction name=\"foo\" returntype=\"string\" output=\"false\" access=\"private\">\n <cfscript>\n return \"bar\";\n </cfscript>\n</cffunction>\n\nIf you have access to CF9, the new syntax is:\nprivate string function foo() output=\"false\" {\n return \"bar\";\n}\n\n" ]
[ 15 ]
[]
[]
[ "coldfusion", "coldfusion_8" ]
stackoverflow_0000088274_coldfusion_coldfusion_8.txt
Q: How do I unit test an __init__() method of a python class with assertRaises()? I have a class: class MyClass: def __init__(self, foo): if foo != 1: raise Error("foo is not equal to 1!") and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error: def testInsufficientArgs(self): foo = 0 self.assertRaises((Error), myClass = MyClass(Error, foo)) But I get... NameError: global name 'Error' is not defined Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no? A: 'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntatic placeholder to mean, "The Appropriate Exception Class". The baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'. In this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script. Here is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function: class MyClass: def __init__(self, foo): if foo != 1: raise ValueError("foo is not equal to 1!") import unittest class TestFoo(unittest.TestCase): def testInsufficientArgs(self): foo = 0 self.failUnlessRaises(ValueError, MyClass, foo) if __name__ == '__main__': unittest.main() The output is: . ---------------------------------------------------------------------- Ran 1 test in 0.007s OK There is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of UnitTest: This is an example of it in use: class TestFoo(unittest.TestCase): def failUnlessRaises(self, excClass, callableObj, *args, **kwargs): try: callableObj(*args, **kwargs) except excClass, excObj: return excObj # Actually return the exception object else: if hasattr(excClass,'__name__'): excName = excClass.__name__ else: excName = str(excClass) raise self.failureException, "%s not raised" % excName def testInsufficientArgs(self): foo = 0 excObj = self.failUnlessRaises(ValueError, MyClass, foo) self.failUnlessEqual(excObj[0], 'foo is not equal to 1!') I have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly. A: How about this: class MyClass: def __init__(self, foo): if foo != 1: raise Exception("foo is not equal to 1!") import unittest class Tests(unittest.TestCase): def testSufficientArgs(self): foo = 1 MyClass(foo) def testInsufficientArgs(self): foo = 2 self.assertRaises(Exception, MyClass, foo) if __name__ == '__main__': unittest.main() A: I think you're thinking of Exceptions. Replace the word Error in your description with Exception and you should be good to go :-)
How do I unit test an __init__() method of a python class with assertRaises()?
I have a class: class MyClass: def __init__(self, foo): if foo != 1: raise Error("foo is not equal to 1!") and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error: def testInsufficientArgs(self): foo = 0 self.assertRaises((Error), myClass = MyClass(Error, foo)) But I get... NameError: global name 'Error' is not defined Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no?
[ "'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntatic placeholder to mean, \"The Appropriate Exception Class\".\nThe baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'.\nIn this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script.\nHere is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function:\nclass MyClass:\n def __init__(self, foo):\n if foo != 1:\n raise ValueError(\"foo is not equal to 1!\")\n\nimport unittest\nclass TestFoo(unittest.TestCase):\n def testInsufficientArgs(self):\n foo = 0\n self.failUnlessRaises(ValueError, MyClass, foo)\n\nif __name__ == '__main__':\n unittest.main()\n\nThe output is:\n.\n----------------------------------------------------------------------\nRan 1 test in 0.007s\n\nOK\n\nThere is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of UnitTest:\nThis is an example of it in use:\nclass TestFoo(unittest.TestCase):\n def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):\n try:\n callableObj(*args, **kwargs)\n except excClass, excObj:\n return excObj # Actually return the exception object\n else:\n if hasattr(excClass,'__name__'): excName = excClass.__name__\n else: excName = str(excClass)\n raise self.failureException, \"%s not raised\" % excName\n\n def testInsufficientArgs(self):\n foo = 0\n excObj = self.failUnlessRaises(ValueError, MyClass, foo)\n self.failUnlessEqual(excObj[0], 'foo is not equal to 1!')\n\nI have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly.\n", "How about this:\nclass MyClass:\n def __init__(self, foo):\n if foo != 1:\n raise Exception(\"foo is not equal to 1!\")\n\nimport unittest\n\nclass Tests(unittest.TestCase):\n def testSufficientArgs(self):\n foo = 1\n MyClass(foo)\n\n def testInsufficientArgs(self):\n foo = 2\n self.assertRaises(Exception, MyClass, foo)\n\nif __name__ == '__main__':\n unittest.main()\n\n", "I think you're thinking of Exceptions. Replace the word Error in your description with Exception and you should be good to go :-)\n" ]
[ 33, 7, 1 ]
[]
[]
[ "exception", "python", "unit_testing" ]
stackoverflow_0000088325_exception_python_unit_testing.txt
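The answers above target the Python 2.5-era unittest, where reaching the raised exception object required a custom failUnlessRaises. As a side note for readers on later versions of unittest (Python 2.7 and 3.x), assertRaises can also be used as a context manager, which exposes the exception directly. A small sketch, reusing the MyClass example from the question:

# Sketch using the context-manager form of assertRaises (later unittest only).
import unittest

class MyClass:
    def __init__(self, foo):
        if foo != 1:
            raise ValueError("foo is not equal to 1!")

class TestFoo(unittest.TestCase):
    def test_insufficient_args(self):
        with self.assertRaises(ValueError) as ctx:
            MyClass(0)
        # The raised exception object is available on the context manager.
        self.assertEqual(str(ctx.exception), "foo is not equal to 1!")

if __name__ == "__main__":
    unittest.main()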
Q: Building a Table Dependency Graph With A Recursive Query I am trying to build a dependency graph of tables based on the foreign keys between them. This graph needs to start with an arbitrary table name as its root. I could, given a table name look up the tables that reference it using the all_constraints view, then look up the tables that reference them, and so on, but this would be horrible inefficient. I wrote a recursive query that does this for all tables, but when I add: START WITH Table_Name=:tablename It doesn't return the entire tree. A: select parent, child, level from ( select parent_table.table_name parent, child_table.table_name child from user_tables parent_table, user_constraints parent_constraint, user_constraints child_constraint, user_tables child_table where parent_table.table_name = parent_constraint.table_name and parent_constraint.constraint_type IN( 'P', 'U' ) and child_constraint.r_constraint_name = parent_constraint.constraint_name and child_constraint.constraint_type = 'R' and child_table.table_name = child_constraint.table_name and child_table.table_name != parent_table.table_name ) start with parent = 'DEPT' connect by prior child = parent should work (replace the table name, of course) assuming that everything is in the same schema. Use the DBA_ versions of the data dictionary tables and conditions for the OWNER and R_OWNER columns if you need to handle cross-schema dependencies. On further reflection, this does not account for self-referential constraints (i.e. a constraint on the EMP table that the MGR column references the EMPNO column) either, so you'd have to modify the code to handle that case if you need to deal with self-referential constraints. For testing purposes, I added a few new tables to the SCOTT schema that also reference the DEPT table (including a grandchild dependency) SQL> create table dept_child2 ( 2 deptno number references dept( deptno ) 3 ); Table created. SQL> create table dept_child3 ( 2 dept_child3_no number primary key, 3 deptno number references dept( deptno ) 4 ); Table created. SQL> create table dept_grandchild ( 2 dept_child3_no number references dept_child3( dept_child3_no ) 3 ); Table created. and verified that the query returned the expected output SQL> ed Wrote file afiedt.buf 1 select parent, child, level from ( 2 select parent_table.table_name parent, child_table.table_name child 3 from user_tables parent_table, 4 user_constraints parent_constraint, 5 user_constraints child_constraint, 6 user_tables child_table 7 where parent_table.table_name = parent_constraint.table_name 8 and parent_constraint.constraint_type IN( 'P', 'U' ) 9 and child_constraint.r_constraint_name = parent_constraint.constraint_name 10 and child_constraint.constraint_type = 'R' 11 and child_table.table_name = child_constraint.table_name 12 and child_table.table_name != parent_table.table_name 13 ) 14 start with parent = 'DEPT' 15* connect by prior child = parent SQL> / PARENT CHILD LEVEL ------------------------------ ------------------------------ ---------- DEPT DEPT_CHILD3 1 DEPT_CHILD3 DEPT_GRANDCHILD 2 DEPT DEPT_CHILD2 1 DEPT EMP 1 A: Simplest way to do this is to copy all the FK info into a simple, 2-column (parent,child) table, and then use the following algorithm: while (rows left in that table) list = rows where table name exists in child but not in parent print list remove list from rows that's all. Basically, you first print and remove all the nodes that don't depend on anything. 
After that is done, some other nodes become free and you can repeat the process. P.S. Make sure you don't insert self-referencing tables in the initial list (child=parent)
Building a Table Dependency Graph With A Recursive Query
I am trying to build a dependency graph of tables based on the foreign keys between them. This graph needs to start with an arbitrary table name as its root. I could, given a table name, look up the tables that reference it using the all_constraints view, then look up the tables that reference them, and so on, but this would be horribly inefficient. I wrote a recursive query that does this for all tables, but when I add: START WITH Table_Name=:tablename It doesn't return the entire tree.
[ " select parent, child, level from (\nselect parent_table.table_name parent, child_table.table_name child\n from user_tables parent_table,\n user_constraints parent_constraint,\n user_constraints child_constraint,\n user_tables child_table\nwhere parent_table.table_name = parent_constraint.table_name\n and parent_constraint.constraint_type IN( 'P', 'U' )\n and child_constraint.r_constraint_name = parent_constraint.constraint_name\n and child_constraint.constraint_type = 'R'\n and child_table.table_name = child_constraint.table_name\n and child_table.table_name != parent_table.table_name\n)\nstart with parent = 'DEPT'\nconnect by prior child = parent\n\nshould work (replace the table name, of course) assuming that everything is in the same schema. Use the DBA_ versions of the data dictionary tables and conditions for the OWNER and R_OWNER columns if you need to handle cross-schema dependencies. On further reflection, this does not account for self-referential constraints (i.e. a constraint on the EMP table that the MGR column references the EMPNO column) either, so you'd have to modify the code to handle that case if you need to deal with self-referential constraints.\nFor testing purposes, I added a few new tables to the SCOTT schema that also reference the DEPT table (including a grandchild dependency)\nSQL> create table dept_child2 (\n 2 deptno number references dept( deptno )\n 3 );\n\nTable created.\n\nSQL> create table dept_child3 (\n 2 dept_child3_no number primary key,\n 3 deptno number references dept( deptno )\n 4 );\n\nTable created.\n\nSQL> create table dept_grandchild (\n 2 dept_child3_no number references dept_child3( dept_child3_no )\n 3 );\n\nTable created.\n\nand verified that the query returned the expected output\nSQL> ed\nWrote file afiedt.buf\n\n 1 select parent, child, level from (\n 2 select parent_table.table_name parent, child_table.table_name child\n 3 from user_tables parent_table,\n 4 user_constraints parent_constraint,\n 5 user_constraints child_constraint,\n 6 user_tables child_table\n 7 where parent_table.table_name = parent_constraint.table_name\n 8 and parent_constraint.constraint_type IN( 'P', 'U' )\n 9 and child_constraint.r_constraint_name = parent_constraint.constraint_name\n 10 and child_constraint.constraint_type = 'R'\n 11 and child_table.table_name = child_constraint.table_name\n 12 and child_table.table_name != parent_table.table_name\n 13 )\n 14 start with parent = 'DEPT'\n 15* connect by prior child = parent\nSQL> /\n\nPARENT CHILD LEVEL\n------------------------------ ------------------------------ ----------\nDEPT DEPT_CHILD3 1\nDEPT_CHILD3 DEPT_GRANDCHILD 2\nDEPT DEPT_CHILD2 1\nDEPT EMP 1\n\n", "Simplest way to do this is to copy all the FK info into a simple, 2-column (parent,child) table, and then use the following algorithm:\nwhile (rows left in that table)\n list = rows where table name exists in child but not in parent\n print list\n remove list from rows\n\nthat's all. Basically, you first print and remove all the nodes that don't depend on anything. After that being done, some other nodes will get free and you can repeat process.\nP.S. Make sure you don't insert self-referencing tables in the initial list (child=parent)\n" ]
[ 9, 2 ]
[]
[]
[ "oracle", "recursion", "recursive_query", "sql" ]
stackoverflow_0000087877_oracle_recursion_recursive_query_sql.txt
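The second answer in the record above sketches its approach only in pseudocode ("print the rows whose table appears as a child but never as a parent, then remove them and repeat"). A rough Python rendering of that algorithm is shown below; the (parent, child) edge list is hypothetical sample data mirroring the SCOTT-schema example, not pulled from a real data dictionary.

# Peel-off ordering of tables from a (parent, child) foreign-key edge list.
edges = [
    ("DEPT", "EMP"),
    ("DEPT", "DEPT_CHILD2"),
    ("DEPT", "DEPT_CHILD3"),
    ("DEPT_CHILD3", "DEPT_GRANDCHILD"),
]

def print_dependency_order(edges):
    # Drop self-referencing constraints up front, as the answer advises.
    rows = [(p, c) for p, c in edges if p != c]
    while rows:
        parents = {p for p, _ in rows}
        children = {c for _, c in rows}
        # Tables that appear as a child but never as a parent have nothing
        # depending on them, so they can be emitted and removed.
        free = children - parents
        if not free:
            raise ValueError("cycle in foreign keys: %r" % rows)
        print(sorted(free))
        rows = [(p, c) for p, c in rows if c not in free]
    # Note: tables that only ever appear as parents (the roots) never show
    # up in the child column, so print them separately if you need them.

print_dependency_order(edges)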
Q: Is there an elegant way to compare a checkbox and a textbox using ASP.NET validators? I have an Asp.Net repeater, which contains a textbox and a checkbox. I need to add client-side validation that verifies that when the checkbox is checked, the textbox can only accept a value of zero or blank. I would like to use one or more of Asp.Net's validator controls to accomplish this, to provide a consistent display for client side errors (server-side errors are handled by another subsystem). The Asp:CompareValidator doesn't seem to be flexible enough to perform this kind of complex comparison, so I'm left looking at the Asp:CustomValidator. The problem I'm running into is that there doesn't seem to be any way to pass custom information into the validation function. This is an issue because the ClientIds of the checkbox and the textbox are unknown to me at runtime (as they're part of a Repeater). So... My options seem to be: Pass the textbox and checkbox to the CustomValidator somehow (doesn't seem to be possible). Find the TextBox through JavaScript based on the arguments passed in by the CustomValidator. Is this even possible, what with the ClientId being ambiguous? Forget validation entirely, and emit custom JavaScript (allowing me to pass both ClientIds to a custom function). Any ideas on what might be a better way of implementing this? A: I think the best way would be to inherit BaseValidator in a new class, and pass those IDs to your control as attributes. You should be able to resolve the IDs within your validator, without knowing the full client side ID that is generated at runtime. You should get the data validating on the server first, and on the client second. A: Can you not put the CustomValidator inside the repeater? If not, you can create it dynamically when the repeater is bound and user FindControl() protected MyDataBound(object sender, RepeaterItemEventArgs e) { (CheckBox)cb = (CheckBox)e.Item.FindControl("myCheckboxName"); (TextBox)tb = (TextBox)e.Item.FindControl("myTextBox"); } ...or something like that. I did the code off the top of my head.
Is there an elegant way to compare a checkbox and a textbox using ASP.NET validators?
I have an Asp.Net repeater, which contains a textbox and a checkbox. I need to add client-side validation that verifies that when the checkbox is checked, the textbox can only accept a value of zero or blank. I would like to use one or more of Asp.Net's validator controls to accomplish this, to provide a consistent display for client side errors (server-side errors are handled by another subsystem). The Asp:CompareValidator doesn't seem to be flexible enough to perform this kind of complex comparison, so I'm left looking at the Asp:CustomValidator. The problem I'm running into is that there doesn't seem to be any way to pass custom information into the validation function. This is an issue because the ClientIds of the checkbox and the textbox are unknown to me at runtime (as they're part of a Repeater). So... My options seem to be: Pass the textbox and checkbox to the CustomValidator somehow (doesn't seem to be possible). Find the TextBox through JavaScript based on the arguments passed in by the CustomValidator. Is this even possible, what with the ClientId being ambiguous? Forget validation entirely, and emit custom JavaScript (allowing me to pass both ClientIds to a custom function). Any ideas on what might be a better way of implementing this?
[ "I think the best way would be to inherit BaseValidator in a new class, and pass those IDs to your control as attributes. You should be able to resolve the IDs within your validator, without knowing the full client side ID that is generated at runtime. You should get the data validating on the server first, and on the client second.\n", "Can you not put the CustomValidator inside the repeater? If not, you can create it dynamically when the repeater is bound and user FindControl()\nprotected MyDataBound(object sender, RepeaterItemEventArgs e) {\n (CheckBox)cb = (CheckBox)e.Item.FindControl(\"myCheckboxName\");\n (TextBox)tb = (TextBox)e.Item.FindControl(\"myTextBox\");\n}\n\n...or something like that. I did the code off the top of my head.\n" ]
[ 2, 0 ]
[]
[]
[ ".net", "asp.net", "c#", "validation" ]
stackoverflow_0000088361_.net_asp.net_c#_validation.txt
Q: Importing Access data into SQL Server using ColdFusion This should be simple. I'm trying to import data from Access into SQL Server. I don't have direct access to the SQL Server database - it's on GoDaddy and they only allow web access. So I can't use the Management Studio tools, or other third-party Access upsizing programs that require remote access to the database. I wrote a query on the Access database and I'm trying to loop through and insert each record into the corresponding SQL Server table. But it keeps erroring out. I'm fairly certain it's because of the HTML and God knows what other weird characters are in one of the Access text fields. I tried using CFQUERYPARAM but that doesn't seem to help either. Any ideas would be helpful. Thanks. A: Try using the GoDaddy SQL backup/restore tool to get a local copy of the database. At that point, use the SQL Server DTS tool to import the data. It's an easy to use, drag-and-drop graphical interface. A: What error(s) get(s) thrown? What odd characters are you using? Are you referring to HTML markup, or extended (eg UTF-8) characters? If possible, turn on Robust Error Reporting. If the problem is the page timing out, you can either increase the timeout using the Admin, using the cfsetting tag, or rewrite your script to run a certain number of lines, and then forward to itself at the next start point. A: You should be able to execute saved DTS packages in MS SQL Server from the application server's command line. Since this is the case, you can use <cfexecute> to issue a request to DTSRUNNUI.EXE. (See example) This is of course assuming you are on a server where the command is available.
Importing Access data into SQL Server using ColdFusion
This should be simple. I'm trying to import data from Access into SQL Server. I don't have direct access to the SQL Server database - it's on GoDaddy and they only allow web access. So I can't use the Management Studio tools, or other third-party Access upsizing programs that require remote access to the database. I wrote a query on the Access database and I'm trying to loop through and insert each record into the corresponding SQL Server table. But it keeps erroring out. I'm fairly certain it's because of the HTML and God knows what other weird characters are in one of the Access text fields. I tried using CFQUERYPARAM but that doesn't seem to help either. Any ideas would be helpful. Thanks.
[ "Try using the GoDaddy SQL backup/restore tool to get a local copy of the database. At that point, use the SQL Server DTS tool to import the data. It's an easy to use, drag-and-drop graphical interface.\n", "What error(s) get(s) thrown? What odd characters are you using? Are you referring to HTML markup, or extended (eg UTF-8) characters?\nIf possible, turn on Robust Error Reporting.\nIf the problem is the page timing out, you can either increase the timeout using the Admin, using the cfsetting tag, or rewrite your script to run a certain number of lines, and then forward to itself at the next start point.\n", "You should be able to execute saved DTS packages in MS SQL Server from the application server's command line. Since this is the case, you can use <cfexecute> to issue a request to DTSRUNNUI.EXE. (See example) This is of course assuming you are on a server where the command is available.\n" ]
[ 1, 1, 0 ]
[ "It's never advisable to loop through records when a SQL Update can be used.\nIt's not clear from your question what database interface layer you are using, but it is possible with the right interfaces to insert data from a source outside a database if the interface being used supports both types of databases. This can be done in the FROM clause of your SQL statement by specifying not just the table name, but the connect string for the database. Assuming that your web host has ODBC drivers for Jet data (you're not actually using Access, which is the app development part -- you're only using the Jet database engine), the connect string should be sufficient.\nEDIT: If you use the Jet database engine to do this, you should be able to specify the source table something like this (where tblSQLServer is a table in your Jet MDB that is linked via ODBC to your SQL Server):\nINSERT INTO tblSQLServer (ID, OtherField ) \nSELECT ID, OtherField\nFROM [c:\\MyDBs\\Access.mdb].tblSQLServer \n\nThe key point is that you are leveraging the Jet db engine here to do all the heavy lifting for you.\n" ]
[ -1 ]
[ "coldfusion", "ms_access", "sql_server" ]
stackoverflow_0000065940_coldfusion_ms_access_sql_server.txt
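The answers above boil down to two ideas: get the data somewhere you can reach it, and never build INSERT statements by string concatenation when the text fields may contain HTML or other odd characters. As a language-neutral illustration of the parameterized-insert part (in Python/pyodbc rather than the ColdFusion of the question, and assuming you have a machine that can reach both databases), here is a small sketch; the driver strings, table and column names are assumptions for the example.

# Copy rows from an Access file into SQL Server with parameterized inserts.
import pyodbc

access = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\source.mdb"
)
sqlserver = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example.host;DATABASE=target_db;UID=user;PWD=secret"
)

rows = access.cursor().execute(
    "SELECT id, title, body_html FROM articles"
).fetchall()

cur = sqlserver.cursor()
# Parameter placeholders let the driver handle quoting and escaping,
# the same job cfqueryparam does in ColdFusion.
cur.executemany(
    "INSERT INTO dbo.articles (id, title, body_html) VALUES (?, ?, ?)",
    [tuple(r) for r in rows],
)
sqlserver.commit()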
Q: Getting windows/domain credentials in asp.net while allowing anonymous access in IIS I have an ASP.NET web application running in our datacenter in which we want the customer to log on with single sign-on. This would be very easy if we could use the IIS integrated security. However we can't do this. We don't have a trust to the domain controller of the customer. And we want the website to be available to the general internet. Only when people are connecting from within the client's network should they automatically be logged in. What we have is a list of domain accounts and a way to query the DC via LDAP in asp.net code. When anonymous access is allowed in IIS, IIS never challenges the browser for credentials, and thus our application never gets the user's credentials. Is there a way to force the browser into sending the credentials (and thus be able to use single sign-on) with IIS accepting anonymous requests? Update: I tried sending 401: unauthorized, www-authenticate: NTLM headers by myself. What happens next (as Fiddler tells me) is that IIS takes complete control and handles the complete chain of requests. As I understand from various sources, IIS takes the username and sends a challenge back to the browser. The browser returns with an encrypted response and IIS connects to the domain controller to authenticate the user with this response. However in my scenario IIS is in a different windows domain than the clients and has no way to authenticate the users. For that reason building a separate site with windows authentication enabled isn't going to work either. For now I have two options left which I'm researching: Creating a domain trust between our hosting domain and the client's domain (our IT department isn't too happy with this) Using an NTLM proxy to forward the IIS authentication requests to the client's domain controller (we have a VPN connection available to connect via LDAP) A: What you're asking for is called mixed mode authentication. I've recently used a two entry-point mechanism from Paul Glavich and it works perfectly. I guess it's the most elegant solution for this problem. A: Not sure that you'll easily get this to work. Unlike basic, where the 401 challenge happens in-band of the user request - such that the creds appear in the headers - NTLM handshakes are done on a separate port and then forced onto the thread context by unmanaged code. Have you tried pulling apart the ASP.NET NTLM module in VS2008 (or Reflector) to see what it does to extract the creds? Not really an answer - sorry...
Getting windows/domain credentials in asp.net while allowing anonymous access in IIS
I have an ASP.NET web application running in our datacenter in which we want the customer to log on with single sign-on. This would be very easy if we could use the IIS integrated security. However we can't do this. We don't have a trust to the domain controller of the customer. And we want the website to be available to the general internet. Only when people are connecting from within the client's network should they automatically be logged in. What we have is a list of domain accounts and a way to query the DC via LDAP in asp.net code. When anonymous access is allowed in IIS, IIS never challenges the browser for credentials, and thus our application never gets the user's credentials. Is there a way to force the browser into sending the credentials (and thus be able to use single sign-on) with IIS accepting anonymous requests? Update: I tried sending 401: unauthorized, www-authenticate: NTLM headers by myself. What happens next (as Fiddler tells me) is that IIS takes complete control and handles the complete chain of requests. As I understand from various sources, IIS takes the username and sends a challenge back to the browser. The browser returns with an encrypted response and IIS connects to the domain controller to authenticate the user with this response. However in my scenario IIS is in a different windows domain than the clients and has no way to authenticate the users. For that reason building a separate site with windows authentication enabled isn't going to work either. For now I have two options left which I'm researching: Creating a domain trust between our hosting domain and the client's domain (our IT department isn't too happy with this) Using an NTLM proxy to forward the IIS authentication requests to the client's domain controller (we have a VPN connection available to connect via LDAP)
[ "What you're asking for is called mixed mode authentication. I've recently used a two entry-point mechanism from Paul Glavich and it works perfectly. I guess it's the most elegant solution for this problem.\n", "Not sure that you'll easily get this to work. Unlike basic where the 401 challenge happens in-band of the user request - such that the creds appear in the headers, NTLM handshakes are done on a separate port - then forced onto the thread context by unmanaged code.\nYou tried pulling apart the ASP.NET NTLM module in VS2008 (or reflector) to see what it does to extract the creds?\nNot really an answer - sorry...\n" ]
[ 3, 0 ]
[ "This solution is about forms authentication, but it details the 401 issue.\n\nThe solution was simply to attach a\n handler to the Application's\n EndRequest event by putting the\n following in Global.asax: \n\nprotected void Application_EndRequest(object sender, EventArgs e) {\n if (Context.Items[\"Send401\"] != null)\n {\n Response.StatusCode = 401;\n Response.StatusDescription = \"Unauthorized\";\n } }\n\n\nThen, in order to trigger this code, all you have to do is put a \n\nContext.Items[\"Send401\"] = true;\n\nEdit:\nI've used this method with Anonymous and Integrated turned on to get the user's domain credentials. I'm not sure if it'll work in your situation, but I thought I was worth a shot.\n" ]
[ -1 ]
[ "asp.net", "c#", "iis", "single_sign_on" ]
stackoverflow_0000081347_asp.net_c#_iis_single_sign_on.txt
Q: Project in Ruby I've been coding alot of web-stuff all my life, rails lately. And i can always find a website to code, but i'm kind of bored with it. Been taking alot of courses of Java and C lately so i've become a bit interested in desktop application programming. Problem: I can't for the life of me think of a thing to code for desktop. I just can't think of anything i can code that isn't already out there for download. So what do i do? I need some project suggestions that i can set as a goal. A: I would say you should roam through github or some other open source site and find an existing young or old project that you can contribute to. Maybe there is something that is barely off the ground, or maybe there is a mature project that could use some improvement. A: I find to complete a project, it needs to be something I am passionate about. I feel you need to find your own project I'm afraid. There is always the Netflix Prize though! A: I would write a ray tracer. Oops, sorry... you're looking for an original idea. :) Ray tracers are still cool, though, and easy to get started on. Maybe you'll get an idea for a game while you're working on it. A: Visit shoooes.net for a UI toolkit that's easy and fun, and then the-shoebox.org to see the kinds of things people are doing with it. A: If you could make a Ruby ANSI (and xbin, and idf, and adf...) Editor, I would love you. Because that means you would have written ANSI parsing routines that I can hope you release to the open source community. ... but that is a selfish answer. Oh, and a cross-platform editor would be nice as well (although TundraDraw somewhat takes care of that).
Project in Ruby
I've been coding a lot of web stuff all my life, Rails lately. And I can always find a website to code, but I'm kind of bored with it. I've been taking a lot of courses in Java and C lately, so I've become a bit interested in desktop application programming. Problem: I can't for the life of me think of a thing to code for the desktop. I just can't think of anything I can code that isn't already out there for download. So what do I do? I need some project suggestions that I can set as a goal.
[ "I would say you should roam through github or some other open source site and find an existing young or old project that you can contribute to. Maybe there is something that is barely off the ground, or maybe there is a mature project that could use some improvement.\n", "I find to complete a project, it needs to be something I am passionate about. I feel you need to find your own project I'm afraid. \nThere is always the Netflix Prize though!\n", "I would write a ray tracer.\nOops, sorry... you're looking for an original idea. :) Ray tracers are still cool, though, and easy to get started on. Maybe you'll get an idea for a game while you're working on it.\n", "Visit shoooes.net for a UI toolkit that's easy and fun, and then the-shoebox.org to see the kinds of things people are doing with it.\n", "If you could make a Ruby ANSI (and xbin, and idf, and adf...) Editor, I would love you. Because that means you would have written ANSI parsing routines that I can hope you release to the open source community.\n... but that is a selfish answer. Oh, and a cross-platform editor would be nice as well (although TundraDraw somewhat takes care of that).\n" ]
[ 2, 1, 1, 1, 0 ]
[]
[]
[ "desktop_application", "ruby" ]
stackoverflow_0000088438_desktop_application_ruby.txt
Q: Get the current mouse coordinates I have an iMac, and I want to be able to turn off the monitor when I go to sleep. Alas, the iMac has no switch for this. I do not want to put the iMac into sleep mode; I want to write an "Expose"-like application or service so that when the mouse is put into the upper left-hand corner of my screen, the display will sleep. Likewise, if I move the mouse away, it comes back. Does anyone have experience with tracking mouse movements within the Windows and display APIs I'd need to look up? I just need some direction to get started. Cheers! Chris. I've been asked to clarify. Sorry if I'm confusing anyone. I'm running Windows Vista 32 via Boot Camp. I like that Mac OS X has a "hot corners" feature via Expose. I have noticed that besides power management, which runs on a time metric, there is no way to sleep the display at will in Vista. I would like to write my own tool for this. I might be a glutton for punishment, but I'm a coder, and it's a good excuse to learn something new. A: In Leopard, you can just go to "System Preferences" and "Desktop & Screensaver". Click the Screensaver tab, click "Hot Corners", selected the corner you want to change, then chose "Sleep display". Does that not work? A: If it's an old CRT iMac then you can't switch off the screen without switching the computer off - the convection from the CRT is used to cool the processor! A: Not really the answer you seem to be looking for, but cant you do this via the power save option and/or the screen saver - can it be set to nothing. A: Can you not use the monitor power button? A: Thanks for the clarification, Chris. I would reiterate: just use a pre-existing solution like this: http://www.southbaypc.com/HotCorners/ (untested anything that does the same thing would work). If it allows you to run your pre-selected screensaver, then all you need to do is ... ... make an exe that does what you want (sleep the screen) and then rename it whatever.scr http://computer.howstuffworks.com/screensaver.htm/printable Do you have this working yet? Once you get that working (and you can enjoy a Windows version of your desired OS X hot corners functionality) then worry about how hot corners are implemented. Your Win32 API question is still a good question but like you said you sound like you want to build it yourself. If that is the case, I would post a new question "Hot corners in Windows Win32 API Low level mouse tracking" or something to that effect and just ask: "how do these Hot Corners programs detect hot corner mouse-over events?" By the way my brother used the low level API to move the mouse cursor and simulate clicks so I know what you're asking is probably possible. It's just that your REAL question seems barried in all this discussion.
Get the current mouse coordinates
I have an iMac, and I want to be able to turn off the monitor when I go to sleep. Alas, the iMac has no switch for this. I do not want to put the iMac into sleep mode; I want to write an "Expose"-like application or service so that when the mouse is put into the upper left-hand corner of my screen, the display will sleep. Likewise, if I move the mouse away, it comes back. Does anyone have experience with tracking mouse movements within the Windows and display APIs I'd need to look up? I just need some direction to get started. Cheers! Chris. I've been asked to clarify. Sorry if I'm confusing anyone. I'm running Windows Vista 32 via Boot Camp. I like that Mac OS X has a "hot corners" feature via Expose. I have noticed that besides power management, which runs on a time metric, there is no way to sleep the display at will in Vista. I would like to write my own tool for this. I might be a glutton for punishment, but I'm a coder, and it's a good excuse to learn something new.
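As a starting point for the Win32 side of the question (this sketch is not from the original thread; the corner size and polling interval are arbitrary), the cursor can be polled with GetCursorPos and the display put to sleep with the well-known WM_SYSCOMMAND / SC_MONITORPOWER message, shown here via P/Invoke from C#:

using System;
using System.Runtime.InteropServices;
using System.Threading;

// Illustrative "hot corner" sketch: when the pointer sits in the top-left
// corner, broadcast SC_MONITORPOWER to turn the display off; moving the
// mouse wakes it again.
static class HotCornerSketch
{
    [StructLayout(LayoutKind.Sequential)]
    struct POINT { public int X; public int Y; }

    [DllImport("user32.dll")]
    static extern bool GetCursorPos(out POINT lpPoint);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

    const int WM_SYSCOMMAND = 0x0112;
    const int SC_MONITORPOWER = 0xF170;
    static readonly IntPtr HWND_BROADCAST = new IntPtr(0xFFFF);

    static void Main()
    {
        const int cornerSize = 5;                  // pixels; arbitrary threshold
        while (true)
        {
            POINT p;
            if (GetCursorPos(out p) && p.X <= cornerSize && p.Y <= cornerSize)
            {
                // lParam 2 = power off the display (-1 = on, 1 = low power).
                SendMessage(HWND_BROADCAST, WM_SYSCOMMAND,
                            (IntPtr)SC_MONITORPOWER, (IntPtr)2);
                Thread.Sleep(2000);                // don't re-send immediately
            }
            Thread.Sleep(200);                     // poll a few times per second
        }
    }
}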
[ "In Leopard, you can just go to \"System Preferences\" and \"Desktop & Screensaver\". Click the Screensaver tab, click \"Hot Corners\", selected the corner you want to change, then chose \"Sleep display\". Does that not work?\n", "If it's an old CRT iMac then you can't switch off the screen without switching the computer off - the convection from the CRT is used to cool the processor!\n", "Not really the answer you seem to be looking for, but cant you do this via the power save option and/or the screen saver - can it be set to nothing.\n", "Can you not use the monitor power button?\n", "Thanks for the clarification, Chris. I would reiterate:\n\njust use a pre-existing solution like this: http://www.southbaypc.com/HotCorners/ (untested anything that does the same thing would work). If it allows you to run your pre-selected screensaver, then all you need to do is ...\n... make an exe that does what you want (sleep the screen) and then rename it whatever.scr http://computer.howstuffworks.com/screensaver.htm/printable Do you have this working yet?\nOnce you get that working (and you can enjoy a Windows version of your desired OS X hot corners functionality) then worry about how hot corners are implemented. Your Win32 API question is still a good question but like you said you sound like you want to build it yourself. If that is the case, I would post a new question \"Hot corners in Windows Win32 API Low level mouse tracking\" or something to that effect and just ask: \"how do these Hot Corners programs detect hot corner mouse-over events?\" By the way my brother used the low level API to move the mouse cursor and simulate clicks so I know what you're asking is probably possible. It's just that your REAL question seems barried in all this discussion.\n\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "c++", "lcd", "mouse", "winapi" ]
stackoverflow_0000069926_c++_lcd_mouse_winapi.txt
Q: JavaScript and why capital letters sometimes work and sometimes don't In Notepad++, I was writing a JavaScript file and something didn't work: an alert had to be shown when a button was clicked, but it wasn't working. I had used the auto-complete plugin provided with Notepad++, which presented me with onClick. When I changed the capital C to a small c, it did work. So first of all, when looking at the functions in the auto-completion, I noticed a lot of functions using capitals. But when you change getElementById to getelementbyid, you also get an error, and to make matters worse, my handbook from school writes all the stuff with capital letters but the solutions are all done in small letters. So what is it with JavaScript and its selective nature towards which functions can have capital letters in them and which can't? A: Javascript is ALWAYS case-sensitive, html is not. It sounds as thought you are talking about whether html attributes (e.g. onclick) are or are not case-sensitive. The answer is that the attributes are not case sensitive, but the way that we access them through the DOM is. So, you can do this: <div id='divYo' onClick="alert('yo!');">Say Yo</div> // Upper-case 'C' or: <div id='divYo' onclick="alert('yo!');">Say Yo</div> // Lower-case 'C' but through the DOM you must use the correct case. So this works: getElementById('divYo').onclick = function() { alert('yo!'); }; // Lower-case 'C' but you cannot do this: getElementById('divYo').onClick = function() { alert('yo!'); }; // Upper-case 'C' EDIT: CMS makes a great point that most DOM methods and properties are in camelCase. The one exception that comes to mind are event handler properties and these are generally accepted to be the wrong way to attach to events anyway. Prefer using addEventListener as in: document.getElementById('divYo').addEventListener('click', modifyText, false); A: A few objects is IE aren't always case-sensitive, including some/most/all ActiveX -- why both XHR.onReadyStateChange and XHR.onreadystatechange would work fine in IE5 or IE6, but only the latter would work with the native XMLHttpRequest object in IE7, FF, etc. But, a quick reference for "standard" API casing: UPPERCASE - Constants (generally symbolic, since const isn't globally supported) Capitalized - Classes/Object functions lowercase - Events camelCase - everything else No 100% guarantees. But, majority-wise, this is accurate. A: JavaScript API methods are almost all called with lowerCamelCase names, and JavaScript is case-sensitive A: Javascript should always be case sensitive, but I've seen cases in Internet Explorer where it tolerates all upper case for some function names but not others. I think it is limited to functions that also exist in Visual Basic, as there is some odd inbreeding between the interpreters. Clearly this behavior should be avoided, unless of course your intention is to make code that only runs in one browser :)
JavaScript and why capital letters sometimes work and sometimes don't
In Notepad++, I was writing a JavaScript file and something didn't work: an alert had to be shown when a button was clicked, but it wasn't working. I had used the auto-complete plugin provided with Notepad++, which presented me with onClick. When I changed the capital C to a small c, it did work. So first of all, when looking at the functions in the auto-completion, I noticed a lot of functions using capitals. But when you change getElementById to getelementbyid, you also get an error, and to make matters worse, my handbook from school writes all the stuff with capital letters but the solutions are all done in small letters. So what is it with JavaScript and its selective nature towards which functions can have capital letters in them and which can't?
[ "Javascript is ALWAYS case-sensitive, html is not.\nIt sounds as thought you are talking about whether html attributes (e.g. onclick) are or are not case-sensitive. The answer is that the attributes are not case sensitive, but the way that we access them through the DOM is. \nSo, you can do this:\n<div id='divYo' onClick=\"alert('yo!');\">Say Yo</div> // Upper-case 'C'\n\nor:\n<div id='divYo' onclick=\"alert('yo!');\">Say Yo</div> // Lower-case 'C'\n\nbut through the DOM you must use the correct case. So this works:\ngetElementById('divYo').onclick = function() { alert('yo!'); }; // Lower-case 'C'\n\nbut you cannot do this:\ngetElementById('divYo').onClick = function() { alert('yo!'); }; // Upper-case 'C'\n\nEDIT: CMS makes a great point that most DOM methods and properties are in camelCase. The one exception that comes to mind are event handler properties and these are generally accepted to be the wrong way to attach to events anyway. Prefer using addEventListener as in:\ndocument.getElementById('divYo').addEventListener('click', modifyText, false);\n\n", "A few objects is IE aren't always case-sensitive, including some/most/all ActiveX -- why both XHR.onReadyStateChange and XHR.onreadystatechange would work fine in IE5 or IE6, but only the latter would work with the native XMLHttpRequest object in IE7, FF, etc.\nBut, a quick reference for \"standard\" API casing:\n\nUPPERCASE - Constants (generally symbolic, since const isn't globally supported)\nCapitalized - Classes/Object functions\nlowercase - Events\ncamelCase - everything else\n\nNo 100% guarantees. But, majority-wise, this is accurate.\n", "JavaScript API methods are almost all called with lowerCamelCase names, and JavaScript is case-sensitive\n", "Javascript should always be case sensitive, but I've seen cases in Internet Explorer where it tolerates all upper case for some function names but not others. I think it is limited to functions that also exist in Visual Basic, as there is some odd inbreeding between the interpreters. Clearly this behavior should be avoided, unless of course your intention is to make code that only runs in one browser :)\n" ]
[ 30, 6, 1, 1 ]
[]
[]
[ "case_sensitive", "dom", "html", "javascript" ]
stackoverflow_0000087934_case_sensitive_dom_html_javascript.txt