content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name
---|---|---|---|---|---|---|---|---
Q:
Building Flex projects in ant/nant
We have a recurring problem at my company with build breaks in our Flex projects. The problem primarily occurs because the build that the developers do on their local machines is fundamentally different from the build that occurs on the build machine. The devs are building the projects using FlexBuilder/Eclipse and the build machine is using the command line compilers. Inevitably, the {projectname}-config.xml and/or the batch file that runs the build get out of sync with the project files used by Eclipse, so the build succeeds on the dev's machine, but fails on the build machine.
We started down the path of writing a utility program to convert FlexBuilder's project files into a {projectname}-config.xml file, but it's a) undocumented and b) a horrible hack.
I've looked into the -dump-config switch to get the config files, but this has a couple of problems: 1) the generated config file has absolute paths, which don't work in our environment (some developers use Macs, some Windows machines), and 2) it only works right when run from the IDE, so it can't be built into the build process.
Tomorrow, we are going to discuss a couple of options, neither of which I'm terribly fond of:
a) Add a post check-in event to Subversion to remove these absolute references, or
b) add a pre-build process that removes the absolute reference.
I can't believe that we are the first group of developers to run across this issue, but I can't find any good solutions on Google. How have other groups dealt with this problem?
A:
I found that one of the undocumented requirements for using ant with FlexBuilder was to have the variable FLEX_HOME set within your ant script. Typically, within build.xml you would have the following:
<!-- Module properties -->
<property environment="env"/>
<property name="build.dir" value="build"/>
<property name="swf.name" value="MyProjectSwf"/>
<property name="root.mxml" value="Main.mxml"/>
<property name="locale" value="en_US"/>
<property name="FLEX_HOME" value="${env.FLEX_HOME}"/>
This may seem like a hassle but it is a far more reasonable approach to obtaining consistency across platforms and environments if you are using multiple platforms for your developers.
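As an illustration, those properties can feed the Flex SDK's own ant tasks directly; a minimal compile target might look like the following (the taskdef jar location and flex-config path are assumptions based on the standard SDK layout, so adjust them for your install):
<!-- Load the Flex ant tasks that ship with the SDK -->
<taskdef resource="flexTasks.tasks" classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/>
<target name="compile">
    <mkdir dir="${build.dir}"/>
    <!-- Compile against the SDK's own config instead of machine-specific absolute paths -->
    <mxmlc file="${root.mxml}" output="${build.dir}/${swf.name}.swf">
        <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
    </mxmlc>
</target>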
HTH
A:
While not a solution to your specific problem, a workaround is to use a continuous integration server.
Using something like Cruise Control you can have an automated build kick off every time someone submits something to source control. Then if the build fails for any reason (including environment inconsistencies) it's up to the developer who broke it to fix it. You can configure it to send emails on failure/success in various ways.
|
Building Flex projects in ant/nant
|
We have a recurring problem at my company with build breaks in our Flex projects. The problem primarily occurs because the build that the developers do on their local machines is fundamentally different from the build that occurs on the build machine. The devs are building the projects using FlexBuilder/Eclipse and the build machine is using the command line compilers. Inevitably, the {projectname}-config.xml and/or the batch file that runs the build get out of sync with the project files used by Eclipse, so the build succeeds on the dev's machine, but fails on the build machine.
We started down the path of writing a utility program to convert FlexBuilder's project files into a {projectname}-config.xml file, but it's a) undocumented and b) a horrible hack.
I've looked into the -dump-config switch to get the config files, but this has a couple of problems: 1) the generated config file has absolute paths, which don't work in our environment (some developers use Macs, some Windows machines), and 2) it only works right when run from the IDE, so it can't be built into the build process.
Tomorrow, we are going to discuss a couple of options, neither of which I'm terribly fond of:
a) Add a post check-in event to Subversion to remove these absolute references, or
b) add a pre-build process that removes the absolute reference.
I can't believe that we are the first group of developers to run across this issue, but I can't find any good solutions on Google. How have other groups dealt with this problem?
|
[
"I found that one of the undocumented requirements for using ant with Flexbuilder was to have the variable FLEX_HOME set within your ant script. Typically within build.xml have the following:\n<!– Module properties –>\n<property environment=”env”/>\n<property name=”build.dir” value=”build”/>\n<property name=”swf.name” value=”MyProjectSwf”/>\n<property name=”root.mxml” value=”Main.mxml”/>\n<property name=”locale” value=”en_US”/>\n<property name=”FLEX_HOME” value=”${env.FLEX_HOME}”/>\n\nThis may seem like a hassle but it is a far more reasonable approach to obtaining consistency across platforms and environments if you are using multiple platforms for your developers.\nHTH\n",
"While not a solution to your specific problem, a workaround is to use a continuous integration server.\nUsing something like Cruise Control you can have an automated build kick off every time someone submits something to source control. Then if the build fails for any reason (including environment inconsistencies) it's up to the developer who broke it to fix it. You can configure it to send emails on failure/success in various ways.\n"
] |
[
1,
1
] |
[] |
[] |
[
"apache_flex",
"build_process",
"flexbuilder",
"nant"
] |
stackoverflow_0000058000_apache_flex_build_process_flexbuilder_nant.txt
|
Q:
How to export data from SQL Server 2005 to MySQL
I've been banging my head against SQL Server 2005 trying to get a lot of data out. I've been given a database with nearly 300 tables in it and I need to turn this into a MySQL database. My first call was to use bcp but unfortunately it doesn't produce valid CSV - strings aren't encapsulated, so you can't deal with any row that has a string with a comma in it (or whatever you use as a delimiter) and I would still have to hand write all of the create table statements, as obviously CSV doesn't tell you anything about the data types.
What would be better is if there was some tool that could connect to both SQL Server and MySQL, then do a copy. You lose views, stored procedures, triggers, etc., but it isn't hard to copy a table that only uses base types from one DB to another... is it?
Does anybody know of such a tool? I don't mind how many assumptions it makes or what simplifications occur, as long as it supports integer, float, datetime and string. I have to do a lot of pruning, normalising, etc. anyway so I don't care about keeping keys, relationships or anything like that, but I need the initial set of data in fast!
A:
The best way that I have found is the MySQL Migration Toolkit provided by MySQL. I have used it successfully for some large migration projects.
A:
SQL Server 2005 "Standard", "Developer" and "Enterprise" editions have SSIS, which replaced DTS from SQL Server 2000. SSIS has a built-in connection to its own DB, and you can find a connection that someone else has written for MySQL. Here is one example. Once you have your connections, you should be able to create an SSIS package that moves data between the two.
I didn't have to move data from SQL Server to MySQL, but I imagine that once the MySQL connection is installed, it works the same as moving data between two SQL Server DBs, which is pretty straightforward.
A:
Using MSSQL Management Studio I've transitioned tables with the MySQL OLE DB. Right-click on your database and go to "Tasks->Export Data"; from there you can specify an MSSQL OLE DB source and a MySQL OLE DB destination, and create the column mappings between the two data sources.
You'll most likely want to set up the database and tables in advance on the MySQL destination (the export will want to create the tables automatically, but this often results in failure). You can quickly create the tables in MySQL using "Tasks->Generate Scripts" by right-clicking on the database. Once your creation scripts are generated, you'll need to step through and search/replace the MSSQL-specific keywords and types with their MySQL equivalents.
Of course you could also backup the database like normal and find a utility which will restore the MSSQL backup on MYSQL. I'm not sure if one exists however.
A:
Rolling your own PHP solution will certainly work though I'm not sure if there is a good way to automatically duplicate the schema from one DB to the other (maybe this was your question).
If you are just copying data, and/or you need custom code anyway to convert between modified schemas between the two DB's, I would recommend using PHP 5.2+ and the PDO libraries. You'll be able to connect using PDO ODBC (and use MSSQL drivers). I had a lot of problems getting large text fields and multi-byte characters from MSSQL into PHP using other libraries.
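A rough sketch of that approach follows; the DSN strings, credentials, and table/column names are placeholders rather than anything from the original post:
<?php
// Source: SQL Server via the ODBC driver; destination: MySQL
$mssql = new PDO('odbc:Driver={SQL Server};Server=MSSQL_HOST;Database=source_db', 'user', 'pass');
$mysql = new PDO('mysql:host=localhost;dbname=target_db', 'user', 'pass');

$rows = $mssql->query('SELECT id, name, created_at FROM some_table');
$insert = $mysql->prepare('INSERT INTO some_table (id, name, created_at) VALUES (?, ?, ?)');

foreach ($rows as $row) {
    // Each $row comes back as an array; re-insert it on the MySQL side
    $insert->execute(array($row['id'], $row['name'], $row['created_at']));
}
?>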
A:
Another tool to try would be the SQLMaestro suite. It is a little tricky nailing down the precise tool, but they have a variety of tools, both free and for purchase that handle a wide variety of tasks for multiple database platforms. I'd suggest trying the Data Wizard tool first for MySQL, since I believe that will have the proper "import" tool you need.
|
How to export data from SQL Server 2005 to MySQL
|
I've been banging my head against SQL Server 2005 trying to get a lot of data out. I've been given a database with nearly 300 tables in it and I need to turn this into a MySQL database. My first call was to use bcp but unfortunately it doesn't produce valid CSV - strings aren't encapsulated, so you can't deal with any row that has a string with a comma in it (or whatever you use as a delimiter) and I would still have to hand write all of the create table statements, as obviously CSV doesn't tell you anything about the data types.
What would be better is if there was some tool that could connect to both SQL Server and MySQL, then do a copy. You lose views, stored procedures, triggers, etc., but it isn't hard to copy a table that only uses base types from one DB to another... is it?
Does anybody know of such a tool? I don't mind how many assumptions it makes or what simplifications occur, as long as it supports integer, float, datetime and string. I have to do a lot of pruning, normalising, etc. anyway so I don't care about keeping keys, relationships or anything like that, but I need the initial set of data in fast!
|
[
"The best way that I have found is the MySQL Migration Toolkit provided by MySQL. I have used it successfully for some large migration projects.\n",
"SQL Server 2005 \"Standard\", \"Developer\" and \"Enterprise\" editions have SSIS, which replaced DTS from SQL server 2000. SSIS has a built-in connection to its own DB, and you can find a connection that someone else has written for MySQL. Here is one example. Once you have your connections, you should be able to create an SSIS package that moves data between the two.\nI ddin't have to move data from SQLServer to MySQL, but I imagine that once the MySQL connection is installed, it works the same as moving data between two SQLServer DBs, which is pretty straight forward.\n",
"Using MSSQL Management Studio i've transitioned tables with the MySQL OLE DB. Right click on your database and go to \"Tasks->Export Data\" from there you can specify a MsSQL OLE DB source, the MySQL OLE DB source and create the column mappings between the two data sources. \nYou'll most likely want to setup the database and tables in advance on the MySQL destination (the export will want to create the tables automatically, but this often results in failure). You can quickly create the tables in MySQL using the \"Tasks->Generate Scripts\" by right clicking on the database. Once your creation scripts are generated you'll need to step through and search/replace for keywords and types that exist in MSSQL to MYSQL. \nOf course you could also backup the database like normal and find a utility which will restore the MSSQL backup on MYSQL. I'm not sure if one exists however.\n",
"Rolling your own PHP solution will certainly work though I'm not sure if there is a good way to automatically duplicate the schema from one DB to the other (maybe this was your question).\nIf you are just copying data, and/or you need custom code anyway to convert between modified schemas between the two DB's, I would recommend using PHP 5.2+ and the PDO libraries. You'll be able to connect using PDO ODBC (and use MSSQL drivers). I had a lot of problems getting large text fields and multi-byte characters from MSSQL into PHP using other libraries.\n",
"Another tool to try would be the SQLMaestro suite. It is a little tricky nailing down the precise tool, but they have a variety of tools, both free and for purchase that handle a wide variety of tasks for multiple database platforms. I'd suggest trying the Data Wizard tool first for MySQL, since I believe that will have the proper \"import\" tool you need.\n"
] |
[
59,
9,
9,
8,
6
] |
[] |
[] |
[
"bcp",
"csv",
"mysql",
"sql_server",
"sql_server_2005"
] |
stackoverflow_0000000129_bcp_csv_mysql_sql_server_sql_server_2005.txt
|
Q:
Figure out div that is visible out of four divs
I need to figure out what div is visible out of four possible divs using jQuery. Only one of those divs will be visible at any given time.
This is what I have that works so far:
$("#FeatureImage1:visible, #FeatureImage2:visible, #FeatureImage3:visible, #FeatureImage4:visible").attr("id");
Is there a way to refactor this? Is there an easier way to figure this out?
A:
Assign the same class to each div then:
$("div.myClass:visible").attr("id");
A:
When applicable, it's better to use contextual selectors rather than add spurious classes. For instance, if the <div> elements are the only children of an element with id="foo", then using $("#foo > div:visible").attr("id") would better reflect the purpose of the code.
|
Figure out div that is visible out of four divs
|
I need to figure out what div is visible out of four possible divs using jQuery. Only one of those divs will be visible at any given time.
This is what I have that works so far:
$("#FeatureImage1:visible, #FeatureImage2:visible, #FeatureImage3:visible, #FeatureImage4:visible").attr("id");
Is there a way to refactor this? Is there an easier way to figure this out?
|
[
"Assign the same class to each div then:\n$(\"div.myClass:visible\").attr(\"id\");\n\n",
"When applicable, it's better to use contextual selectors rather than add spurious classes. For instance, if the <div> elements are the only children of an element with id=\"foo\", then using $(\"#foo > div:visible\").attr(\"id\") would better reflect the purpose of the code.\n"
] |
[
15,
1
] |
[] |
[] |
[
"css_selectors",
"dom",
"jquery"
] |
stackoverflow_0000061486_css_selectors_dom_jquery.txt
|
Q:
Editing XML in Flex using e4x
In Flex, I have an xml document such as the following:
var xml:XML = <root><node>value1</node><node>value2</node><node>value3</node></root>
At runtime, I want to create a TextInput control for each node under root, and have the values bound to the values in the XML. As far as I can tell I can't use BindingUtils to bind to e4x nodes at runtime (please tell me if I'm wrong here!), so I'm trying to do this by hand:
for each (var node:XML in xml.node)
{
var textInput:TextInput = new TextInput();
var handler:Function = function(event:Event):void
{
node.setChildren(event.target.text);
};
textInput.text = node.text();
textInput.addEventListener(Event.CHANGE, handler);
this.addChild(pileHeightEditor);
}
My problem is that when the user edits one of the TextInputs, the node getting assigned to is always the last one encountered in the for loop. I am used to this pattern from C#, where each time an anonymous function is created, a "snapshot" of the values of the variables used is taken, so "node" would be different in each handler function.
How do I "take a snapshot" of the current value of node to use in the handler? Or should I be using a different pattern in Flex?
A:
Unfortunately, function closures work weird/poorly in Actionscript. Variables only get a "snapshot" when they go out of scope. Unfortunately, variables are function scoped, and not block scoped. So it doesn't end up working like you want.
You could create a dictionary to map from TextInput -> node, or you could stash the node in the TextInput's data property.
I wish what you described did work correctly since it is an easy/powerful way of expressing that.
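For example, the data-property variant might look roughly like this (untested sketch; it assumes the same script context as the question's code, and the handler name is made up):
for each (var node:XML in xml.node)
{
    var textInput:TextInput = new TextInput();
    textInput.text = node.text();
    textInput.data = node; // stash the XML node on the control itself
    textInput.addEventListener(Event.CHANGE, changeHandler);
    this.addChild(textInput);
}

private function changeHandler(event:Event):void
{
    // Read the node back from the control instead of relying on a closed-over variable
    var input:TextInput = TextInput(event.currentTarget);
    XML(input.data).setChildren(input.text);
}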
A:
The closure only captures a reference to the variable, not its current value. Since local variables are Function-scoped (not block-scoped) each iteration through the loop creates a closure that captures a reference to the same variable.
You could extract the TextInput creation code into a separate function, which would give you a separate variable instance to capture for the closure. Something like this:
for each (var node:XML in xml.node)
{
    var textInput:TextInput = createTextInput(node);
    this.addChild(pileHeightEditor);
}
...

private function createTextInput(node:XML) : TextInput {
    var textInput:TextInput = new TextInput();
    var handler:Function = function(event:Event):void
    {
        node.setChildren(event.target.text);
    };
    textInput.text = node.text();
    textInput.addEventListener(Event.CHANGE, handler);
    return textInput;
}
|
Editing XML in Flex using e4x
|
In Flex, I have an xml document such as the following:
var xml:XML = <root><node>value1</node><node>value2</node><node>value3</node></root>
At runtime, I want to create a TextInput control for each node under root, and have the values bound to the values in the XML. As far as I can tell I can't use BindingUtils to bind to e4x nodes at runtime (please tell me if I'm wrong here!), so I'm trying to do this by hand:
for each (var node:XML in xml.node)
{
var textInput:TextInput = new TextInput();
var handler:Function = function(event:Event):void
{
node.setChildren(event.target.text);
};
textInput.text = node.text();
textInput.addEventListener(Event.CHANGE, handler);
this.addChild(pileHeightEditor);
}
My problem is that when the user edits one of the TextInputs, the node getting assigned to is always the last one encountered in the for loop. I am used to this pattern from C#, where each time an anonymous function is created, a "snapshot" of the values of the variables used is taken, so "node" would be different in each handler function.
How do I "take a snapshot" of the current value of node to use in the handler? Or should I be using a different pattern in Flex?
|
[
"Unfortunately, function closures work weird/poorly in Actionscript. Variables only get a \"snapshot\" when they go out of scope. Unfortunately, variables are function scoped, and not block scoped. So it doesn't end up working like you want.\nYou could create a dictionary to map from TextInput -> node, or you could stash the node in the TextInput's data property.\nI wish what you described did work correctly since it is an easy/powerful way of expressing that.\n",
"The closure only captures a reference to the variable, not its current value. Since local variables are Function-scoped (not block-scoped) each iteration through the loop creates a closure that captures a reference to the same variable. \nYou could extract the TextInput creation code into a separate function, which would give you a separate variable instance to capture for the closure. Something like this:\nfor each (var node:XML in xml.node)\n{\n var textInput:TextInput = createTextInput(node);\n this.addChild(pileHeightEditor);\n}\n... \n\nprivate function createTextInput(node:XML) : TextInput {\n var textInput:TextInput = new TextInput();\n var handler:Function = function(event:Event):void \n {\n node.setChildren(event.target.text);\n };\n textInput.text = node.text();\n textInput.addEventListener(Event.CHANGE, handler);\n return textInput;\n}\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"apache_flex",
"e4x",
"javascript"
] |
stackoverflow_0000063181_apache_flex_e4x_javascript.txt
|
Q:
MultiMap in Scala
I'm trying to mixin the MultiMap trait with a HashMap like so:
val children:MultiMap[Integer, TreeNode] =
new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
The definition for the MultiMap trait is:
trait MultiMap[A, B] extends Map[A, Set[B]]
Meaning that a MultiMap of types A & B is a Map of types A & Set[B], or so it seems to me. However, the compiler complains:
C:\...\TestTreeDataModel.scala:87: error: illegal inheritance; template $anon inherits different type instances of trait Map: scala.collection.mutable.Map[Integer,scala.collection.mutable.Set[package.TreeNode]] and scala.collection.mutable.Map[Integer,Set[package.TreeNode]]
new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
^ one error found
It seems that generics are tripping me up again.
A:
I had to import scala.collection.mutable.Set. It seems the compiler thought the Set in HashMap[Integer, Set[TreeNode]] was scala.collection.Set. The Set in the MultiMap def is scala.collection.mutable.Set.
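In other words, the declaration compiles once the mutable Set is the one in scope; a quick sketch (TreeNode being the question's own type):
import scala.collection.mutable.{HashMap, MultiMap, Set}

val children: MultiMap[Integer, TreeNode] =
  new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]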
A:
That can be annoying, the name overloading in Scala's collections is one of its big weaknesses.
For what it's worth, if you had scala.collection._ imported, you could probably have written your HashMap type as:
new HashMap[ Integer, mutable.Set[ TreeNode ] ]
|
MultiMap in Scala
|
I'm trying to mixin the MultiMap trait with a HashMap like so:
val children:MultiMap[Integer, TreeNode] =
new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
The definition for the MultiMap trait is:
trait MultiMap[A, B] extends Map[A, Set[B]]
Meaning that a MultiMap of types A & B is a Map of types A & Set[B], or so it seems to me. However, the compiler complains:
C:\...\TestTreeDataModel.scala:87: error: illegal inheritance; template $anon inherits different type instances of trait Map: scala.collection.mutable.Map[Integer,scala.collection.mutable.Set[package.TreeNode]] and scala.collection.mutable.Map[Integer,Set[package.TreeNode]]
new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
^ one error found
It seems that generics are tripping me up again.
|
[
"I had to import scala.collection.mutable.Set. It seems the compiler thought the Set in HashMap[Integer, Set[TreeNode]] was scala.collection.Set. The Set in the MultiMap def is scala.collection.mutable.Set. \n",
"That can be annoying, the name overloading in Scala's collections is one of its big weaknesses.\nFor what it's worth, if you had scala.collection._ imported, you could probably have written your HashMap type as:\nnew HashMap[ Integer, mutable.Set[ TreeNode ] ]\n\n"
] |
[
26,
12
] |
[] |
[] |
[
"generics",
"scala"
] |
stackoverflow_0000050373_generics_scala.txt
|
Q:
Using Apache mod_rewrite to remove sub-directories from URL
I'm managing an instance of Wordpress where the URLs are in the following format:
http://www.example.com/example-category/blog-post-permalink/
The blog author did an inconsistent job of adding categories to posts, so while some of them had legitimate categories in their URLs, at least half are "uncategorised".
I can easily change Wordpress to render the URL without the category name (e.g., http://www.example.com/blog-post-permalink/), but I'd like to create a mod_rewrite rule to automatically redirect any requests for the previous format to the new, cleaner one.
How can I use a mod_rewrite recipe to handle this, taking into account that I want to honor requests for the real WordPress directories that are in my webroot?
A:
Something as simple as:
RewriteRule ^/[^/]+/([^/]+)/?$ /$1 [R]
Perhaps would do it?
That simply redirects /foo/bar/ to /bar.
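To also honor the real WordPress directories mentioned in the question, a guarded variant along these lines might work (untested sketch, written for an .htaccess context where the leading slash is not part of the matched path):
RewriteEngine On
# Leave requests for real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Otherwise strip the first path segment and redirect permanently
RewriteRule ^[^/]+/([^/]+)/?$ /$1 [R=301,L]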
|
Using Apache mod_rewrite to remove sub-directories from URL
|
I'm managing an instance of Wordpress where the URLs are in the following format:
http://www.example.com/example-category/blog-post-permalink/
The blog author did an inconsistent job of adding categories to posts, so while some of them had legitimate categories in their URLs, at least half are "uncategorised".
I can easily change Wordpress to render the URL without the category name (e.g., http://www.example.com/blog-post-permalink/), but I'd like to create a mod_rewrite rule to automatically redirect any requests for the previous format to the new, cleaner one.
How can I use a mod_rewrite recipe to handle this, taking into account that I want to honor requests for the real WordPress directories that are in my webroot?
|
[
"Something as simple as:\nRewriteRule ^/[^/]+/([^/]+)/?$ /$2 [R]\n\nPerhaps would do it? \nThat simple redirects /foo/bar/ to /bar.\n"
] |
[
2
] |
[] |
[] |
[
"apache",
"mod_rewrite",
"wordpress"
] |
stackoverflow_0000064380_apache_mod_rewrite_wordpress.txt
|
Q:
Refactoring dissassembled code
You write a function and, looking at the resulting assembly, you see it can be improved.
You would like to keep the function you wrote, for readability, but you would like to substitute your own assembly for the compiler's. Is there any way to establish a relationship between your high-level language function and the new assembly?
A:
If you are looking at the assembly, then it's fair to assume that you have a good understanding of how code gets compiled down. If you have this knowledge, then it's sometimes possible to 'reverse engineer' the changes back up into the original language, but it's often better not to bother.
The optimisations that you make are likely to be very small in comparison to the time and effort required in first making these changes. I would suggest that you leave this kind of work to the compiler and go have a cup of tea. If the changes are significant, and the performance is critical, (as say in the embedded world) then you might want to mix the normal code with the assembler in some fashion, however, on most computers and chips the performance is usually sufficient to avoid this headache.
If you really need more performance, then optimise the code not the assembly.
A:
None, I suppose. You've rejected the compiler's work in favor of your own. You might as well throw out the function you wrote in the compiled language, because now all you have is your assembler in that platform.
I would highly advise against engaging in this kind of optimization unless you're sure, via profiling and analysis, that you truly are making a difference.
A:
It depends on the language you wrote your function in. Some languages like C are very low-level, translating each function call or statement to specific assembly statements. If you did use C, you can replace your function with inline assembly to improve performance.
Other high-level languages may convert each statement into macro routines or other more complex calls on the assembly side. Certain optimizations (like tail recursion, loop unrolling, etc) can be implemented easily on the source side, but others (like making more efficient use of the register file) may be impossible (again, depending on the language and the compiler you're using).
A:
It's tough to say there is any relationship between modified assembly and the source which generated the unmodified version. It will certainly confuse debugging tools: register contents will no longer match the source variables they were supposed to correspond to.
There are a number of places in packet processing code where I've examined the generated assembly and gone back to change the original source code in order to improve the result. Re-arranging source can reduce the number of branches, __attribute__ and compiler arguments can align branch points and functions to reduce I$ misses. In desperate cases a little inline assembly can be used, so that the binary can still be compiled from source.
A:
Something you could try is to separate your original function into its own file, and provide a make rule to build the assembler from there. Then update the assembler file with your improved version, and provide a make rule to build an object file from the assembler file. Then change your link rules to include that object file.
If you only ever change the assembler file, that will keep on being used. If you ever change the original higher-level language file, the assembler file will be rebuilt and the object file built from the new (unimproved) version.
This gives you a relationship between the two; you probably want to add a warning comment at the top of the higher-level language file to warn about the behaviour. Using some form of VCS will give you the ability to recover the improved assembler file if you make a mistake here.
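A make sketch of that arrangement (file names are illustrative, and in a real makefile the recipe lines must be indented with a TAB):
# Regenerate the assembler only when the C source changes; hand edits to foo.s
# survive as long as foo.c is untouched
foo.s: foo.c
        $(CC) $(CFLAGS) -S -o $@ $<

foo.o: foo.s
        $(CC) -c -o $@ $<

app: main.o foo.o
        $(CC) -o $@ main.o foo.o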
A:
If you're writing a native compiled app in Visual C++, there are two methods:
Use the __asm { } block and write your assembler in there.
Write your functions in MASM assembler, assemble to .obj, and link it as a static library. In your C/C++ code, declare the function with an extern "C" declaration.
Other C/C++ compilers have similar approaches.
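For the first option, a tiny sketch of what an __asm block looks like (MSVC, 32-bit x86 only; the function itself is just an illustration):
int AddAsm(int a, int b)
{
    int result;
    __asm {
        mov eax, a      // load the first argument
        add eax, b      // add the second
        mov result, eax // copy back into a C variable
    }
    return result;
}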
A:
In this situation, you generally have two options: optimize the code or rewrite the compiler. I can't see where breaking the link between source and op is ever going to be the correct solution.
|
Refactoring dissassembled code
|
You write a function and, looking at the resulting assembly, you see it can be improved.
You would like to keep the function you wrote, for readability, but you would like to substitute your own assembly for the compiler's. Is there any way to establish a relationship between your high-level language function and the new assembly?
|
[
"If you are looking at the assembly, then its fair to assume that you have a good understanding about how code gets compiled down. If you have this knowledge, then its sometimes possible to 'reverse enginer' the changes back up into the original language but its often better not to bother. \nThe optimisations that you make are likely to be very small in comparison to the time and effort required in first making these changes. I would suggest that you leave this kind of work to the compiler and go have a cup of tea. If the changes are significant, and the performance is critical, (as say in the embedded world) then you might want to mix the normal code with the assemblar in some fashion, however, on most computers and chips the performance is usually sufficient to avoid this headache. \nIf you really need more performance, then optimise the code not the assembly.\n",
"None, I suppose. You've rejected the compiler's work in favor of your own. You might as well throw out the function you wrote in the compiled language, because now all you have is your assembler in that platform.\nI would highly advise against engaging in this kind of optimization because unless you're sure, via profiling and analysis, that you truly are making a difference. \n",
"It depends on the language you wrote your function in. Some languages like C are very low-level, translating each function call or statement to specific assembly statements. If you did use C, you can replace your function with inline assembly to improve performance.\nOther high-level languages may convert each statement into macro routines or other more complex calls on the assembly side. Certain optimizations (like tail recursion, loop unrolling, etc) can be implemented easily on the source side, but others (like making more efficient use of the register file) may be impossible (again, depending on the language and the compiler you're using).\n",
"Its tough to say there is any relationship between modified assembly and the source which generated the unmodified version. It will certainly confuse debugging tools: register contents will no longer match the source variables they were supposed to correspond to.\nThere are a number of places in packet processing code where I've examined the generated assembly and gone back to change the original source code in order to improve the result. Re-arranging source can reduce the number of branches, __attribute__ and compiler arguments can align branch points and functions to reduce I$ misses. In desperate cases a little inline assembly can be used, so that the binary can still be compiled from source.\n",
"Something you could try is to separate your original function into its own file, and provide a make rule to build the assembler from there. Then update the assembler file with your improved version, and provide a make rule to build an object file from the assembler file. Then change your link rules to include that object file.\nIf you only ever change the assembler file, that will keep on being used. If you ever change the original higher-level language file, the assembler file will be rebuilt and the object file built from the new (unimproved) version.\nThis gives you a relationship between the two; you probably want to add a warning comment at the top of the higher-level language file to warn about the behaviour. Using some form of VCS will give you the ability to recover the improved assembler file if you make a mistake here.\n",
"If you're writing a native compiled app in Visual C++, there are two methods:\n\nUse the __asm { } block and write your assembler in there.\nWrite your functions in MASM assembler, assemble to .obj, and link it as an static library. In your C/C++ code, declare the function with an extern \"C\" declaration.\n\nOther C/C++ compilers have similar approaches.\n",
"In this situation, you generally have two options: optimize the code or rewrite the compiler. I can't see where breaking the link between source and op is ever going to be the correct solution.\n"
] |
[
3,
2,
1,
1,
1,
1,
1
] |
[] |
[] |
[
"disassembly",
"optimization"
] |
stackoverflow_0000064321_disassembly_optimization.txt
|
Q:
Is there a way to use GflAx to incorporate gradient colours?
Ok, narrow question of the day. I'm using GflAx (from xnview) to create some graphic tiles. I would like to put some gradients in as well though.
Is there a way I can do this within this product?
There is also an SDK which is part of this product but I can't find that info there.
A:
You can not do this, but you can create the gradient in another program and then do a "LoadBitmap", make the mods you need on top of that 'background', and then save to a new file.
|
Is there a way to use GflAx to incorporate gradient colours?
|
Ok, narrow question of the day. I'm using GflAx (from xnview) to create some graphic tiles. I would like to put some gradients in as well though.
Is there a way I can do this within this product?
There is also an SDK which is part of this product but I can't find that info there.
|
[
"You can not do this but you can create the gradient in another program and then do a \"LoadBitmap\" make the mods you need ontop of that 'background' and then save to a new file.\n"
] |
[
1
] |
[] |
[] |
[
"gradient",
"image_manipulation"
] |
stackoverflow_0000059377_gradient_image_manipulation.txt
|
Q:
Castle Windsor: How do you add a call to a factory facility not in xml?
I know how to tell Castle Windsor to resolve a reference from a factory's method using XML, but can I do it programmatically via the Container.AddComponent() interface? If not is there any other way to do it from code?
EDIT:
There seems to be some confusion so let me clarify, I am looking for a way to do the following in code:
<facilities>
<facility
id="factory.support"
type="Castle.Facilities.FactorySupport.FactorySupportFacility, Castle.MicroKernel"
/>
</facilities>
<components>
<component
id="CustomerRepositoryFactory"
type="ConsoleApplication2.CustomerRepositoryFactory, ConsoleApplication2"
/>
<component
id="CustomerRepository"
service="ConsoleApplication2.ICustomerRepository, ConsoleApplication2"
type="ConsoleApplication2.CustomerRepository, ConsoleApplication2"
factoryId="CustomerRepositoryFactory"
factoryCreate="Create"
/>
</components>
(from this codebetter article on factory support in windsor and spring.net)
A:
Directly from the Unit Test FactorySupportTestCase (which are your friends):
[Test]
public void FactorySupport_UsingProxiedFactory_WorksFine()
{
    container.AddFacility("factories", new FactorySupportFacility());
    container.AddComponent("standard.interceptor", typeof(StandardInterceptor));
    container.AddComponent("factory", typeof(CalulcatorFactory));

    AddComponent("calculator", typeof(ICalcService), typeof(CalculatorService), "Create");

    ICalcService service = (ICalcService) container["calculator"];

    Assert.IsNotNull(service);
}

private void AddComponent(string key, Type service, Type type, string factoryMethod)
{
    MutableConfiguration config = new MutableConfiguration(key);
    config.Attributes["factoryId"] = "factory";
    config.Attributes["factoryCreate"] = factoryMethod;
    container.Kernel.ConfigurationStore.AddComponentConfiguration(key, config);
    container.Kernel.AddComponent(key, service, type);
}
|
Castle Windsor: How do you add a call to a factory facility not in xml?
|
I know how to tell Castle Windsor to resolve a reference from a factory's method using XML, but can I do it programmatically via the Container.AddComponent() interface? If not is there any other way to do it from code?
EDIT:
There seems to be some confusion so let me clarify, I am looking for a way to do the following in code:
<facilities>
<facility
id="factory.support"
type="Castle.Facilities.FactorySupport.FactorySupportFacility, Castle.MicroKernel"
/>
</facilities>
<components>
<component
id="CustomerRepositoryFactory"
type="ConsoleApplication2.CustomerRepositoryFactory, ConsoleApplication2"
/>
<component
id="CustomerRepository"
service="ConsoleApplication2.ICustomerRepository, ConsoleApplication2"
type="ConsoleApplication2.CustomerRepository, ConsoleApplication2"
factoryId="CustomerRepositoryFactory"
factoryCreate="Create"
/>
</components>
(from this codebetter article on factory support in windsor and spring.net)
|
[
"Directly from the Unit Test FactorySupportTestCase (which are your friends):\n[Test]\n public void FactorySupport_UsingProxiedFactory_WorksFine()\n {\n container.AddFacility(\"factories\", new FactorySupportFacility());\n container.AddComponent(\"standard.interceptor\", typeof(StandardInterceptor));\n container.AddComponent(\"factory\", typeof(CalulcatorFactory));\n\n AddComponent(\"calculator\", typeof(ICalcService), typeof(CalculatorService), \"Create\");\n\n ICalcService service = (ICalcService) container[\"calculator\"];\n\n Assert.IsNotNull(service);\n }\n\n private void AddComponent(string key, Type service, Type type, string factoryMethod)\n {\n MutableConfiguration config = new MutableConfiguration(key);\n config.Attributes[\"factoryId\"] = \"factory\";\n config.Attributes[\"factoryCreate\"] = factoryMethod;\n container.Kernel.ConfigurationStore.AddComponentConfiguration(key, config);\n container.Kernel.AddComponent(key, service, type);\n }\n\n"
] |
[
3
] |
[] |
[] |
[
".net",
"castle_windsor",
"inversion_of_control"
] |
stackoverflow_0000064238_.net_castle_windsor_inversion_of_control.txt
|
Q:
Endless loop in JavaScript that does not trigger warning by browser
I'm displaying a set of images as an overlay using Google Maps. Displaying these images should be in an endless loop, but most browsers detect this and display a warning.
Is there a way to make an endless loop in JavaScript so that it isn't stopped or warned against by the browser?
A:
Try setInterval or setTimeout.
Here is an example:
(show = (o) => setTimeout(() => {
console.log(o)
show(++o)
}, 1000))(1);
.as-console-wrapper { max-height: 100% !important; top: 0; }
A:
You should use a timer to continuously bring new images instead of an infinite loop. Check the setTimeout() function. The caveat is that you should call it in a function that calls itself, for it to wait again. Example taken from w3schools:
var c = 0
var t;
function timedCount() {
document.getElementById('txt').value = c;
c = c + 1;
t = setTimeout("timedCount()", 1000);
}
<form>
<input type="button" value="Start count!" onClick="timedCount()">
<input type="text" id="txt">
</form>
A:
The following code will set an interval and set the image to the next image from an array of image sources every second.
function setImage(){
var Static = arguments.callee;
Static.currentImage = (Static.currentImage || 0);
var elm = document.getElementById("imageContainer");
elm.src = imageArray[Static.currentImage++];
}
imageInterval = setInterval(setImage, 1000);
A:
Instead of using an infinite loop, make a timer that keeps firing every n seconds - you'll get the 'run forever' aspect without the browser hang.
A:
Perhaps try using a timer which retrieves the next image each time it ticks; unfortunately I don't know any JavaScript so I can't provide a code sample.
A:
function foo() {
alert('hi');
setTimeout(foo, 5000);
}
Then just use an action like "onload" to kick off 'foo'
A:
If it fits your case, you can keep loading new images to respond to user interaction, like this website does (just scroll down).
A:
Just a formal answer:
var i = 0;
while (i < 1) {
do something...
if (i < 1) i = 0;
else i = fooling_function(i); // must return 0
}
I think no browser would detect such things.
|
Endless loop in JavaScript that does not trigger warning by browser
|
I'm displaying a set of images as an overlay using Google Maps. Displaying these images should be in an endless loop, but most browsers detect this and display a warning.
Is there a way to make an endless loop in JavaScript so that it isn't stopped or warned against by the browser?
|
[
"Try setInterval or setTimeout.\nHere is an example:\n\n\n(show = (o) => setTimeout(() => {\r\n\r\n console.log(o)\r\n show(++o)\r\n\r\n}, 1000))(1);\n.as-console-wrapper { max-height: 100% !important; top: 0; }\n\n\n\n",
"You should use a timer to continuously bring new images instead of an infinite loop. Check the setTimeout() function. The caveat is that you should call it in a function that calls itself, for it to wait again. Example taken from w3schools:\n\n\nvar c = 0\r\nvar t;\r\n\r\nfunction timedCount() {\r\n document.getElementById('txt').value = c;\r\n c = c + 1;\r\n t = setTimeout(\"timedCount()\", 1000);\r\n}\n<form>\r\n <input type=\"button\" value=\"Start count!\" onClick=\"timedCount()\">\r\n <input type=\"text\" id=\"txt\">\r\n</form>\n\n\n\n",
"The following code will set an interval and set the image to the next image from an array of image sources every second.\nfunction setImage(){\n var Static = arguments.callee;\n Static.currentImage = (Static.currentImage || 0);\n var elm = document.getElementById(\"imageContainer\");\n elm.src = imageArray[Static.currentImage++];\n}\nimageInterval = setInterval(setImage, 1000);\n\n",
"Instead of using an infinite loop, make a timer that keeps firing every n seconds - you'll get the 'run forever' aspect without the browser hang.\n",
"Perhaps try using a timer which retrieves the next image each time it ticks, unfortunately i don't know any JavaScript so I can't provide a code sample\n",
"function foo() {\n alert('hi');\n setTimeout(foo, 5000);\n}\n\nThen just use an action like \"onload\" to kick off 'foo'\n",
"If it fits your case, you can keep loading new images to respond to user interaction, like this website does (just scroll down).\n",
"Just a formal answer:\nvar i = 0;\n\nwhile (i < 1) {\n do something...\n\n if (i < 1) i = 0;\n else i = fooling_function(i); // must return 0\n}\n\nI think no browser would detect such things. \n"
] |
[
8,
3,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"javascript",
"loops"
] |
stackoverflow_0000063011_javascript_loops.txt
|
Q:
Implementing CollectionConstraints across NUnit versions
We've implemented a CollectionConstraint for Nunit in version 2.4.3 in C#. Some of our developers have already upgraded to version 2.4.7 though, and they get project creation errors when compiling. The error is
doMatch: no suitable method found to override
Any advice on how to get this constraint so it compiles version-agnostically?
A:
Unfortunately the constraint API changed in incompatible ways for custom constraints in 2.4.6. NUnit 2.4.5 and earlier used an IConstraint interface and in 2.4.6 it was changed to a Constraint abstract base class. There was an optional Constraint base class in 2.4.5 and earlier, but the class is not consistent between versions.
Therefore there is no way to make a compiled dll work with both versions of NUnit. Everyone should upgrade to the same version of NUnit.
Sorry I'm sure this is not the answer you're looking for.
Sam
|
Implementing CollectionConstraints across NUnit versions
|
We've implemented a CollectionConstraint for Nunit in version 2.4.3 in C#. Some of our developers have already upgraded to version 2.4.7 though, and they get project creation errors when compiling. The error is
doMatch: no suitable method found to override
Any advice on how to get this constraint so it compiles version-agnostically?
|
[
"Unfortunately the constraint API changed in incompatible ways for custom constraints in 2.4.6. NUnit 2.4.5 and earlier used an IConstraint interface and in 2.4.6 it was changed to a Constraint abstract base class. There was an optional Constraint base class in 2.4.5 and earlier, but the class is not consistent between versions.\nTherefore there is no way to make a compiled dll work with both versions of NUnit. Everyone should upgrade to the same version of NUnit.\nSorry I'm sure this is not the answer you're looking for.\nSam\n"
] |
[
2
] |
[] |
[] |
[
"c#",
"nunit"
] |
stackoverflow_0000062951_c#_nunit.txt
|
Q:
What is the best way to make a .net client consume service from a Java server?
I have a user interface in .net which needs to receive data from a server, on a request/reply/update model. The only constraint is to use Java only on the server box.
What is the best approach to achieve this? Is it by creating a Webservice in Java and then accessing it in .net, or should I create Java proxies and convert them in .net by using IKVM? Or do you have any better idea? It can be HTTP based, use a direct socket connection, or any middleware.
A:
Write webservice in Java and access it in .net
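For what it's worth, the Java side can be as small as a JAX-WS endpoint (Java 6+); the class and method here are made up for illustration:
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class QuoteService {
    @WebMethod
    public double getQuote(String symbol) {
        return 42.0; // placeholder implementation
    }

    public static void main(String[] args) {
        // Publishes the service; the WSDL is served at http://localhost:8080/quotes?wsdl
        Endpoint.publish("http://localhost:8080/quotes", new QuoteService());
    }
}
On the .NET side, "Add Web Reference" (or svcutil.exe for WCF) pointed at that WSDL URL generates the client proxy.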
A:
I recommend the web service route. It offers a standard interface that can be consumed by other client platforms in the future.
.NET clients interact with Java web services pretty well, though there are some gotchas. The best two technologies available for you for the .NET client are Microsoft Web Service Enhancements (WSE) and Windows Communication Foundation (WCF). WSE is an older technology that is no longer being updated by Microsoft, but still works great in Visual Studio 2005 and older. I find WSE to be a bit easier to get started with in terms of how you interface with basic services, but WCF has much more support for WS-* protocols (security, trust, etc.). If your needs are basic and you're still using Visual Studio 2005 (.NET framework 2 or older), then go with WSE. If you like the cutting edge, or you anticipate more advanced security needs (doesn't sound like you will), then go with WCF. Please note that WSE will not work easily in Visual Studio 2008 and newer, and WCF will not work in Visual Studio 2005 and older.
Going the web service route will mean that you will design to an interface that can be reused and will result in a more loosely coupled system when you're done than most of the other routes. The downside is primarily performance: xml serialization will be slower than binary over the wire, and web services do not handle large amounts of data well.
A:
Using a standard type of web service (e.g. SOAP or XML-RPC) is best because not only is it easy to produce/consume, it's easy in other languages as well.
|
What is the best way to make a .net client consume service from a Java server?
|
I have a user interface in .net which needs to receive data from a server, on a request/reply/update model. The only constraint is to use Java only on the server box.
What is the best approach to achieve this? Is it by creating a Webservice in Java and then accessing it in .net, or should I create Java proxies and convert them in .net by using IKVM? Or do you have any better idea? It can be HTTP based, use a direct socket connection, or any middleware.
|
[
"Write webservice in Java and access it in .net\n",
"I recommend the web service route. It offers a standard interface that can be consumed by other client platforms in the future.\n.NET clients interact with Java web services pretty well, though there are some gotchas. The best two technologies available for you for the .NET client are Microsoft Web Service Enhancements (WSE) and Windows Communication Foundation (WCF). WSE is an older technology that is no longer being updated by Microsoft, but still works great in Visual Studio 2005 and older. I find WSE to be a bit easier to get started with in terms of how you interface with basic services, but WCF has much more support for WS-* protocols (security, trust, etc.). If your needs are basic and you're still using Visual Studio 2005 (.NET framework 2 or older), then go with WSE. If you like the cutting edge, or you anticipate more advanced security needs (doesn't sound like you will), then go with WCF. Please note that WSE will not work easily in Visual Studio 2008 and newer, and WCF will not work in Visual Studio 2005 and older.\nGoing the web service route will mean that you will design to an interface that can be reused and will result in a more loosely coupled system when you're done than most of the other routes. The downside is primarily performance: xml serialization will be slower than binary over the wire, and web services do not handle large amounts of data well.\n",
"Using a standard type of web service (e.g. SOAP or XML-RPC) is best because not only is it easy to produce/consume, it's easy in other languages as well.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
".net",
"interop",
"java"
] |
stackoverflow_0000064454_.net_interop_java.txt
|
Q:
Giving class unique ID on instantiation: .Net
I would like to give a class a unique ID every time a new one is instantiated. For example, with a class named Foo I would like to be able to do the following
dim a as New Foo()
dim b as New Foo()
and a would get a unique ID and b would get a unique ID. The IDs only have to be unique over run time so I would just like to use an integer. I have found a way to do this BUT (and here's the caveat) I do NOT want to be able to change the ID from anywhere. My current idea for a way to implement this is the following:
Public Class test
Private Shared ReadOnly _nextId As Integer
Private ReadOnly _id As Integer
Public Sub New()
_nextId = _nextId + 1
_id = _nextId
End Sub
End Class
However this will not compile because it throws an error on
_nextId = _nextId + 1
I don't see why this would be an error (because _id is also readonly, and you're supposed to be able to change a read only variable in the constructor). I think this has something to do with it being shared also. Any solution (hopefully not kludgy hehe) or an explanation of why this won't work will be accepted. The important part is I want both of the variables (or if there is a way to only have one that would even be better, but I don't think that is possible) to be immutable after the object is initialized. Thanks!
A:
This design is vulnerable to multithreading issues. I'd strongly suggest using Guids for your IDs (Guid.NewGuid()). If you absolutely must use ints, check out the Interlocked class. You can wrap all incrementing and Id logic up in a base class so that you're only accessing the ID generator in one location.
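A minimal sketch of the Interlocked route in VB.NET (the class and member names are illustrative only):
Public Class Foo
    Private Shared _lastId As Integer = 0
    Private ReadOnly _id As Integer

    Public Sub New()
        ' Interlocked.Increment is atomic, so concurrent constructors still get distinct ids
        _id = System.Threading.Interlocked.Increment(_lastId)
    End Sub

    Public ReadOnly Property Id() As Integer
        Get
            Return _id
        End Get
    End Property
End Class
Interlocked.Increment returns the incremented value atomically, so no lock is needed around the counter itself.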
A:
Consider the following code:
Public Class Foo
    Private ReadOnly _fooId As FooId

    Public Sub New()
        _fooId = New FooId()
    End Sub

    Public ReadOnly Property Id() As Integer
        Get
            Return _fooId.Id
        End Get
    End Property
End Class

Public NotInheritable Class FooId
    Private Shared _nextId As Integer
    Private ReadOnly _id As Integer

    Shared Sub New()
        _nextId = 0
    End Sub

    Public Sub New()
        SyncLock GetType(FooId)
            _id = System.Math.Max(System.Threading.Interlocked.Increment(_nextId), _nextId - 1)
        End SyncLock
    End Sub

    Public ReadOnly Property Id() As Integer
        Get
            Return _id
        End Get
    End Property
End Class
Instead of storing an int inside Foo, you store an object of type FooId. This way you have full control over what can and cannot be done to the id.
To protect our FooId against manipulation, it cannot be inherited, and has no methods except the constructor and a getter for the int. Furthermore, the variable _nextId is private to FooId and cannot be changed from the outside. Finally the SyncLock inside the constructor of FooId makes sure that it is never executed in parallell, guaranteeing that all IDs inside a process are unique (until you hit MaxInt :)).
A:
ReadOnly variables must be initialized during object construction, and then cannot be updated afterwards. This won't compile because you can't increment _nextId for that reason. (Shared ReadOnly variables can only be assigned in Shared constructors.)
As such, if you remove the ReadOnly modifier on the definition of _nextId, you should be ok.
A:
I'd do it like this.
Public MustInherit Class Unique
Private _UID As Guid = Guid.NewGuid()
Public ReadOnly Property UID() As Guid
Get
Return _UID
End Get
End Property
End Class
A:
It throws an error because _nextId is ReadOnly. Remove that.
Edit:
As you say, ReadOnly variables can be changed in a constructor, but not if they are Shared. Those can only be changed in shared constructors. Example:
Shared Sub New()
_nextId = 0
End Sub
A:
The shared integer shouldn't be read-only. A field marked readonly can only ever be assigned once and must be assigned before the constructor exits.
As the shared field is private, there is no danger that the field will be changed by anything external anyway.
A:
You said that "this will not compile because it throws an error" but never said what that error is.
A shared variable is static, so there is only a single copy of it in memory that is accessible to all instances. You can only modify a static readonly (Shared ReadOnly) from a static (Shared) constructor (New()) so you probably want something like this:
Public Class test
    Private Shared ReadOnly _nextId As Integer
    Private ReadOnly _id As Integer

    Public Shared Sub New()
        _nextId = _nextId + 1
    End Sub

    Public Sub New()
        _id = _nextId
    End Sub
End Class
(I think that's the right syntax in VB.) In C# it would look like this:
public class Test
{
    private static readonly int _nextId;
    private readonly int _id;

    static Test()
    {
        _nextId++;
    }

    public Test()
    {
        _id = _nextId;
    }
}
The only problem here is that the static constructor is only going to be called once, so _nextId is only going to be incremented one time. Since it is a static readonly variable you will only be able to initialize it the static constructor, so your new instances aren't going to be getting an incremented _id field like you want.
What is the problem you are trying to solve with this scenario? Do these unique IDs have to be integer values? If not, you could use a Guid and in your constructor call Guid.NewGuid().
A:
I posted a similar question that focused on the multithreading issues of setting a unique instance id. You can review it for details.
|
Giving class unique ID on instantiation: .Net
|
I would like to give a class a unique ID every time a new one is instantiated. For example, with a class named Foo I would like to be able to do the following
dim a as New Foo()
dim b as New Foo()
and a would get a unique ID and b would get a unique ID. The IDs only have to be unique over run time so I would just like to use an integer. I have found a way to do this BUT (and here's the caveat) I do NOT want to be able to change the ID from anywhere. My current idea for a way to implement this is the following:
Public Class test
Private Shared ReadOnly _nextId As Integer
Private ReadOnly _id As Integer
Public Sub New()
_nextId = _nextId + 1
_id = _nextId
End Sub
End Class
However this will not compile because it throws an error on
_nextId = _nextId + 1
I don't see why this would be an error (because _id is also readonly, and you're supposed to be able to change a read only variable in the constructor). I think this has something to do with it being shared also. Any solution (hopefully not kludgy hehe) or an explanation of why this won't work will be accepted. The important part is I want both of the variables (or if there is a way to only have one that would even be better, but I don't think that is possible) to be immutable after the object is initialized. Thanks!
|
[
"This design is vulnerable to multithreading issues. I'd strongly suggest using Guids for your IDs (Guid.NewGuid()). If you absolutely must use ints, check out the Interlocked class. You can wrap all incrementing and Id logic up in a base class so that you're only accessing the ID generator in one location.\n",
"Consider the following code:\nPublic Class Foo \n Private ReadOnly _fooId As FooId \n\n Public Sub New() \n _fooId = New FooId() \n End Sub \n\n Public ReadOnly Property Id() As Integer \n Get \n Return _fooId.Id \n End Get \n End Property \nEnd Class \n\nPublic NotInheritable Class FooId \n Private Shared _nextId As Integer \n Private ReadOnly _id As Integer \n\n Shared Sub New() \n _nextId = 0 \n End Sub \n\n Public Sub New() \n SyncLock GetType(FooId) \n _id = System.Math.Max(System.Threading.Interlocked.Increment(_nextId),_nextId - 1) \n End SyncLock \n End Sub \n\n Public ReadOnly Property Id() As Integer \n Get \n Return _id \n End Get \n End Property \nEnd Class \n\nInstead of storing an int inside Foo, you store an object of type FooId. This way you have full control over what can and cannot be done to the id.\nTo protect our FooId against manipulation, it cannot be inherited, and has no methods except the constructor and a getter for the int. Furthermore, the variable _nextId is private to FooId and cannot be changed from the outside. Finally the SyncLock inside the constructor of FooId makes sure that it is never executed in parallell, guaranteeing that all IDs inside a process are unique (until you hit MaxInt :)).\n",
"ReadOnly variables must be initialized during object construction, and then cannot be updated afterwards. This won't compile because you can't increment _nextId for that reason. (Shared ReadOnly variables can only be assigned in Shared constructors.)\nAs such, if you remove the ReadOnly modifier on the definition of _nextId, you should be ok.\n",
"I'd do it like this.\nPublic MustInherit Class Unique\n Private _UID As Guid = Guid.NewGuid()\n Public ReadOnly Property UID() As Guid\n Get\n Return _UID\n End Get\n End Property\nEnd Class\n\n",
"It throws an error because _nextId is ReadOnly. Remove that.\nEdit:\nAs you say, ReadOnly variables can be changed in a constructor, but not if they are Shared. Those can only be changed in shared constructors. Example:\nShared Sub New()\n _nextId = 0\nEnd Sub\n\n",
"The shared integer shouldn't be read-only. A field marked readonly can only ever be assigned once and must be assigned before the constructor exits.\nAs the shared field is private, there is no danger that the field will be changed by anything external anyway.\n",
"You said that \"this will not compile because it throws an error\" but never said what that error is.\nA shared variable is static, so there is only a single copy of it in memory that is accessible to all instances. You can only modify a static readonly (Shared ReadOnly) from a static (Shared) constructor (New()) so you probably want something like this:\nPublic Class test\n Private Shared ReadOnly _nextId As Integer\n Private ReadOnly _id As Integer\n\n Public Shared Sub New()\n _nextId = _nextId + 1\n End Sub\n\n Public Sub New()\n _id = _nextId\n End Sub\nEnd Class\n\n(I think that's the right syntax in VB.) In C# it would look like this:\npublic class Test\n{\n private static readonly int _nextId;\n private readonly int _id;\n\n static Test()\n {\n _nextId++;\n }\n\n public Test()\n {\n _id = _nextId;\n }\n\n}\nThe only problem here is that the static constructor is only going to be called once, so _nextId is only going to be incremented one time. Since it is a static readonly variable you will only be able to initialize it the static constructor, so your new instances aren't going to be getting an incremented _id field like you want.\nWhat is the problem you are trying to solve with this scenario? Do these unique IDs have to be integer values? If not, you could use a Guid and in your contructor call Guid.\n",
"I posted a similar question that focused on the multithreading issues of setting a unique instance id. You can review it for details.\n"
] |
[
5,
2,
1,
1,
0,
0,
0,
0
] |
[
"It's likely throwing an error because you're never initializing _nextId to anything. It needs to have an initial value before you can safely add 1 to it.\n"
] |
[
-1
] |
[
".net",
"vb.net"
] |
stackoverflow_0000063995_.net_vb.net.txt
|
Q:
Has anyone attempted to make PHP's system functions more Object-Oriented?
I'm just curious if any project exists that attempts to group all (or most) of PHP's built-in functions into a more object-oriented class hierarchy. For example, grouping all the string functions into a single String class, etc.
I realize this won't actually solve any problems (unless the modifications took place at the PHP source code level), since all the built-in functions would still be accessible in the global namespace, but it would certainly make usability much easier.
A:
I think something like this is integral for PHP to move forward. Being mainly a .Net programmer, I find PHP painful to work in with its million and one global functions. It's nice that PHP 5.3 has namespaces, but it doesn't help things much when their own libraries aren't even object oriented, let alone employ namespaces. I don't mind PHP as a language so much, but their API is terribly disorganized, and it probably needs a complete overhaul. Kind of like what VB went through when it became VB.Net.
A:
Way too many times. As soon as someone discovers that PHP has OO features they want to wrap everything in classes.
The point to the OO stuff in PHP is so that you can architect your solutions in whichever way you want. But wrapping the existing functions in Objects doesn't yield much payoff.
That being said PHP's core is quite object oriented already. Take a look at SPL.
A:
To answer your question: yes, there exist several libraries that do exactly what you are talking about. Which one you want to use is an entirely different question. PHPClasses and pear.org are good places to start looking for such libraries.
Update:
As the others have suggested, SPL is a good library and wraps many of the built-in PHP functions. However, there are still lots of PHP functions that it does not wrap, leaving us without a silver bullet.
In using frameworks such as CakePHP and Zend (others too), I have noticed that they attempt to solve some of these problems by including their own libraries and building basics such as DB connectivity into the framework. So frameworks may be another solution.
A:
I don't agree. Object Oriented Programming is not inherently better than procedural programming. I believe that you should not use OO unless you need polymorphic behavior (inheritance, overriding methods, etc). Using objects as simple containers for code is not worth the overhead. This is particularly true of strings because they're used so much (e.g. as array keys). Every application can usually benefit from some polymorphic features, but usually at a high level. Would you ever want to extend a String class?
Also, a little history is necessary to understand PHP's odd function naming. PHP is grounded in the Standard C Library and the POSIX standard and uses many of the same function names (strstr, getcwd, ldap_open, etc). This is actually a good thing because it minimizes the amount of language-binding code, ensures a full, well-thought-out set of features (just about anything you can do in C you can do in PHP), and means these system libraries are highly optimized (e.g. strchr is usually inlined, which makes it about 10x faster).
|
Has anyone attempted to make PHP's system functions more Object-Oriented?
|
I'm just curious if any project exists that attempts to group all (or most) of PHP's built-in functions into a more object-oriented class hierarchy. For example, grouping all the string functions into a single String class, etc.
I realize this won't actually solve any problems (unless the modifications took place at the PHP source code level), since all the built-in functions would still be accessible in the global namespace, but it would certainly make usability much easier.
|
[
"I think something like this is intergral for PHP to move forward. Being mainly a .Net programmer, I find PHP painful to work in with it's 1 million and 1 global functions. It's nice that PHP 5.3 has namespaces, but it doesn't help things much when their own libraries aren't even object oriented, let alone employ namespaces. I don't mind PHP as a language so much, but their API is terribly disorganized, and it probably needs a complete overhaul. Kind of like what VB went through when it became VB.Net.\n",
"Way too many times. As soon as someone discovers that PHP has OO features they want to wrap everything in classes. \nThe point to the OO stuff in PHP is so that you can architect your solutions in whichever way you want. But wrapping the existing functions in Objects doesn't yield much payoff.\nThat being said PHP's core is quite object oriented already. Take a look at SPL.\n",
"To Answer your question, Yes there exists several of libraries that do exactly what you are talking about. As far as which one you want to use is an entirely different question. PHPClasses and pear.org are good places to start looking for such libraries.\nUpdate:\nAs the others have suggested SPL is a good library and wraps many of built in php functions. However there still are lots of php functions that it does not wrap. Leaving us still without a silver bullet.\nIn using frameworks such as Cakephp and Zend (others too), I have noticed that they attempt to solve some of these problems by including their own libraries and building basics such as DB connectivity into the frame work. So frameworks may be another solution\n",
"I don't agree. Object Oriented Programming is not inherently better than procedural programming. I believe that you should not use OO unless you need polymorphic behavior (inheritance, overriding methods, etc). Using objects as simple containers for code is not worth the overhead. This is particularly true of strings because their used so much (e.g. as array keys). Every application can usually benifit from some polymorphic features but usually at a high level. Would you ever want to extend a String class?\nAlso, a little history is necessary to understand PHP's odd function naming. PHP is grounded around The Standard C Library and POSIX standard and uses many of the same function names (strstr, getcwd, ldap_open, etc). This is actually a good thing because it minimizes the amount of language binding code, ensures that a full well thought out set of features (just about anything you can do in C you can do in PHP) and these system libraries are highly optimized (e.g. strchr is usually inlined which makes it about 10x faster).\n"
] |
[
6,
6,
4,
4
] |
[] |
[] |
[
"oop",
"php",
"wrapper"
] |
stackoverflow_0000061784_oop_php_wrapper.txt
|
Q:
.Net NNTP implementation
Is there a good .Net implementation of the NNTP protocol?
A:
Try libraries like http://sourceforge.net/projects/dougnewsnntp/ and http://www.codeplex.com/nntpclientlib
A:
There is a C# tutorial for reading posts using NNTP here. It should be enough to get you started but if you wish to start getting into processing binary posts, you're probably going to have to deal with some mime-type content too. I don't think this article covers that.
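To give a rough idea of the protocol work involved, below is a minimal C# sketch that speaks raw NNTP over a TcpClient. The server name and newsgroup are placeholders, and a real client would check status codes and handle multi-line responses; treat this as an illustration only.
using System;
using System.IO;
using System.Net.Sockets;

class NntpDemo
{
    static void Main()
    {
        // Hypothetical server; NNTP uses TCP port 119.
        using (var client = new TcpClient("news.example.com", 119))
        using (var stream = client.GetStream())
        using (var reader = new StreamReader(stream))
        using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
        {
            Console.WriteLine(reader.ReadLine());   // server greeting, e.g. "200 ..."

            writer.WriteLine("GROUP comp.lang.misc");
            Console.WriteLine(reader.ReadLine());   // "211 <count> <first> <last> <group>"

            writer.WriteLine("QUIT");
            Console.WriteLine(reader.ReadLine());   // "205 closing connection"
        }
    }
}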
|
.Net NNTP implementation
|
Is there a good .Net implementation of the NNTP protocol?
|
[
"Try libraries like http://sourceforge.net/projects/dougnewsnntp/ and http://www.codeplex.com/nntpclientlib\n",
"There is a C# tutorial for reading posts using NNTP here. It should be enough to get you started but if you wish to start getting into processing binary posts, you're probably going to have to deal with some mime-type content too. I don't think this article covers that.\n"
] |
[
3,
1
] |
[] |
[] |
[
".net",
"nntp",
"protocols"
] |
stackoverflow_0000064445_.net_nntp_protocols.txt
|
Q:
VB.NET on Vista, trying to get date (Today) causes security exception
I have a VB6 program that someone recently helped me convert to VB.NET.
In the program, when saving files, I stamp them with the date, which I was getting by calling the Today() function.
When I try to run the new VB.NET code in Vista it throws a permission exception for Today(). If I run Visual Studio Express (this is the 2008 Express version) in Admin mode, then the problem doesn't occur, but clearly I want to end up with a stand-alone program which runs for all users without fancy permissions.
So how can a normal VB.NET program in Vista get today's date?
A:
Use DateTime.Now or DateTime.Today. These are entirely managed and shouldn't throw security exceptions.
The old VB6 functions, such as Len(), Left(), Right(), OpenFile(), FreeFile() are all present in the .NET Framework in the Microsoft.VisualBasic DLL. To maintain backwards compatibility, they all call the old functions in unmanaged code. Unmanaged code requires special security permissions because it can be dangerous.
Whenever possible, try and use the newer .NET functions. They are usually much faster (File IO using Streams for instance) and safer.
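A small sketch of the managed approach, shown in C# for brevity (the VB.NET equivalent is one-for-one); the file name pattern is purely illustrative.
using System;

class Program
{
    static void Main()
    {
        // DateTime.Today is fully managed; no elevated permissions are needed.
        DateTime stamp = DateTime.Today;

        // e.g. stamp a saved file name with the date (hypothetical naming scheme).
        string fileName = string.Format("report_{0:yyyy-MM-dd}.dat", stamp);
        Console.WriteLine(fileName);
    }
}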
A:
When I try the following statement:
Dim result As String = Today()
It gives me today's date, as I'd expect, and I'm running VB2005 on Vista. Can you modify the question with the version of VB you're using? Also, can you try the following statement instead of Today() to see it works for you without the exception?
Dim result As String = Now()
A:
The Today() function should behave properly on Vista. I believe behind the scenes it is simply evaluating the DateTime.Today property, so it shouldn't throw any exceptions. If you're porting VB to VB.NET you should probably go ahead and use the DateTime.Today property rather than the VB6 compatibility code.
|
VB.NET on Vista, trying to get date (Today) causes security exception
|
I have a VB6 program that someone recently helped me convert to VB.NET
In the program, when saving files, I stamp them with the date which I was getting by calling the Today() function.
When I try to run the new VB.NET code in Vista it throws a permission exception for the Today() . If I run Visual Studio Express (this is the 2008 Express version) in Admin mode, then the problem doesn't occur, but clearly I want to end up with a stand-alone program which runs for all users without fancy permissions.
So how can a normal VB.NET program in Vista get today's date?
|
[
"Use DateTime.Now or DateTime.Today. These are entirely managed and shouldn't throw security exceptions.\nThe old VB6 functions, such as Len(), Left(), Right(), OpenFile(), FreeFile() are all present in the .NET Framework in the Microsoft.VisualBasic DLL. To maintain backwards compatibility, they all call the old functions in unmanaged code. Unmanaged code requires special security permissions because it can be dangerous.\nWhenever possible, try and use the newer .NET functions. They are usually much faster (File IO using Streams for instance) and safer.\n",
"When I try the following statement:\nDim result As String = Today()\n\nIt gives me today's date, as I'd expect, and I'm running VB2005 on Vista. Can you modify the question with the version of VB you're using? Also, can you try the following statement instead of Today() to see it works for you without the exception?\nDim result As String = Now()\n\n",
"The Today() function should behave properly on Vista. I believe behind the scenes it is simply evaluating the DateTime.Today property, so it shouldn't throw any exceptions. If you're porting VB to VB.NET you should probably go ahead and use the DateTime.Today property rather than the VB6 compatability code.\n"
] |
[
10,
0,
0
] |
[] |
[] |
[
"date",
"security",
"vb.net"
] |
stackoverflow_0000064469_date_security_vb.net.txt
|
Q:
Making portable code
With all the fuss about open-source projects, how come there is still not a strong standard that enables you to make portable code (I mean in C/C++, not Java or C#)?
Everyone is kind of making its own soup.
There are even some third-party libs like the Apache Portable Runtime.
A:
Yes, there is no standard but libraries like Qt and boost can make your life much easier when you do cross-platform development.
A:
wxWidgets is a great abstraction layer on top of the native GUI widgets of most window managers.
A:
I think the main reason there isn't any single library anyone agrees on is that everyone's requirements are different. When you want to wrap system libraries you'll often need to make some assumptions about what the use cases will be, unless you want to make the wrapper huge and impossible to work with. I think that might be the main reason there's no single, common cross platform runtime.
For GUI, the reason would be that each platform has its own UI conventions, you can't code one GUI that fits all, you'll simply get one that fits just one or even none at all.
A:
There are many libraries that make cross-platform development easier on their own, but making a complete wrapper for all platforms ends up being either small and highly customized, or massive and completely ridiculous.
Carried to its logical conclusion, a complete wrapper for all aspects of an operating system becomes an entire virtual runtime. You might as well make your own programming language.
A:
The ADAPTIVE Communication Environment (ACE) is an excellent object oriented framework that provides cross-platform support for all of the low level OS functionality like threading, sockets, mutexes, etc. It runs with a crazy number of compilers and operating systems.
A:
C and C++ as languages are standardized languages. If you closely follow their rules when coding (that means not using vendor-specific extensions), your code should be portable and you should be able to compile it with any modern compiler on any OS.
However, C and C++ don't have a GUI library the way Java or C# do, but there exist some free or commercial GUI libraries that will allow you to write portable GUI applications.
I think the most popular are Qt (commercial) and wxWidgets (FOSS). According to Wikipedia there are a lot more.
There is also Boost; while not a GUI library, Boost is a really great complement to C++'s STL. In fact some of the Boost libraries will be added in the next C++ standard.
A:
If you make sure it compiles cleanly with both GCC and MS VC++, it will be little extra effort to port to somewhere else.
|
Making portable code
|
With all the fuss about opensource projects, how come there is still not a strong standard that enables you to make portable code (I mean in C/C++ not Java or C#)
Everyone is kind of making it's own soup.
There are even some third party libs like Apache Portable Runtime.
|
[
"Yes, there is no standard but libraries like Qt and boost can make your life much easier when you do cross-platform development.\n",
"wxwidgets is a great abstraction layer on the native GUI widgets of most window managers.\n",
"I think the main reason there isn't any single library anyone agrees on is that everyone's requirements are different. When you want to wrap system libraries you'll often need to make some assumptions about what the use cases will be, unless you want to make the wrapper huge and impossible to work with. I think that might be the main reason there's no single, common cross platform runtime.\nFor GUI, the reason would be that each platform has its own UI conventions, you can't code one GUI that fits all, you'll simply get one that fits just one or even none at all.\n",
"There are many libraries that make cross-platform development easier on their own, but making a complete wrapper for all platforms ends up being either small and highly customized, or massive and completely ridiculous.\nCarried to it's logical conclusion, a complete wrapper for all aspects of an operating system becomes an entire virtual runtime. You might as well make your own programming language.\n",
"The ADAPTIVE Communication Environment (ACE) is an excellent object oriented framework that provides cross-platform support for all of the low level OS functionality like threading, sockets, mutexes, etc. It runs with a crazy number of compilers and operating systems.\n",
"C and C++ as languages are standards languages. If you closely follow their rules when coding (That means not using vendor-specific extensions) you're code should be portable and you should be able to compile it with any modern compiler on any OS.\nHowever C and C++ don't have a GUI library, like Java or C#, however there exist some free or commercial GUI libraries that will allow you to write portable GUI applications.\nI think the most populars are Qt (Commercial) and wxWidgets (FOSS). According to wikipedia there is a lot more.\nThere is also boost, while not a GUI library boost is a really great complement to C++'s STL. In fact some of the boost libraires will be added in the next C++ standard.\n",
"If you make sure it compiles cleanly with both GCC and MS VC++, it will be little extra effort to port to somewhere else.\n"
] |
[
5,
3,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"c",
"c++",
"portability"
] |
stackoverflow_0000061499_c_c++_portability.txt
|
Q:
Has .NET made raw COM and DCOM programming redundant?
Has the introduction of the .NET Framework made raw programming in COM and DCOM redundant?
(Except for using some COM+ services, e.g. for transaction management through the System.EnterpriseServices namespace)
A:
Not yet, because the OS is still unmanaged.
If MS finally do what their labs have been talking about for years and produce a fully managed OS then it will.
That OS won't be backwards compatible though. They would have to produce managed versions of Office, IE, etc first. They will have to produce a virtual machine to run unmanaged apps.
The pain would be something similar to the move from Mac OS9 to OSX.
A:
COM was the last major technology that MS actually dogfooded. MS are continuing to build new APIs that depend on COM; for example, Vista's new Media Foundation (a kind of successor to DirectShow, which was also COM-based) is a COM API. So is Direct3D10 (and I would assume D3D11). I don't think it's going to disappear any time soon, and for a lot of Windows programming tasks it's not at all redundant.
A:
Not yet, but I'd say in the long term, it aims to. Obviously there will always be a place for the lower levels, but from what I understand of Microsoft's strategy, the move is towards replacing as much with managed code as possible.
A:
I suppose that depends on what you mean by 'raw'. I still find the need to expose COM APIs from .Net class libraries on occasion. Makes the process of migrating from certain platforms to .Net a lot easier since I can replace small pieces via COM.
A:
.NET has been deliberately designed to replace COM (and, consequently, DLL Hell) so while .NET applications still can access COM components, all new development are encouraged to move to .NET except if you have a very good reason to stick with COM.
|
Has .NET made raw COM and DCOM programming redundant?
|
Has the introduction of the .net framework made raw programming in COM and DCOM redundant ?
(Except for using some COM+ services, e.g. for transaction management through the System.EnterpriseServices namespace)
|
[
"Not yet, because the OS is still unmanaged.\nIf MS finally do what their labs have been talking about for years and produce a fully managed OS then it will.\nThat OS won't be backwards compatible though. They would have to produce managed versions of Office, IE, etc first. They will have to produce a virtual machine to run unmanaged apps.\nThe pain would be something similar to the move from Mac OS9 to OSX.\n",
"COM was the last major technology that MS actually dogfooded. MS are continuing to build new APIs that depend on COM; for example, Vista's new Media Foundation (a kind of successor to DirectShow, which was also COM-based) is a COM API. So is Direct3D10 (and I would assume D3D11). I don't think it's going to disappear any time soon, and for a lot of Windows programming tasks it's not at all redundant.\n",
"Not yet, but I'd say in the long term, it aims to. Obviously there will always be a place for the lower levels, but from what I understand of Microsoft's strategy, the move is towards replacing as much with managed code as possible.\n",
"I suppose that depends on what you mean by 'raw'. I still find the need to expose COM APIs from .Net class libraries on occasion. Makes the process of migrating from certain platforms to .Net a lot easier since I can replace small pieces via COM.\n",
".NET has been deliberately designed to replace COM (and, consequently, DLL Hell) so while .NET applications still can access COM components, all new development are encouraged to move to .NET except if you have a very good reason to stick with COM.\n"
] |
[
10,
5,
2,
1,
0
] |
[] |
[] |
[
".net",
"com",
"com+",
"dcom"
] |
stackoverflow_0000034300_.net_com_com+_dcom.txt
|
Q:
Single Sign On across multiple domains
Our company has multiple domains set up with one website hosted on each of the domains. At this time, each domain has its own authentication which is done via cookies.
When someone logged on to one domain needs to access anything from the other, the user needs to log in again using different credentials on the other website, located on the other domain.
I was thinking of moving towards single sign on (SSO), so that this hassle can be eliminated. I would appreciate any ideas on how this could be achieved, as I do not have any experience in this regard.
Thanks.
Edit:
The websites are mix of internet (external) and intranet (internal-used within the company) sites.
A:
The SSO solution that I've implemented here works as follows:
There is a master domain, login.mydomain.example with the script master_login.php that manages the logins.
Each client domain has the script client_login.php
All the domains have a shared user session database.
When the client domain requires the user to be logged in, it redirects to the master domain (login.mydomain.example/master_login.php). If the user has not signed in to the master it requests authentication from the user (ie. display login page). After the user is authenticated it creates a session in a database. If the user is already authenticated it looks up their session id in the database.
The master domain returns to the client domain (client.mydomain.example/client_login.php) passing the session id.
The client domain creates a cookie storing the session id from the master. The client can find out the logged in user by querying the shared database using the session id.
Notes:
The session id is a unique global identifier generated with algorithm from RFC 4122
The master_login.php will only redirect to domains in its whitelist
The master and clients can be in different top level domains. Eg. client1.abc.example, client2.xyz.example, login.mydomain.example
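To make the handshake above concrete, here is a small C# sketch of the client-side decision logic; the shared session database is mocked with a dictionary, and the URLs and names are illustrative stand-ins for the master_login/client_login scripts described above, not part of any real product.
using System;
using System.Collections.Generic;

class ClientLogin
{
    // Stand-in for the shared session database, keyed by session id.
    static readonly Dictionary<Guid, string> SharedSessions = new Dictionary<Guid, string>();

    const string LoginUrl = "https://login.mydomain.example/master_login";

    // Returns the logged-in user, or null if the browser must be redirected to the master domain.
    static string ResolveUser(Guid? sessionCookie, out string redirectUrl)
    {
        redirectUrl = null;
        string user;
        if (sessionCookie.HasValue && SharedSessions.TryGetValue(sessionCookie.Value, out user))
        {
            return user; // already authenticated against the master domain
        }
        redirectUrl = LoginUrl + "?return=https://client1.abc.example/client_login";
        return null;
    }

    static void Main()
    {
        // Simulate the master creating a session after a successful login.
        Guid sessionId = Guid.NewGuid();
        SharedSessions[sessionId] = "alice";

        string redirect;
        Console.WriteLine(ResolveUser(null, out redirect) ?? "redirect to " + redirect);
        Console.WriteLine(ResolveUser(sessionId, out redirect));
    }
}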
A:
Don't re-invent the wheel. There are a number of open source cross-domain SSO packages such as JOSSO, OpenSSO, CAS, Shibboleth and others. If you're using Microsoft Technology throughout (IIS, AD), you can use microsoft federation (ADFS) instead.
A:
How different are the host names?
These hosts can share cookies:
mail.xyz.example
www.xyz.example
logon.xyz.example
But these cannot:
abc.example
xyz.example
www.tre.example
In the former case you can bang out a cookie-based solution. Think GUID and a database session table.
A:
If you use Active Directory you could have each app use AD for authentication; login could then be seamless.
Otherwise, if the applications can talk to each other behind the scenes, you could use sessionids and have one app handling id generation serving all of your other applications.
|
Single Sign On across multiple domains
|
Our company has multiple domains set up with one website hosted on each of the domains. At this time, each domain has its own authentication which is done via cookies.
When someone logged on to one domain needs to access anything from the other, the user needs to log in again using different credentials on the other website, located on the other domain.
I was thinking of moving towards single sign on (SSO), so that this hassle can be eliminated. I would appreciate any ideas on how this could be achieved, as I do not have any experience in this regard.
Thanks.
Edit:
The websites are mix of internet (external) and intranet (internal-used within the company) sites.
|
[
"The SSO solution that I've implemented here works as follows:\n\nThere is a master domain, login.mydomain.example with the script master_login.php that manages the logins.\nEach client domain has the script client_login.php\nAll the domains have a shared user session database.\nWhen the client domain requires the user to be logged in, it redirects to the master domain (login.mydomain.example/master_login.php). If the user has not signed in to the master it requests authentication from the user (ie. display login page). After the user is authenticated it creates a session in a database. If the user is already authenticated it looks up their session id in the database.\nThe master domain returns to the client domain (client.mydomain.example/client_login.php) passing the session id.\nThe client domain creates a cookie storing the session id from the master. The client can find out the logged in user by querying the shared database using the session id.\n\nNotes:\n\nThe session id is a unique global identifier generated with algorithm from RFC 4122\nThe master_login.php will only redirect to domains in its whitelist\nThe master and clients can be in different top level domains. Eg. client1.abc.example, client2.xyz.example, login.mydomain.example\n\n",
"Don't re-invent the wheel. There are a number of open source cross-domain SSO packages such as JOSSO, OpenSSO, CAS, Shibboleth and others. If you're using Microsoft Technology throughout (IIS, AD), you can use microsoft federation (ADFS) instead.\n",
"How different are the host names?\nThese hosts can share cookies:\n\nmail.xyz.example\nwww.xyz.example\nlogon.xyz.example\n\nBut these cannot:\n\nabc.example\nxyz.example\nwww.tre.example\n\nIn the former case you can bang out a cookie-based solution. Think GUID and a database session table.\n",
"If you use Active Directory you could have each app use AD for authentication, login could then be seamless. \nOtherwise, if the applications can talk to each other behind the scenes, you could use sessionids and have one app handling id generation serving all of your other applications.\n"
] |
[
94,
33,
16,
2
] |
[] |
[] |
[
"authentication",
"single_sign_on"
] |
stackoverflow_0000044509_authentication_single_sign_on.txt
|
Q:
Encrypt/Decrypt across machines is a no-no
I'm using an identical call to "CryptUnprotectData" (exposed from Crypt32.dll) between XP and Vista. Works fine in XP. I get the following exception when I run in Vista:
"Decryption failed. Key not valid for use in specified state."
As expected, the versions of crypt32.dll are different between XP and Vista (w/XP actually having the more recent, possibly as a result of SP3 or some other update).
More specifically, I'm encrypting data, putting it in the registry, then reading and decrypting using "CryptUnprotectData". UAC is turned off.
Anyone seen this one before?
A:
The CryptUnprotectData function documentation states that it usually only works when the user has the same logon credentials as the encrypter.
This suggests to me that maybe the key is tied to the user's current token. Since you mention Vista, this makes me think UAC and restricted tokens.
Can you show us some code? Can you give us more information about what you're doing with the data -- i.e. are you moving it between processes, or users, or computers?
A:
Nice. Hopefully this is my bone-head move of the week! ;-)
This suggests to me that maybe the key
is tied to the user's current token.
That was it. Turns out I was using encrypted data from another machine (the XP one) and trying to decrypt on the Vista machine.
As the MSDN documentation states:
Usually, only a user with the same
logon credentials as the encrypter can
decrypt the data. In addition, the
encryption and decryption must be done
on the same computer.
Once I re-encrypted the data on the Vista machine, decryption works as expected.
Thanks.
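For anyone hitting this from managed code: the same DPAPI behaviour is exposed through System.Security.Cryptography.ProtectedData (a reference to System.Security.dll is required). This sketch simply round-trips a secret on one machine, which is the only scenario the CurrentUser scope supports; the plaintext is a placeholder.
using System;
using System.Security.Cryptography;
using System.Text;

class DpapiDemo
{
    static void Main()
    {
        byte[] plain = Encoding.UTF8.GetBytes("connection string or other secret");

        // Tied to the current user's credentials on this machine,
        // exactly like CryptProtectData/CryptUnprotectData.
        byte[] cipher = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);

        // Decrypting on another machine (or as another user) throws a CryptographicException.
        byte[] roundTrip = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);

        Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
    }
}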
|
Encrypt/Decrypt across machines is a no-no
|
I'm using an identical call to "CryptUnprotectData" (exposed from Crypt32.dll) between XP and Vista. Works fine in XP. I get the following exception when I run in Vista:
"Decryption failed. Key not valid for use in specified state."
As expected, the versions of crypt32.dll are different between XP and Vista (w/XP actually having the more recent, possibly as a result of SP3 or some other update).
More specifically, I'm encrypting data, putting it in the registry, then reading and decrypting using "CryptUnprotectData". UAC is turned off.
Anyone seen this one before?
|
[
"The CryptUnprotectData function documentation states that it usually only works when the user has the same logon credentials as the encrypter.\nThis suggests to me that maybe the key is tied to the user's current token. Since you mention Vista, this makes me think UAC and restricted tokens.\nCan you show us some code? Can you give us more information about what you're doing with the data -- i.e. are you moving it between processes, or users, or computers?\n",
"Nice. Hopefully this is my bone-head move of the week! ;-)\n\nThis suggests to me that maybe the key\n is tied to the user's current token.\n\nThat was it. Turns out I was using encrypted data from another machine (the XP one) and trying to decrypt on the Vista machine.\nAs the MSDN documentation states:\n\nUsually, only a user with the same\n logon credentials as the encrypter can\n decrypt the data. In addition, the\n encryption and decryption must be done\n on the same computer.\n\nOnce I re-encrypted the data on the Vista machine, decryption works as expected.\nThanks.\n"
] |
[
4,
1
] |
[] |
[] |
[
"cryptoapi",
"encryption",
"windows_vista"
] |
stackoverflow_0000064258_cryptoapi_encryption_windows_vista.txt
|
Q:
Why are the hash codes generated by this function not unique?
I'm testing the VB function below that I got from a Google search. I plan to use it to generate hash codes for quick string comparison. However, there are occasions in which two different strings have the same hash code. For example, these strings
"122Gen 1 heap size (.NET CLR Memory w3wp):mccsmtpteweb025.20833333333333E-02"
"122Gen 2 heap size (.NET CLR Memory w3wp):mccsmtpteweb015.20833333333333E-02"
have the same hash code of 237117279.
Please tell me:
- What is wrong with the function?
- How can I fix it?
Thank you
martin
Private Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (dest As Any, src As Any, ByVal bytes As Long)
Private Function HashCode(Key As String) As Long
On Error GoTo ErrorGoTo
Dim lastEl As Long, i As Long
' copy ansi codes into an array of long'
lastEl = (Len(Key) - 1) \ 4
ReDim codes(lastEl) As Long
' this also converts from Unicode to ANSI'
CopyMemory codes(0), ByVal Key, Len(Key)
' XOR the ANSI codes of all characters'
For i = 0 To lastEl - 1
HashCode = HashCode Xor codes(i) 'Xor'
Next
ErrorGoTo:
Exit Function
End Function
A:
I'm betting there are more than just "occasions" when two strings generate the same hash using your function. In fact, it probably happens more often than you think.
A few things to realize:
First, there will be hash collisions. It happens. Even with really, really big spaces like MD5 (128 bits) there are still two strings that can generate the same resulting hash. You have to deal with those collisions by creating buckets.
Second, a long integer isn't really a big hash space. You're going to get more collisions than you would if you used more bits.
Thirdly, there are libraries available to you in Visual Basic (like .NET's System.Security.Cryptography namespace) that will do a much better job of hashing than most mere mortals.
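As an illustration of that namespace (shown in C# rather than the thread's classic VB, so treat it purely as a sketch), this hashes the two near-anagram strings from the question and produces different digests:
using System;
using System.Security.Cryptography;
using System.Text;

class HashDemo
{
    static string Sha1Hex(string input)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }

    static void Main()
    {
        // The two strings from the question now hash differently.
        Console.WriteLine(Sha1Hex("122Gen 1 heap size (.NET CLR Memory w3wp):mccsmtpteweb025.20833333333333E-02"));
        Console.WriteLine(Sha1Hex("122Gen 2 heap size (.NET CLR Memory w3wp):mccsmtpteweb015.20833333333333E-02"));
    }
}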
A:
The two Strings have the same characters. (Note the '2' and the '1' that are flip-flopped)
That is why the hash value is the same.
Make sure that the hash function is taking into account the order of the characters.
A:
Hash functions do not guarantee uniqueness of hash values. If the input value range (judging by your sample strings) is larger than the output value range (e.g. a 32-bit integer), then uniqueness is physically impossible.
A:
If the biggest problem is that it doesn't account for the position of the bytes, you could fix it like this:
Private Function HashCode(Key As String) As Long
On Error GoTo ErrorGoTo
Dim lastEl As Long, i As Long
' copy ansi codes into an array of long'
lastEl = (Len(Key) - 1) \ 4
ReDim codes(lastEl) As Long
' this also converts from Unicode to ANSI'
CopyMemory codes(0), ByVal Key, Len(Key)
' XOR the ANSI codes of all characters'
For i = 0 To lastEl - 1
HashCode = HashCode Xor (codes(i) + i) 'Xor'
Next
ErrorGoTo:
Exit Function
End Function
The only difference is that it adds each code's position to its value before the XOR.
A:
No hash function can guarantee uniqueness. There are ~4 billion 32-bit integers, so even the best hash function will generate duplicates when presented with ~4 billion and 1 strings (and most likely long before).
Moving to 64-bit hashes or even 128-bit hashes isn't really the solution, though it reduces the probability of a collision.
If you want a better hash function you could look at the cryptographic hashes, but it would be better to reconsider your algorithm and decide if you can deal with the collisions some other way.
A:
The System.Security.Cryptography namespace contains multiple classes which can do hashing for you (such as MD5) which will probably hash them better than you could yourself and will take much less effort.
You don't always have to reinvent the wheel.
A:
Simple XOR is a bad hash: you'll find lots of strings which collide. The hash doesn't depend on the order of the letters in the string, for one thing.
Try using the FNV hash http://isthe.com/chongo/tech/comp/fnv/
This is really simple to implement. It shifts the hash code after each XOR, so the same letters in a different order will produce a different hash.
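A minimal 32-bit FNV-1a implementation, shown here in C# rather than VB6 purely for brevity; because each byte is folded in and then multiplied by the FNV prime, character order affects the result, unlike the plain XOR.
using System;
using System.Text;

class FnvDemo
{
    static uint Fnv1a32(string input)
    {
        const uint offsetBasis = 2166136261;
        const uint prime = 16777619;

        uint hash = offsetBasis;
        foreach (byte b in Encoding.UTF8.GetBytes(input))
        {
            hash ^= b;      // fold in the next byte
            hash *= prime;  // then multiply, so position influences the result
        }
        return hash;
    }

    static void Main()
    {
        Console.WriteLine(Fnv1a32("abc"));
        Console.WriteLine(Fnv1a32("cba")); // different hash, even though the characters are the same
    }
}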
A:
I fixed the syntax highlighting for him.
Also, for those who weren't sure about the environment or were suggesting a more-secure hash: it's Classic (pre-.Net) VB, because .Net would require parentheses for the call to CopyMemory.
IIRC, there aren't any secure hashes built in for Classic VB. There's not much out there on the web either, so this may be his best bet.
A:
Hash functions are not meant to return distinct values for distinct strings. However, a good hash function should return different values for strings that look alike. Hash functions are used for many reasons, including searching in a large collection. If the hash function is good and if it returns values from the range [0,N-1], then a large collection of M objects will be divided into N collections, each one having about M/N elements. This way, you need to search only in an array of M/N elements instead of searching in an array of M elements.
But, if you only have 2 strings, it is not faster to compute the hash value for those! It is better to just compare the two strings.
An interesting hash function could be:
unsigned int hash(const char* name) {
unsigned mul=1;
unsigned val=0;
while(name[0]!=0) {
val+=mul*((unsigned)name[0]);
mul*=7; //you could use an arbitrary prime number, but test the hash dispersion afterwards
name++;
}
return val;
}
A:
I don't quite see the environment you work in. Is this .Net code? If you really want good hash codes, I would recommend looking into cryptographic hashes (proven algorithms) instead of trying to write your own.
Btw, could you edit your post and paste the code in as a Code Sample (see toolbar)? This would make it easier to read.
A:
"Don't do that."
Writing your own hash function is a big mistake, because your language certainly already has an implementation of SHA-1, which is a perfectly good hash function. If you only need 32 bits (instead of the 160 that SHA-1 provides), just use the last 32 bits of SHA-1.
A:
There's a visual basic implementation of MD5 hashing here
http://www.bullzip.com/md5/vb/md5-visual-basic.htm
A:
This particular hash function XORs all of the characters in a string. Unfortunately XOR is both commutative and associative:
(a XOR b) XOR c = a XOR (b XOR c) = (a XOR c) XOR b
So any strings with the same input characters, in any order, will result in the same hash code. The two strings provided are the same except for the location of two characters, therefore they have the same hash code.
You may need to find a better algorithm, MD5 would be a good choice.
A:
The XOR operation is commutative; that is, when XORing all the chars in a string, the order of the chars does not matter. All anagrams of a string will produce the same XOR hash.
In your example, your second string can be generated from your first by swapping the "1" after "...Gen " with the first "2" following it.
There is nothing wrong with your function. All useful hashing functions will sometimes generate collisions, and your program must be prepared to resolve them.
A collision occurs when an input hashes to a value already identified with an earlier input. If a hashing algorithm could not generate collisions, the hash values would need to be as large as the input values. Such a hashing algorithm would be of limited use compared to just storing the input values.
-Al.
|
Why are the hash codes generated by this function not unique?
|
I'm testing the VB function below that I got from a Google search. I plan to use it to generate hash codes for quick string comparison. However, there are occasions in which two different strings have the same hash code. For example, these strings
"122Gen 1 heap size (.NET CLR Memory w3wp):mccsmtpteweb025.20833333333333E-02"
"122Gen 2 heap size (.NET CLR Memory w3wp):mccsmtpteweb015.20833333333333E-02"
have the same hash code of 237117279.
Please tell me:
- What is wrong with the function?
- How can I fix it?
Thank you
martin
Private Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (dest As Any, src As Any, ByVal bytes As Long)
Private Function HashCode(Key As String) As Long
On Error GoTo ErrorGoTo
Dim lastEl As Long, i As Long
' copy ansi codes into an array of long'
lastEl = (Len(Key) - 1) \ 4
ReDim codes(lastEl) As Long
' this also converts from Unicode to ANSI'
CopyMemory codes(0), ByVal Key, Len(Key)
' XOR the ANSI codes of all characters'
For i = 0 To lastEl - 1
HashCode = HashCode Xor codes(i) 'Xor'
Next
ErrorGoTo:
Exit Function
End Function
|
[
"I'm betting there are more than just \"occasions\" when two strings generate the same hash using your function. In fact, it probably happens more often than you think.\nA few things to realize:\nFirst, there will be hash collisions. It happens. Even with really, really big spaces like MD5 (128 bits) there are still two strings that can generate the same resulting hash. You have to deal with those collisions by creating buckets.\nSecond, a long integer isn't really a big hash space. You're going to get more collisions than you would if you used more bits.\nThirdly, there are libraries available to you in Visual Basic (like .NET's System.Security.Cryptography namespace) that will do a much better job of hashing than most mere mortals.\n",
"The two Strings have the same characters. (Note the '2' and the '1' that are flip-flopped)\nThat is why the hash value is the same.\nMake sure that the hash function is taking into account the order of the characters.\n",
"Hash functions do not guarantee uniqueness of hash values. If the input value range (judging your sample strings) is larger than the output value range (eg 32 bit integer), then uniqueness is physically impossible.\n",
"If the biggest problem is that it doesn't account for the position of the bytes, you could fix it like this:\nPrivate Function HashCode(Key As String) As Long\n On Error GoTo ErrorGoTo\n\n Dim lastEl As Long, i As Long\n ' copy ansi codes into an array of long'\n lastEl = (Len(Key) - 1) \\ 4\n ReDim codes(lastEl) As Long\n ' this also converts from Unicode to ANSI'\n CopyMemory codes(0), ByVal Key, Len(Key)\n ' XOR the ANSI codes of all characters'\n\n For i = 0 To lastEl - 1\n HashCode = HashCode Xor (codes(i) + i) 'Xor'\n Next\n\nErrorGoTo:\n Exit Function\nEnd Function\n\nThe only difference is that it adds the characters position to it's byte value before the XOR.\n",
"No hash function can guarantee uniqueness. There are ~4 billion 32-bit integers, so even the best hash function will generate duplicates when presented with ~4 billion and 1 strings (and mostly likely long before).\nMoving to 64-bit hashes or even 128-bit hashes isn't really the solution, though it reduces the probability of a collision.\nIf you want a better hash function you could look at the cryptographic hashes, but it would be better to reconsider you algorithm and decide if you can deal with the collisions some other way. \n",
"The System.Security.Cryptography namespace contains multiple classes which can do hashing for you (such as MD5) which will probably hash them better than you could yourself and will take much less effort.\nYou don't always have to reinvent the wheel.\n",
"Simple XOR is a bad hash: you'll find lots of strings which collide. The hash doesn't depend on the order of the letters in the string, for one thing.\nTry using the FNV hash http://isthe.com/chongo/tech/comp/fnv/\nThis is really simple to implement. It shifts the hash code after each XOR, so the same letters in a different order will produce a different hash.\n",
"I fixed the syntax highlighting for him. \nAlso, for those who weren't sure about the environment or were suggesting a more-secure hash: it's Classic (pre-.Net) VB, because .Net would require parentheses for the the call to CopyMemory. \nIIRC, there aren't any secure hashes built in for Classic VB. There's not much out there on the web either, so this may be his best bet.\n",
"Hash functions are not meant to return distinct values for distinct strings. However, a good hash function should return different values for strings that look alike. Hash functions are used to search for many reasons, including searching into a large collection. If the hash function is good and if it returns values from the range [0,N-1], then a large collection of M objects will be divide in N collections, each one having about M/N elements. This way, you need to search only in an array of M/N elements instead of searching in an array of M elements.\nBut, if you only have 2 strings, it is not faster to compute the hash value for those! It is better to just compare the two strings.\nAn interresing hash function could be:\n\n\n unsigned int hash(const char* name) {\n unsigned mul=1;\n unsigned val=0;\n while(name[0]!=0) {\n val+=mul*((unsigned)name[0]);\n mul*=7; //you could use an arbitrary prime number, but test the hash dispersion afterwards\n name++;\n }\n return val;\n }\n\n\n",
"I don't quite see the environment you work in. Is this .Net code? If you really want good hash codes, I would recommend looking into cryptographic hashes (proven algorithms) instead of trying to write your own.\nBtw, could you edit your post and paste the code in as a Code Sample (see toolbar)? This would make it easier to read.\n",
"\"Don't do that.\" \nWriting your own hash function is a big mistake, because your language certainly already has an implementation of SHA-1, which is a perfectly good hash function. If you only need 32 bits (instead of the 160 that SHA-1 provides), just use the last 32 bits of SHA-1. \n",
"There's a visual basic implementation of MD5 hashing here\nhttp://www.bullzip.com/md5/vb/md5-visual-basic.htm\n",
"This particular hash functions XORs all of the characters in a string. Unfortunately XOR is associative:\n(a XOR b) XOR c = a XOR (b XOR c)\n\nSo any strings with the same input characters will result in the same hash code. The two strings provided are the same, except for the location of two characters, therefore they should have the same hashcode.\nYou may need to find a better algorithm, MD5 would be a good choice.\n",
"The XOR operation is commutative; that is, when XORing all the chars in a string, the order of the chars does not matter. All anagrams of a string will produce the same XOR hash.\nIn your example, your second string can be generated from your first by swapping the \"1\" after \"...Gen \" with the first \"2\" following it.\nThere is nothing wrong with your function. All useful hashing functions will sometimes generate collisions, and your program must be prepared to resolve them.\nA collision occurs when an input hashes to a value already identified with an earlier input. If a hashing algorithm could not generate collisions, the hash values would need to be as large as the input values. Such a hashing algorithm would be of limited use compared to just storing the input values.\n-Al.\n"
] |
[
10,
8,
4,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"hash_code_uniqueness",
"hash_function",
"vb6"
] |
stackoverflow_0000063897_hash_code_uniqueness_hash_function_vb6.txt
|
Q:
Looking for better End-to-End Comms with Flex, .NET and DBMS
We're reviewing some of our practices at work - and the specific thing we're looking at right now is the best method for doing Comms with Flex based clients, and .NET web services.
Our typical approach is to first model the interactions based on requirements, mock up some XML messages and sanity check them, turn those into XSDs, and finally build classes on each end that serialize to/from XML.
This works okay until we hit the database and then things like Join Tables start mucking up all that work we did simplifying down the client end.
We've tried solving this with LINQ to SQL and other OR mappers, but none of them really solve the problem without introducing more serious issues.
So, the question really is: Without treating a RDBMS as simply an object store, is there a better way to handle complex data requirements without writing a huge amount of conversion code?
I suppose the magic bullet I'm looking for is something that knows what a Join table is and how to deal with it, and still allows me to generate 'nice' serialized XML for Flex and retains strong .NET Typing.
Bonus points if it can analyse the SQL required for each method, generate stored procedures and use them. But that's probably asking too much :)
Note re "Join Tables":
Our definition of this is where you have a table which has two or more foreign keys as its own primary key.
eg:
Photos (PK PhotoID) <- PhotoTags (PK FK PhotoID, PK FK TagID) -> Tags (PK TagID)
When a Flex client gets a Photo object, it might also get a List of all Tags.
So, that might look like so:
<photo id="3">
<tags>
<tag name="park" />
<tag name="sydney" />
</tags>
</photo>
Instead, the OR tools I've seen give us:
<photo id="3">
<phototags>
<phototag>
<tag name="park" />
</phototag>
<phototag>
<tag name="sydney" />
</phototag>
</phototags>
</photo>
A:
While I am sure there are other ways to solve your issue, is there a specific reason that you want to communicate with .net via web services?
One very clean solution is to use something like WebOrb (http://www.themidnightcoders.com/weborb/dotnet/)
They have community as well as commercial offerings and handle the issues you are describing quite elegantly.
-mw
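If the team does stay on plain web services, the XML shape the question asks for can at least be controlled on the .NET side with XmlSerializer attributes. A minimal sketch follows; the class names are made up, and mapping rows from the PhotoTags join table into the Tags list is still the data layer's job.
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

public class Tag
{
    [XmlAttribute("name")]
    public string Name { get; set; }
}

[XmlRoot("photo")]
public class Photo
{
    [XmlAttribute("id")]
    public int Id { get; set; }

    // Serializes as <tags><tag name="..." />...</tags>, hiding the join table entirely.
    [XmlArray("tags")]
    [XmlArrayItem("tag")]
    public List<Tag> Tags { get; set; }
}

class Demo
{
    static void Main()
    {
        var photo = new Photo
        {
            Id = 3,
            Tags = new List<Tag> { new Tag { Name = "park" }, new Tag { Name = "sydney" } }
        };
        new XmlSerializer(typeof(Photo)).Serialize(Console.Out, photo);
    }
}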
|
Looking for better End-to-End Comms with Flex, .NET and DBMS
|
We're reviewing some of our practices at work - and the specific thing we're looking at right now is the best method for doing Comms with Flex based clients, and .NET web services.
Our typical approach is to first model the interactions based on requirements, mock up some XML messages and sanity check them, turn those into XSDs, and finally build classes on each end that serialize to/from XML.
This works okay until we hit the database and then things like Join Tables start mucking up all that work we did simplifying down the client end.
We've tried solving this with LINQ to SQL and other OR mappers, but none of them really solve the problem without introducing more serious issues.
So, the question really is: Without treating a RDBMS as simply an object store, is there a better way to handle complex data requirements without writing a huge amount of conversion code?
I suppose the magic bullet I'm looking for is something that knows what a Join table is and how to deal with it, and still allows me to generate 'nice' serialized XML for Flex and retains strong .NET Typing.
Bonus points if it can analyse the SQL required for each method, generate stored procedures and use them. But that's probably asking too much :)
Note re "Join Tables":
Our definition of this is where you have a table which has two or more foreign keys as it's own primary key.
eg:
Photos (PK PhotoID) <- PhotoTags (PK FK PhotoID, PK FK TagID) -> Tags (PK TagID)
When a Flex client gets a Photo object, it might also get a List of all Tags.
So, that might look like so:
<photo id="3">
<tags>
<tag name="park" />
<tag name="sydney" />
</tags>
</photo>
Instead, the OR tools I've seen give us:
<photo id="3">
<phototags>
<phototag>
<tag name="park" />
</phototag>
<phototag>
<tag name="sydney" />
</phototag>
</phototags>
</photo>
|
[
"While I am sure there are other ways to solve your issue, is there a specific reason that you want to communicate with .net via web services?\nOne very clean solution is to use something like WebOrb (http://www.themidnightcoders.com/weborb/dotnet/)\nThey have community as well as commercial offerings and handle the issues you are describing quite elegantly.\n-mw\n"
] |
[
1
] |
[] |
[] |
[
".net",
"apache_flex",
"orm"
] |
stackoverflow_0000064177_.net_apache_flex_orm.txt
|
Q:
Does the exe you get out of obfuscation programs vary in speed?
There are a number of obfuscation programs out there for .Net and I've tried one; my exe seems much slower when obfuscated. Do all obfuscation programs have the same effect or have I chosen a bad one? I'm hoping some are better than others; if you know of a fast one let me know.
A:
Disclaimer: my employer is PreEmptive Solutions, the creator of the Dotfuscator .NET obfuscator.
It can depend on the obfuscator you use and the options you enable in it. I am going to speak from experience with Dotfuscator.
There can be load time and memory footprint improvements of obfuscated assemblies if you use renaming and removal, partly because all/most of your methods, fields, etc. are renamed to much smaller names (for example "ThisVeryLongMethodName(SomeVeryLongParameterName)" becomes "a(a)"), so you gain some benefit in assembly size and usually with load time. In addition, with removal you remove methods, etc. that are never called and again decrease the size of your binaries.
String encryption can adversely affect runtime performance to a slight degree as the strings must be converted back to human readable text at runtime.
If you use any other systems/techniques like Microsoft SLP's secure vm technology to render methods unreadable that will also incur a runtime performance penalty due to the secure vm.
Other obfuscation tools that do not produce managed code assemblies as an output but instead rely on a native code loader to "preprocess" their output can also incur a runtime performance hit (especially at load time).
A:
Obfuscation shouldn't change the runtime performance of your code. If it is then you've got a bad obfuscator that's doing much more than just obfuscating. All obfuscation should do is make your IL hard to read.
A:
There are different obfuscation methods that tools can use. There are the simple rename methods that should not affect performance in any way. Other methods might change the flow of the code. That could have a negative impact on performance. You might want to check out other obfuscators and try out different settings.
|
Does the exe you get out of obfuscation programs vary in speed?
|
There are a number of obfuscation programs out there for .Net and I've tried one, my exe seems much slower when obfuscated. Do all obfuscation programs have the same effect or have I chosen a bad one? I'm hoping some are better than others, if you know of a fast one let me know.
|
[
"Disclaimer: my employer is PreEmptive Solutions, the creator of the Dotfuscator .NET obfuscator.\nIt can depend on the obfuscator you use and the options you enable in it. I am going to speak from experience with Dotfuscator.\nThere can be load time and memory footprint improvements of obfuscated assemblies if you use renaming and removal, partly because all/most of your methods, fields, etc are renamed to much smaller names (for example \"ThisVeryLongMethodName(SomeVeryLongParameterName)\" becomes \"a(a)\" so you gain some benefit in assembly size and usually with load time. In addition with removal you remove methods, etc. that are never call and again decrease the size of your binaries. \nString encryption can adversely affect runtime performance to a slight degree as the strings must be converted back to human readable text at runtime.\nIf you use any other systems/techniques like Microsoft SLP's secure vm technology to render methods unreadable that will also incur a runtime performance penalty due to the secure vm.\nOther obfuscation tools that do not produce managed code assemblies as an output but instead rely on a native code loader to \"preprocess\" their output can also incur an runtime performance hit (especially at load time).\n",
"Obfuscation shouldn't change the runtime performance of your code. If it is then you've got a bad obfuscator that's doing much more than just obfuscating. All obfuscation should do is make your IL hard to read.\n",
"There are different obfuscation methods that tools can use. There are the simple rename methods that should not affect performance in any way. Other methods might change the flow of the code. That could have a negative impact on performance. You might want to check out other obfuscators and try out different settings.\n"
] |
[
8,
5,
2
] |
[] |
[] |
[
".net",
"obfuscation"
] |
stackoverflow_0000064541_.net_obfuscation.txt
|
Q:
How do you reliably get an IP address via DHCP?
I work with embedded Linux systems that sometimes want to get their IP address from a DHCP server. The DHCP client we use (dhcpcd) has limited retry logic. If our device starts up without any DHCP server available and times out, dhcpcd will exit and the device will never get an IP address until it's rebooted with a DHCP server visible/connected. I can't be the only one that has this problem. The problem doesn't even seem to be specific to embedded systems (though it's worse there). How do you handle this? Is there a more robust client available?
A:
The reference dhclient from the ISC should run forever in the default configuration, and it should acquire a lease later if it doesn't get one at startup.
I am using the out of the box dhcp client on FreeBSD, which is derived from OpenBSD's and based on the ISC's dhclient, and this is the out of the box behavior.
See http://www.isc.org/index.pl?/sw/dhcp/
A:
You have several options:
While you don't have an IP address, restart dhcpcd to get more retries.
Have a backup static IP address. This was quite successful in the embedded devices I've made.
Use auto-IP as a backup. Windows does this.
A:
Add to rc.local a check to see if an IP has been obtained. If not, set up an 'at' job in the near future to attempt again. Continue scheduling 'at' jobs until an IP is obtained.
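For illustration, a minimal sketch of that approach as a self-rescheduling script (the interface name, script path, dhcpcd location, and five-minute interval are assumptions, not part of the original answer):
#!/bin/sh
# /etc/dhcp-retry.sh -- illustrative only: keep retrying DHCP until an address is bound.
IFACE=eth0
if /sbin/ifconfig "$IFACE" | grep -q "inet addr:"; then
    exit 0                                  # already have an address, nothing to do
fi
/sbin/dhcpcd "$IFACE"                       # one more attempt
# Re-queue ourselves in case this attempt also times out.
echo "/etc/dhcp-retry.sh" | at now + 5 minutes
Call it once from rc.local after the normal network start, and it will keep rescheduling itself until a lease is obtained.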
|
How do you reliably get an IP address via DHCP?
|
I work with embedded Linux systems that sometimes want to get their IP address from a DHCP server. The DHCP client we use (dhcpcd) has limited retry logic. If our device starts up without any DHCP server available and times out, dhcpcd will exit and the device will never get an IP address until it's rebooted with a DHCP server visible/connected. I can't be the only one that has this problem. The problem doesn't even seem to be specific to embedded systems (though it's worse there). How do you handle this? Is there a more robust client available?
|
[
"The reference dhclient from the ISC should run forever in the default configuration, and it should acquire a lease later if it doesn't get one at startup.\nI am using the out of the box dhcp client on FreeBSD, which is derived from OpenBSD's and based on the ISC's dhclient, and this is the out of the box behavior.\nSee http://www.isc.org/index.pl?/sw/dhcp/\n",
"You have several options:\n\nWhile you don't have an IP address, restart dhcpcd to get more retries.\nHave a backup static IP address. This was quite successful in the embedded devices I've made.\nUse auto-IP as a backup. Windows does this.\n\n",
"Add to rc.local a check to see if an IP has been obtained. If no setup an 'at' job in the near future to attempt again. Continue scheduling 'at' jobs until an IP is obtained. \n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"dhcp",
"linux"
] |
stackoverflow_0000063690_dhcp_linux.txt
|
Q:
Territory Map Generation
Is there a trivial, or at least moderately straight-forward way to generate territory maps (e.g. Risk)?
I have looked in the past and the best I could find were vague references to Voronoi diagrams. An example of a Voronoi diagram is this:
[Voronoi diagram image omitted]
These hold promise, but I guess I haven't seen any straight-forward ways of rendering these, let alone holding them in some form of data structure to treat each territory as an object.
Another approach that holds promise is flood fill, but again I'm unsure on the best way to start with this approach.
Any advice would be much appreciated.
A:
The best reference I've seen on them is Computational Geometry: Algorithms and Applications, which covers Voronoi diagrams, Delaunay triangulations (similar to Voronoi diagrams and each can be converted into the other), and other similar data structures.
They talk about all the data structures you need but they don't give you the code necessary to implement it (which may be a good exercise). In terms of code, an Amazon search shows the book Computational Geometry in C, which presumably comes with the code (although since you're stuck in C, you might as well get the other one and implement it in whatever language you want). I also don't have any experience with this book, only the first.
Sorry to have only books to recommend! The only decent online resources I've seen on them are the two Wikipedia articles, which don't really tell you implementation details. This link may be helpful though.
A:
Why not use a map of primitives (triangles, squares), distribute the starting points for the countries (the "capitals"), and then randomly expand the countries by adding a random adjacent primitive to each country?
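For illustration, a rough C# sketch of that idea on a square grid (grid size, country count, and the data layout are all assumptions; requires System and System.Collections.Generic):
static int[,] GenerateTerritories(int width, int height, int countries, Random rand)
{
    // 0 = unclaimed, 1..countries = country id. Capital collisions are ignored for brevity.
    int[,] owner = new int[width, height];
    var frontier = new List<int[]>();            // claimed cells that may still grow

    for (int c = 1; c <= countries; c++)         // drop the capitals at random spots
    {
        int x = rand.Next(width), y = rand.Next(height);
        owner[x, y] = c;
        frontier.Add(new[] { x, y });
    }

    int[][] dirs = { new[] { 1, 0 }, new[] { -1, 0 }, new[] { 0, 1 }, new[] { 0, -1 } };
    while (frontier.Count > 0)
    {
        int i = rand.Next(frontier.Count);       // pick a random claimed border cell
        int[] cell = frontier[i];
        var free = new List<int[]>();
        foreach (var d in dirs)
        {
            int nx = cell[0] + d[0], ny = cell[1] + d[1];
            if (nx >= 0 && nx < width && ny >= 0 && ny < height && owner[nx, ny] == 0)
                free.Add(new[] { nx, ny });
        }
        if (free.Count == 0) { frontier.RemoveAt(i); continue; }   // surrounded, stop growing here
        int[] pick = free[rand.Next(free.Count)];
        owner[pick[0], pick[1]] = owner[cell[0], cell[1]];          // claim it for the same country
        frontier.Add(pick);
    }
    return owner;
}
The result can then be rendered by colouring each cell according to owner[x, y], or turned into per-territory objects by grouping cells that share the same id.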
A:
CGAL is a C++ library that has data structures and algorithms used in Computational Geometry.
A:
I'm actually dealing with exactly this kind of stuff for my company's video game. The most useful info I've found are at these two links:
Paul Bourke's page at UWA, with his 1989 paper on Delaunay and a series of implementation links.
A great explanation of the pseudocode and a visual of doing Delaunay at codeGuru.com.
In terms of rendering these - most of the implementations I've found will need massaging to get what you'd want, but since using this for a game map would lead to a number of points plus lines between them, it could be a very simple matter to draw this out to the screen.
|
Territory Map Generation
|
Is there a trivial, or at least moderately straight-forward way to generate territory maps (e.g. Risk)?
I have looked in the past and the best I could find were vague references to Voronoi diagrams. An example of a Voronoi diagram is this:
[Voronoi diagram image omitted]
These hold promise, but I guess I haven't seen any straight-forward ways of rendering these, let alone holding them in some form of data structure to treat each territory as an object.
Another approach that holds promise is flood fill, but again I'm unsure on the best way to start with this approach.
Any advice would be much appreciated.
|
[
"The best reference I've seen on them is Computational Geometry: Algorithms and Applications, which covers Voronoi diagrams, Delaunay triangulations (similar to Voronoi diagrams and each can be converted into the other), and other similar data structures. \nThey talk about all the data structures you need but they don't give you the code necessary to implement it (which may be a good exercise). In terms of code, an Amazon search shows the book Computational Geometry in C, which presumably comes with the code (although since you're stuck in C, you'd mind as well get the other one and implement it in whatever language you want). I also don't have any experience with this book, only the first.\nSorry to have only books to recommend! The only decent online resource I've seen on them are the two Wikipedia articles, which doesn't really tell you implementation details. This link may be helpful though.\n",
"Why not use a map of primitives (triangles, squares), distribute the starting points for the countries (the \"capitals\"), and then randomly expanding the countries by adding a random adjacent primitive to the country.\n",
"CGAL is a C++ library that has data structures and algorithms used in Computational Geometry.\n",
"I'm actually dealing with exactly this kind of stuff for my company's video game. The most useful info I've found are at these two links:\nPaul Bourke's page at UWA, with his 1989 paper on Delaunay and a series of implementation links.\nA great explanation of the psudocode and a visual of doing Delaunay at codeGuru.com.\nIn terms of rendering these - most of the implementations I've found will need massaging to get what you'd want, but since using this for a game map would lead to a number of points plus lines between them, it could be a very simple matter to do draw this out to screen.\n"
] |
[
7,
3,
2,
2
] |
[] |
[] |
[
"language_agnostic",
"maps",
"voronoi"
] |
stackoverflow_0000004225_language_agnostic_maps_voronoi.txt
|
Q:
Can the Weblogic default handler display the list of contexts?
In Jetty, if there is no deployment at '/' then the DefaultHandler displays a list of known contexts. This is very useful during development.
Is it possible to configure BEA Weblogic to provide a similar convenience?
A:
You could write a small webapp that hooks up to the Weblogic JMX and displays the list of deployed webapps and deploy that one at '/'.
|
Can the Weblogic default handler display the list of contexts?
|
In Jetty, if there is no deployment at '/' then the DefaultHandler displays a list of known contexts. This is very useful during development.
Is it possible to configure BEA Weblogic to provide a similar convenience?
|
[
"You could write a small webapp that hooks up to the Weblogic JMX and displays the list of deployed webapps and deploy that one at '/'.\n"
] |
[
1
] |
[] |
[] |
[
"jakarta_ee",
"java",
"weblogic"
] |
stackoverflow_0000051157_jakarta_ee_java_weblogic.txt
|
Q:
Ajax Control Toolkit Calendar Control CSS
I am using the AJAX Control Toolkit Popup Calendar Control in a datagrid. When it is in the footer it looks fine. When it is in the edit side of the datagrid it is inheriting the style from the datagrid and looks completely different (i.e. too big).
Is there a way to alter the CSS so that it does not inherit the style from the datagrid?
A:
Open the page in Firefox. However, first download the Firebug extension. Then right-click on the offending element and choose Inspect Element.
Firebug is awesome because it lets you navigate the CSS of any element. You have two options here:
1) Assign the topmost element a CSS class and work it that way.
or
If that's not an option, you can use firebug to get the xpath to the offending element.
Xpaths look like body/table/tr/td/table/tr[2]
what you want to do with that in css is
body table tr td table tr {
/*css goes here */
}
Option 1 is definitely the better pick. Option 2 is more of a dirty way of getting things done when things like ASP.NET don't give us the fine-grained control we want.
It would be really awesome if you used a pastebin and posted the link to your rendered page's html.
A:
It uses the style from the grid because it's in it. If you want to change its style, change the style of the control. What do you want it to do?
A:
Here is the pastebin link:
http://pastebin.com/m17d99f8a
I am using a stylesheet for the grid that I got from Matt Berseth's blog located here:
http://mattberseth.com/blog/2007/10/a_yui_datatable_styled_gridvie.html
I am using a similar stylesheet for the calendar that I cannot find the link for anymore.
|
Ajax Control Toolkit Calendar Control CSS
|
I am using the AJAX Control Toolkit Popup Calendar Control in a datagrid. When it is in the footer it looks fine. When it is in the edit side of the datagrid it is inheriting the style from the datagrid and looks completely different (i.e. too big).
Is there a way to alter the CSS so that it does not inherit the style from the datagrid?
|
[
"Open the page in firefox. However, first, download the firebug extension. Then, right click on the offending version and go down to inspect element.\nFirebug is awesome because it let's you navigate the css of any element. You have two options here:\n1) Assign the topmost element an css class and work it that way. \nor\nIf that's not an option, you can use firebug to get the xpath to the offending element. \nXpaths look like body/table/tr/td/table/tr[2]\nwhat you want to do with that in css is\nbody table tr td table tr {\n /*css goes here */\n\n}\n\nOption 1 is definitely the better pick. Option 2 is more of a dirty way of getting things\ndone when things like asp.net doesn't let us have the fine grain of control we want.\nIt would be really awesome if you used a pastebin and posted the link to your rendered page's html. \n",
"It uses the style from the grid, because it's in it. If you want to change it's style, change the style of the control. What do you want it to do?\n",
"Here is the pastebin link:\nhttp://pastebin.com/m17d99f8a\nI am using a stylesheet for the grid that I got from Matt Berseth's blog located here:\nhttp://mattberseth.com/blog/2007/10/a_yui_datatable_styled_gridvie.html\nI am using a similar stylesheet for the calendar that I cannot find the link for anymore.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"css"
] |
stackoverflow_0000064193_css.txt
|
Q:
MySQL Interview Questions
I've been asked to screen some candidates for a MySQL DBA / Developer position for a role that requires an enterprise level skill set.
I myself am a SQL Server person so I know what I would be looking for from that point of view with regards to scalability / design etc but is there anything specific I should be asking with regards to MySQL?
I would ideally like to ask them about enterprise level features of MySQL that they would typically only use when working on a big database. Need to separate out the enterprise developers from the home / small website kind of guys.
Thanks.
A:
Although SQL Server and MySQL are both RDBMSs, MySQL has many unique features that can illustrate the difference between novice and expert.
Your first step should be to ensure that the candidate is comfortable using the command line, not just GUI tools such as phpMyAdmin. During the interview, try asking the candidate to write MySQL code to create a database table or add a new index. These are very basic queries, but exactly the type that GUI tools prevent novices from mastering. You can double-check the answers with someone who is more familiar with MySQL.
Can the candidate demonstrate knowledge of how JOINs work? For example, try asking the candidate to construct a query that returns all rows from Table One where no matching entries exist in Table Two. The answer should involve a LEFT JOIN.
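For illustration, a minimal sketch of the kind of query to look for (table and column names are invented here):
SELECT t1.*
FROM table_one t1
LEFT JOIN table_two t2 ON t2.table_one_id = t1.id
WHERE t2.id IS NULL;   -- keeps only rows with no match in table_two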
Ask the candidate to discuss backup strategies, and the various strengths and weaknesses of each. The candidate should know that backing up the database files directly is not an effective strategy unless all the tables are MyISAM. The candidate should definitely mention mysqldump as a cornerstone for backups. More sophisticated backup solutions include ibbackup/innobackup and LVM snapshots. Ideally, the candidate should also discuss how backups can affect performance (a common solution is to use a slave server for taking backups).
Does the candidate have experience with replication? What are some of the common replication configurations and the various advantages of each? The most common setup is master-slave, allowing the application to offload SELECT queries to slave servers, along with taking backups using a slave to prevent performance issues on the master. Another common setup is master-master, the main benefit being the ability to make schema changes without impacting performance. Make sure the candidate discusses common issues such as cloning a slave server (mysqldump + notation of the binlog position), load distribution using a load balancer or MySQL proxy, resolving slave lag by breaking larger queries into chunks, and how to promote a slave to become a new master.
How would the candidate troubleshoot performance issues? Do they have sufficient knowledge of the underlying operating system and hardware to diagnose whether a bottleneck is CPU bound, IO bound, or network bound? Can they demonstrate how to use EXPLAIN to discover indexing problems? Do they mention the slow query log or configuration options such as the key buffer, tmp table size, innodb buffer pool size, etc?
Does the candidate appreciate the subtleties of each storage engine? (MyISAM, InnoDB, and MEMORY are the main ones). Do they understand how each storage engine optimizes queries, and how locking is handled? At the least, the candidate should mention that MyISAM issues a table-level lock whereas InnoDB uses row-level locking.
What is the safest way to make schema changes to a live database? The candidate should mention master-master replication, as well as avoiding the locking and performance issues of ALTER TABLE by creating a new table with the desired configuration and using mysqldump or INSERT INTO ... SELECT followed by RENAME TABLE.
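A rough sketch of that copy-and-rename pattern (table and column names are invented; the real statements depend on the schema change being made):
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN status TINYINT NOT NULL DEFAULT 0;  -- the desired change
INSERT INTO orders_new SELECT o.*, 0 FROM orders o;
RENAME TABLE orders TO orders_old, orders_new TO orders;              -- atomic swap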
Lastly, the only true measurement of a pro is experience. If the candidate cannot point to specific experience managing large data sets in a high availability environment, they might not be able to back up any knowledge they possess on a purely intellectual level.
A:
I'd ask about the differences between the various storage engines, their perceived benefits and drawbacks.
Definitely cover replication, and dig into the drawbacks of replication, especially when using tables with auto increment keys.
If they are still with you, then ask about replication lag, its effects, and standard patterns for monitoring it.
A:
I think it would depend on the database type: transactional or data warehouse?
Anyhow, for all types I'd ask about MySQL-specific replication and clustering, performance tuning, and monitoring concepts.
|
MySQL Interview Questions
|
I've been asked to screen some candidates for a MySQL DBA / Developer position for a role that requires an enterprise level skill set.
I myself am a SQL Server person so I know what I would be looking for from that point of view with regards to scalability / design etc but is there anything specific I should be asking with regards to MySQL?
I would ideally like to ask them about enterprise level features of MySQL that they would typically only use when working on a big database. Need to separate out the enterprise developers from the home / small website kind of guys.
Thanks.
|
[
"Although SQL Server and MySQL are both RDBMs, MySQL has many unique features that can illustrate the difference between novice and expert.\nYour first step should be to ensure that the candidate is comfortable using the command line, not just GUI tools such as phpMyAdmin. During the interview, try asking the candidate to write MySQL code to create a database table or add a new index. These are very basic queries, but exactly the type that GUI tools prevent novices from mastering. You can double-check the answers with someone who is more familiar with MySQL.\nCan the candidate demonstrate knowledge of how JOINs work? For example, try asking the candidate to construct a query that returns all rows from Table One where no matching entries exist in Table Two. The answer should involve a LEFT JOIN.\nAsk the candidate to discuss backup strategies, and the various strengths and weaknesses of each. The candidate should know that backing up the database files directly is not an effective strategy unless all the tables are MyISAM. The candidate should definitely mention mysqldump as a cornerstone for backups. More sophisticated backup solutions include ibbackup/innobackup and LVM snapshots. Ideally, the candidate should also discuss how backups can affect performance (a common solution is to use a slave server for taking backups).\nDoes the candidate have experience with replication? What are some of the common replication configurations and the various advantages of each? The most common setup is master-slave, allowing the application to offload SELECT queries to slave servers, along with taking backups using a slave to prevent performance issues on the master. Another common setup is master-master, the main benefit being the ability to make schema changes without impacting performance. Make sure the candidate discusses common issues such as cloning a slave server (mysqldump + notation of the binlog position), load distribution using a load balancer or MySQL proxy, resolving slave lag by breaking larger queries into chunks, and how to promote a slave to become a new master.\nHow would the candidate troubleshoot performance issues? Do they have sufficient knowledge of the underlying operating system and hardware to diagnose whether a bottleneck is CPU bound, IO bound, or network bound? Can they demonstrate how to use EXPLAIN to discover indexing problems? Do they mention the slow query log or configuration options such as the key buffer, tmp table size, innodb buffer pool size, etc?\nDoes the candidate appreciate the subtleties of each storage engine? (MyISAM, InnoDB, and MEMORY are the main ones). Do they understand how each storage engine optimizes queries, and how locking is handled? At the least, the candidate should mention that MyISAM issues a table-level lock whereas InnODB uses row-level locking.\nWhat is the safest way to make schema changes to a live database? The candidate should mention master-master replication, as well as avoiding the locking and performance issues of ALTER TABLE by creating a new table with the desired configuration and using mysqldump or INSERT INTO ... SELECT followed by RENAME TABLE.\nLastly, the only true measurement of a pro is experience. If the candidate cannot point to specific experience managing large data sets in a high availability environment, they might not be able to back up any knowledge they possess on a purely intellectual level.\n",
"I'd ask about the differences between the the various storage engines, their perceived benefits and drawbacks. \nDefiantly cover replication, and dig into the drawbacks of replication, esp when using tables with auto increment keys.\nIf they are still with you then ask about replication lag, it's effects and standard patterns for monitoring it.\n",
"I think it would depend on the database type: transactional or data warehouse?\nAnyhow, for all types I'd ask about specific to MySQL replication and clustering, performance tuning and monitorization concepts.\n"
] |
[
39,
6,
2
] |
[] |
[] |
[
"mysql"
] |
stackoverflow_0000062069_mysql.txt
|
Q:
What do I need to manage XML files?
I believe I need a DTD to define the schema and an XSLT if I want to display it in a browser and have it look "pretty". But I'm not sure what else I would need to have a well-defined XML document that can be queried using XQuery and displayed in a web browser.
A:
Strictly speaking, you need nothing. XML, even without a schema definition, works.
A schema definition (in XSD, RelaxNG or DTD) helps various tools that work with the XML, because they can verify that the structure of the XML conforms to what you want.
An XSLT translation to HTML is nice if the XML contains information you'll want to look at with a browser. It's far from necessary, though.
To query the XML with XPath or XQuery, you need an XPath or XQuery processor.
A:
For an XML document to be queryable using XQuery you do not have to define a DTD or XSD. The purpose of a DTD or XSD is to define the strict structure of an XML document and to allow validation before usage.
Modern browsers interpret XML files very nicely and show a DOM tree. If enhanced formatting of XML for browser display is necessary, you have to create an XSLT transformation file and then add a directive to the original XML document pointing to the XSLT file. The browser picks up that directive and uses the built-in XSLT processor to obtain the output that is then interpreted by the browser.
info.xml
<?xml version="1.0" encoding="iso-8859-1"?>
<?xml-stylesheet type="text/xsl" href="info.xslt"?>
<info>
<appName>My App</appName>
<version>1.0.129</version>
<buildTime>10-09-2008 12:44:03</buildTime>
</info>
info.xslt
<?xml version="1.0" encoding="iso-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<head>
<title>Application</title>
<style type="text/css">
body { font-family: Lucida Console; }
#outer { text-align: left; }
#name {
font-weight: bold;
font-size: 1.2em;
}
#logo {
float: left;
padding-right: 20px;
padding-bottom: 200px;
}
</style>
</head>
<body>
<xsl:apply-templates select="info" />
</body>
</html>
</xsl:template>
<xsl:template match="info">
<img id="logo" src="image.png" />
<div id="outer">
<div id="name">
<xsl:value-of select="appName"/>
</div>
<div id="version">
<xsl:value-of select="version"/>
</div>
<div id="date">
<xsl:value-of select="buildTime"/>
</div>
</div>
</xsl:template>
</xsl:stylesheet>
|
What do I need to manage XML files?
|
I believe I need a DTD to define the schema and an XSLT if I want to display it in a browser and have it look "pretty". But I'm not sure what else I would need to have a well-defined XML document that can be queried using XQuery and displayed in a web browser.
|
[
"Strictly speaking, you need nothing. XML, even without a schema definition, works.\nA schema definition (in XSD, RelaxNG or DTD) helps various tools that work with the XML, because they can verify that the structure of the XML conforms to what you want.\nAn XSLT translation to HTML is nice if the XML contains information you'll want to look at with a browser. It's far from necessary, though.\nTo query the XML with XPath or XQuery, you need an XPath or XQuery processor.\n",
"For a XML document to be queryable using XQquery you do not have to define a DTD or XSD. The purpose of DTD or XSD is to define the strict structure of a XML document and to allow validation before usage.\nModern browsers interpret XML files very nicely and show a DOM tree. If enhanced formatting of XML for browser display is necessary you have to create a XSLT transformation file and then add a directive to the original XML document pointing to the XSLT file. The browser picks that directive and uses the built-in XSLT processor to obtain the output that is then interpreted by the browser.\ninfo.xml\n<?xml version=\"1.0\" encoding=\"iso-8859-1\"?>\n<?xml-stylesheet type=\"text/xsl\" href=\"info.xslt\"?>\n<info>\n <appName>My App</appName>\n <version>1.0.129</version>\n <buildTime>10-09-2008 12:44:03</buildTime>\n</info>\n\ninfo.xslt\n<?xml version=\"1.0\" encoding=\"iso-8859-1\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n <xsl:template match=\"/\">\n <html>\n <head>\n <title>Application</title>\n <style type=\"text/css\">\n body { font-family: Lucida Console; }\n #outer { text-align: left; }\n #name {\n font-weight: bold;\n font-size: 1.2em;\n }\n #logo {\n float: left;\n padding-right: 20px;\n padding-bottom: 200px;\n }\n </style>\n </head>\n <body>\n <xsl:apply-templates select=\"info\" />\n </body>\n </html>\n </xsl:template>\n\n <xsl:template match=\"info\">\n <img id=\"logo\" src=\"image.png\" />\n <div id=\"outer\">\n <div id=\"name\">\n <xsl:value-of select=\"appName\"/>\n </div>\n <div id=\"version\">\n <xsl:value-of select=\"version\"/>\n </div>\n <div id=\"date\">\n <xsl:value-of select=\"buildTime\"/>\n </div>\n </div>\n </xsl:template>\n</xsl:stylesheet>\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"browser",
"dtd",
"xml",
"xquery",
"xslt"
] |
stackoverflow_0000064841_browser_dtd_xml_xquery_xslt.txt
|
Q:
What's the best way to load highly re-used data in a .net web application
Let's say I have a list of categories for navigation on a web app. Rather than selecting from the database for every user, should I add a function call in the application_onStart of the global.asax to fetch that data into an array or collection that is re-used over and over. If my data does not change at all - (Edit - very often), would this be the best way?
A:
Premature optimization is evil. That being a given, if you are having performance problems in your application and you have "static" information that you want to display to your users you can definitely load that data once into an array and store it in the Application Object. You want to be careful and balance memory usage with optimization.
The problem you run into then is changing the database stored info and not having it update the cached version. You would probably want to have some kind of last changed date in the database that you store in the state along with the cached data. That way you can query for the greatest changed time and compare it. If it's newer than your cached date then you dump it and reload.
A:
You can store the list items in the Application object. You are right about the application_onStart(); simply call a method that will read your database and load the data into the Application object.
In Global.asax
public class Global : System.Web.HttpApplication
{
// The key to use in the rest of the web site to retrieve the list
public const string ListItemKey = "MyListItemKey";
// a class to hold your actual values. This can be use with databinding
public class NameValuePair
{
public string Name{get;set;}
public string Value{get;set;}
public NameValuePair(string Name, string Value)
{
this.Name = Name;
this.Value = Value;
}
}
protected void Application_Start(object sender, EventArgs e)
{
InitializeApplicationVariables();
}
protected void InitializeApplicationVariables()
{
List<NameValuePair> listItems = new List<NameValuePair>();
// replace the following code with your data access code and fill in the collection
listItems.Add( new NameValuePair("Item1", "1"));
listItems.Add( new NameValuePair("Item2", "2"));
listItems.Add( new NameValuePair("Item3", "3"));
// load it in the application object
Application[ListItemKey] = listItems;
}
}
Now you can access your list in the rest of the project. For example, in default.aspx to load the values in a DropDownList:
<asp:DropDownList runat="server" ID="ddList" DataTextField="Name" DataValueField="Value"></asp:DropDownList>
And in the code-behind file:
protected override void OnPreInit(EventArgs e)
{
ddList.DataSource = Application[Global.ListItemKey];
ddList.DataBind();
base.OnPreInit(e);
}
A:
If it never changes, it probably doesn't need to be in the database.
If there isn't much data, you might put it in the web.config, or as an Enum in your code.
A:
Fetching everything may be expensive. Try lazy initialization: fetch only the requested data and then store it in a cache variable.
A:
In an application variable.
Remember that an application variable can contain an object in .Net, so you can instantiate the object in the global.asax and then use it directly in the code.
Since application variables are in-memory they are very quick (vs having to call a database)
For example:
// Create and load the profile object
x_siteprofile thisprofile = new x_siteprofile(Server.MapPath(String.Concat(config.Path, "templates/")));
Application.Add("SiteProfileX", thisprofile);
A:
I would store the data in the Application Cache (Cache object). And I wouldn't preload it, I would load it the first time it is requested. What is nice about the Cache is that ASP.NET will manage it including giving you options for expiring the cache entry after file changes, a time period, etc. And since the items are kept in memory, the objects don't get serialized/deserialized so usage is very fast.
Usage is straightforward. There are Get and Add methods on the Cache object to retrieve and add items to the cache respectively.
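For illustration, a minimal sketch of that load-on-first-request usage (the cache key, expiration, element type, and loading method are placeholders, not a prescribed pattern):
// Requires System.Web and System.Web.Caching.
public static List<string> GetCategories()
{
    var categories = HttpRuntime.Cache["NavCategories"] as List<string>;
    if (categories == null)
    {
        categories = LoadCategoriesFromDatabase();   // placeholder for your data access code
        // Keep the entry for 30 minutes; ASP.NET evicts it automatically after that.
        HttpRuntime.Cache.Insert("NavCategories", categories, null,
            DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
    }
    return categories;
}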
A:
I use a private static collection with a public static property that either returns it or loads it from the database.
Additionally you can add a static datetime that gets set when it gets loaded and if you call for it, past a certain amount of time, clear the static collection and requery it.
A:
Caching is the way to go. And if you're into design patterns, take a look at the singleton.
Overall, however, I'm not sure I'd worry about it until you notice performance degradation.
|
What's the best way to load highly re-used data in a .net web application
|
Let's say I have a list of categories for navigation on a web app. Rather than selecting from the database for every user, should I add a function call in the application_onStart of the global.asax to fetch that data into an array or collection that is re-used over and over. If my data does not change at all - (Edit - very often), would this be the best way?
|
[
"Premature optimization is evil. That being a given, if you are having performance problems in your application and you have \"static\" information that you want to display to your users you can definitely load that data once into an array and store it in the Application Object. You want to be careful and balance memory usage with optimization.\nThe problem you run into then is changing the database stored info and not having it update the cached version. You would probably want to have some kind of last changed date in the database that you store in the state along with the cached data. That way you can query for the greatest changed time and compare it. If it's newer than your cached date then you dump it and reload.\n",
"You can store the list items in the Application object. You are right about the application_onStart(), simply call a method that will read your database and load the data to the Application object.\nIn Global.asax\npublic class Global : System.Web.HttpApplication\n{\n // The key to use in the rest of the web site to retrieve the list\n public const string ListItemKey = \"MyListItemKey\";\n // a class to hold your actual values. This can be use with databinding\n public class NameValuePair\n { \n public string Name{get;set;} \n public string Value{get;set;}\n public NameValuePair(string Name, string Value)\n {\n this.Name = Name;\n this.Value = Value;\n }\n }\n\n protected void Application_Start(object sender, EventArgs e)\n {\n InitializeApplicationVariables();\n }\n\n\n protected void InitializeApplicationVariables()\n {\n List<NameValuePair> listItems = new List<NameValuePair>();\n // replace the following code with your data access code and fill in the collection\n listItems.Add( new NameValuePair(\"Item1\", \"1\"));\n listItems.Add( new NameValuePair(\"Item2\", \"2\"));\n listItems.Add( new NameValuePair(\"Item3\", \"3\"));\n // load it in the application object\n Application[ListItemKey] = listItems;\n }\n }\n\nNow you can access your list in the rest of the project. For example, in default.aspx to load the values in a DropDownList:\n<asp:DropDownList runat=\"server\" ID=\"ddList\" DataTextField=\"Name\" DataValueField=\"Value\"></asp:DropDownList>\n\nAnd in the code-behind file:\nprotected override void OnPreInit(EventArgs e)\n{\n ddList.DataSource = Application[Global.ListItemKey];\n ddList.DataBind();\n base.OnPreInit(e);\n}\n\n",
"If it never changes, it probably doesn't need to be in the database.\nIf there isn't much data, you might put it in the web.config, or as en Enum in your code.\n",
"Fetching all may be expensive. Try lazy init, fetch only request data and then store it in the cache variable.\n",
"In an application variable.\nRemember that an application variable can contain an object in .Net, so you can instantiate the object in the global.asax and then use it directly in the code.\nSince application variables are in-memory they are very quick (vs having to call a database)\nFor example:\n// Create and load the profile object\nx_siteprofile thisprofile = new x_siteprofile(Server.MapPath(String.Concat(config.Path, \"templates/\")));\nApplication.Add(\"SiteProfileX\", thisprofile);\n\n",
"I would store the data in the Application Cache (Cache object). And I wouldn't preload it, I would load it the first time it is requested. What is nice about the Cache is that ASP.NET will manage it including giving you options for expiring the cache entry after file changes, a time period, etc. And since the items are kept in memory, the objects don't get serialized/deserialized so usage is very fast.\nUsage is straightforward. There are Get and Add methods on the Cache object to retrieve and add items to the cache respectively.\n",
"I use a static collection as a private with a public static property that either loads or gets it from the database.\nAdditionally you can add a static datetime that gets set when it gets loaded and if you call for it, past a certain amount of time, clear the static collection and requery it.\n",
"Caching is the way to go. And if your into design patterns, take a look at the singleton.\nOverall however I'm not sure I'd be worried about it until you notice performance degradation.\n"
] |
[
2,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"application_start",
"asp.net",
"caching",
"global_asax"
] |
stackoverflow_0000064284_application_start_asp.net_caching_global_asax.txt
|
Q:
How to replace a character programmatically in Oracle 8.x series
Due to repetitive errors with one of our Java applications:
Engine engine_0: Error in application action.
org.xml.sax.SAXParseException: An invalid XML character (Unicode: 0x13)
was found in the element content of the document.
I need to "fix" some Unicode character in an Oracle database, ideally in a programmatic fashion. Once identified, what would be a simple way to "search and replace" it?
A:
Assuming the characters are present in a text field:
update TABLE set COLUMN=REPLACE(convert(varchar(5000), COLUMN), 'searchstring', 'replacestring')
(note that this will only work on a text field with no more than 5000 characters, for larger text fields increase the number in the query).
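For Oracle specifically, a hedged sketch of the same idea (assuming the offending character is 0x13 and the column is a VARCHAR2; the table and column names are placeholders):
UPDATE my_table
SET my_column = REPLACE(my_column, CHR(19))        -- CHR(19) = 0x13; omitting the third argument removes the character
WHERE INSTR(my_column, CHR(19)) > 0;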
|
How to replace a character programmatically in Oracle 8.x series
|
Due to repetitive errors with one of our Java applications:
Engine engine_0: Error in application action.
org.xml.sax.SAXParseException: An invalid XML character (Unicode: 0x13)
was found in the element content of the document.
I need to "fix" some Unicode character in an Oracle database, ideally in a programmatic fashion. Once identified, what would be a simple way to "search and replace" it?
|
[
"Assuming the characters are present in a text field:\nupdate TABLE set COLUMN=REPLACE(convert(varchar(5000), COLUMN), 'searchstring', 'replacestring')\n\n(note that this will only work on a text field with no more than 5000 characters, for larger text fields increase the number in the query).\n"
] |
[
4
] |
[] |
[] |
[
"exception",
"java",
"oracle",
"sql",
"unicode"
] |
stackoverflow_0000064875_exception_java_oracle_sql_unicode.txt
|
Q:
Firebird's SQL's Substring function not working
I created a view on a machine using the substring function from Firebird, and it worked. When I copied the database to a different machine, the view was broken. This is the way I used it:
SELECT SUBSTRING(field FROM 5 FOR 15) FROM table;
And this is the output on the machine that does not accept the function:
token unknown: FROM
Both computers have this configuration:
IB Expert version 2.5.0.42 to run the queries and deal with the database.
Firebird version 1.5 as server to database.
BDE Administration version 5.01 installed, with Interbase 4.0 drivers.
Any ideas about why it's behaving differently on these machines?
A:
Make sure the Firebird engine is 1.5 and there's no InterBase server running on this same box on the port where you expected Firebird 1.5.
Make sure you don't have any UDF called 'substring' registered inside this DB so that Firebird is expecting different parameters.
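One hedged way to check for such a UDF from isql is to query the system table that lists registered external functions:
SELECT rdb$function_name FROM rdb$functions;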
A:
Different engine versions?
Have you tried naming that expression in the result?
SELECT SUBSTRING(field FROM 5 FOR 15) AS x FROM table;
|
Firebird's SQL's Substring function not working
|
I created a view on a machine using the substring function from Firebird, and it worked. When I copied the database to a different machine, the view was broken. This is the way I used it:
SELECT SUBSTRING(field FROM 5 FOR 15) FROM table;
And this is the output on the machine that does not accept the function:
token unknown: FROM
Both computers have this configuration:
IB Expert version 2.5.0.42 to run the queries and deal with the database.
Firebird version 1.5 as server to database.
BDE Administration version 5.01 installed, with Interbase 4.0 drivers.
Any ideas about why it's behaving differently on these machines?
|
[
"\nMake sure Firebird engine is 1.5 and there's no InterBase server running on this same box on the port you expected Firebird 1.5.\n\nMake sure you don't have any UDF called 'substring' registered inside this DB so that Firebird is expecting different parameters.\n\n\n",
"Different engine versions?\nHave you tried naming that expression in the result?\nSELECT SUBSTRING(field FROM 5 FOR 15) AS x FROM table;\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"firebird",
"interbase",
"sql"
] |
stackoverflow_0000005142_firebird_interbase_sql.txt
|
Q:
Problem with .net app under linux, doesn't work from shell script
I'm working on a .net post-commit hook to feed data into OnTime via their Soap SDK. My hook works on Windows fine, but on our production RHEL4 subversion server, it won't work when called from a shell script.
#!/bin/sh
/usr/bin/mono $1/hooks/post-commit.exe "$@"
When I execute it with parameters from the command line, it works properly. When executed via the shell script, I get the following error: (looks like there is some problem with the process execution of SVN that I use to get the log data for the revision):
Unhandled Exception: System.InvalidOperationException: The process must exit before getting the requested information.
at System.Diagnostics.Process.get_ExitCode () [0x0003f] in /tmp/monobuild/build/BUILD/mono-1.9.1/mcs/class/System/System.Diagnostics/Process.cs:149
at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode ()
at SVNLib.SVN.Execute (System.String sCMD, System.String sParams, System.String sComment, System.String sUserPwd, SVNLib.SVNCallback callback) [0x00000]
at SVNLib.SVN.Log (System.String sUrl, Int32 nRevLow, Int32 nRevHigh, SVNLib.SVNCallback callback) [0x00000]
at SVNLib.SVN.LogAsString (System.String sUrl, Int32 nRevLow, Int32 nRevHigh) [0x00000]
at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000]
I've tried using mkbundle and mkbundle2 to make a stand alone that could be named post-commit, but I get a different error message:
Unhandled Exception: System.ArgumentNullException: Argument cannot be null.
Parameter name: Value cannot be null.
at System.Guid.CheckNull (System.Object o) [0x00000]
at System.Guid..ctor (System.String g) [0x00000]
at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000]
Any ideas why it might be failing from a shell script or what might be wrong with the bundled version?
Edit: @Herms, I've already tried it with an echo, and it looks right. As for the $1/hooks/post-commit.exe, I've tried the script with and without a full path to the .net assembly with the same results.
Edit: @Leon, I've tried both $1 $2 and "$@" with the same results. It is a subversion post commit hook, and it takes two parameters, so those need to be passed along to the .net assembly. The "$@" was what was recommended at the mono site for calling a .net assembly from a shell script. The shell script is executing the .net assembly and with the correct parameters, but it is throwing an exception that does not get thrown when run directly from the command line.
Edit: @Vinko, I don't see any differences in the environment other than things like BASH_LINENO and BASH_SOURCE
Edit: @Luke, I tried it, but that makes no difference either. I first noticed the problem when testing from TortoiseSVN on my machine (when it runs as a sub-process of the subversion daemon), but also found that I get the same results when executing the script from the hooks directory (i.e. ./post-commit REPOS REV, where post-commit is the above sh script. Doing mono post-commit.exe REPOS REV works fine. The main problem is that to execute, I need to have something of the name post-commit so that it will be called. But it does not work from a shell script, and as noted above, the mkbundle is not working with a different problem.
A:
It is normal for some processes to hang around for a while after they close their stdout (i.e. you get an end-of-file when reading from them). You need to call proc.WaitForExit() after reading all the data but before checking ExitCode.
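For illustration, a minimal sketch of that ordering, assuming a System.Diagnostics.Process named proc that was started with RedirectStandardOutput = true:
string line;
while ((line = proc.StandardOutput.ReadLine()) != null)
{
    // handle each line of output
}
proc.WaitForExit();            // make sure the child has fully exited
int exitCode = proc.ExitCode;  // only now is this safe to read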
A:
Just a random thought that might help with debugging. Try changing your shell script to:
#!/bin/sh
echo /usr/bin/mono $1/hooks/post-commit.exe "$@"
Check and see if the line it prints matches the command you're expecting it to run. It's possible your command line argument handling in the shell script isn't doing what you want it to do.
I don't know what your input to the script is expected to be, but the $1 before the path looks a bit out of place to me.
A:
Are you sure you want to do
/usr/bin/mono $1/hooks/post-commit.exe "$@"
$@ expands to ALL arguments. "$@" expands to all arguments, each quoted as a separate word (it is "$*" that joins them with spaces). I suspect your shell script is incorrect. You didn't state exactly what you wanted the script to do, so that limits the suggestions we can make.
A:
Compare the environment variables in your shell and from within the script.
A:
Try putting "cd $1/hooks/" before the line that runs mono. You may have some assemblies in that folder that are found when you run mono from that folder in the shell but are not being found when you run your script.
A:
After having verified that my code did work from the command line, I found that it was no longer working! I went looking into my .net code to see if anything made sense.
Here is what I had:
static public int Execute(string sCMD, string sParams, string sComment,
string sUserPwd, SVNCallback callback)
{
System.Diagnostics.Process proc = new System.Diagnostics.Process();
proc.EnableRaisingEvents = false;
proc.StartInfo.RedirectStandardOutput = true;
proc.StartInfo.CreateNoWindow = true;
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.Verb = "open";
proc.StartInfo.FileName = "svn";
proc.StartInfo.Arguments = Cmd(sCMD, sParams, sComment, UserPass());
proc.Start();
int nLine = 0;
string sLine = "";
while ((sLine = proc.StandardOutput.ReadLine()) != null)
{
++nLine;
if (callback != null)
{
callback.Invoke(nLine, sLine);
}
}
int errorCode = proc.ExitCode;
proc.Close();
return errorCode;
}
I changed this:
while (!proc.HasExited)
{
sLine = proc.StandardOutput.ReadLine();
if (sLine != null)
{
++nLine;
if (callback != null)
{
callback.Invoke(nLine, sLine);
}
}
}
int errorCode = proc.ExitCode;
It looks like the Process hangs around a bit longer after its output ends, and thus reading proc.ExitCode throws an error.
|
Problem with .net app under linux, doesn't work from shell script
|
I'm working on a .net post-commit hook to feed data into OnTime via their Soap SDK. My hook works on Windows fine, but on our production RHEL4 subversion server, it won't work when called from a shell script.
#!/bin/sh
/usr/bin/mono $1/hooks/post-commit.exe "$@"
When I execute it with parameters from the command line, it works properly. When executed via the shell script, I get the following error: (looks like there is some problem with the process execution of SVN that I use to get the log data for the revision):
Unhandled Exception: System.InvalidOperationException: The process must exit before getting the requested information.
at System.Diagnostics.Process.get_ExitCode () [0x0003f] in /tmp/monobuild/build/BUILD/mono-1.9.1/mcs/class/System/System.Diagnostics/Process.cs:149
at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode ()
at SVNLib.SVN.Execute (System.String sCMD, System.String sParams, System.String sComment, System.String sUserPwd, SVNLib.SVNCallback callback) [0x00000]
at SVNLib.SVN.Log (System.String sUrl, Int32 nRevLow, Int32 nRevHigh, SVNLib.SVNCallback callback) [0x00000]
at SVNLib.SVN.LogAsString (System.String sUrl, Int32 nRevLow, Int32 nRevHigh) [0x00000]
at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000]
I've tried using mkbundle and mkbundle2 to make a stand alone that could be named post-commit, but I get a different error message:
Unhandled Exception: System.ArgumentNullException: Argument cannot be null.
Parameter name: Value cannot be null.
at System.Guid.CheckNull (System.Object o) [0x00000]
at System.Guid..ctor (System.String g) [0x00000]
at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000]
Any ideas why it might be failing from a shell script or what might be wrong with the bundled version?
Edit: @Herms, I've already tried it with an echo, and it looks right. As for the $1/hooks/post-commit.exe, I've tried the script with and without a full path to the .net assembly with the same results.
Edit: @Leon, I've tried both $1 $2 and "$@" with the same results. It is a subversion post commit hook, and it takes two parameters, so those need to be passed along to the .net assembly. The "$@" was what was recommended at the mono site for calling a .net assembly from a shell script. The shell script is executing the .net assembly and with the correct parameters, but it is throwing an exception that does not get thrown when run directly from the command line.
Edit: @Vinko, I don't see any differences in the environment other than things like BASH_LINENO and BASH_SOURCE
Edit: @Luke, I tried it, but that makes no difference either. I first noticed the problem when testing from TortoiseSVN on my machine (when it runs as a sub-process of the subversion daemon), but also found that I get the same results when executing the script from the hooks directory (i.e. ./post-commit REPOS REV, where post-commit is the above sh script. Doing mono post-commit.exe REPOS REV works fine. The main problem is that to execute, I need to have something of the name post-commit so that it will be called. But it does not work from a shell script, and as noted above, the mkbundle is not working with a different problem.
|
[
"It is normal for some processes to hang around for a while after they close their stdout (ie. you get an end-of-file reading from them). You need to call proc.WaitForExit() after reading all the data but before checking ExitCode.\n",
"Just a random thought that might help with debugging. Try changing your shell script to:\n#!/bin/sh\necho /usr/bin/mono $1/hooks/post-commit.exe \"$@\"\n\nCheck and see if the line it prints matches the command you're expecting it to run. It's possible your command line argument handling in the shell script isn't doing what you want it to do.\nI don't know what your input to the script is expected to be, but the $1 before the path looks a bit out of place to me.\n",
"Are you sure you want to do\n/usr/bin/mono $1/hooks/post-commit.exe \"$@\"\n$@ expands to ALL arguments. \"$@\" expands to all arguments join by spaces. I suspect you shell script is incorrect. You didn't state exactly what you wanted the script to do, so that does limit our possibilities to make suggestions.\n",
"Compare the environment variables in your shell and from within the script.\n",
"Try putting \"cd $1/hooks/\" before the line that runs mono. You may have some assemblies in that folder that are found when you run mono from that folder in the shell but are not being found when you run your script.\n",
"After having verified that my code did work from the command line, I found that it was no longer working! I went looking into my .net code to see if anything made sense.\nHere is what I had:\n\n static public int Execute(string sCMD, string sParams, string sComment,\n string sUserPwd, SVNCallback callback)\n {\n System.Diagnostics.Process proc = new System.Diagnostics.Process();\n proc.EnableRaisingEvents = false;\n proc.StartInfo.RedirectStandardOutput = true;\n proc.StartInfo.CreateNoWindow = true;\n proc.StartInfo.UseShellExecute = false;\n proc.StartInfo.Verb = \"open\";\n proc.StartInfo.FileName = \"svn\";\n proc.StartInfo.Arguments = Cmd(sCMD, sParams, sComment, UserPass());\n proc.Start();\n int nLine = 0;\n string sLine = \"\";\n while ((sLine = proc.StandardOutput.ReadLine()) != null)\n {\n ++nLine;\n if (callback != null)\n {\n callback.Invoke(nLine, sLine);\n }\n }\n int errorCode = proc.ExitCode;\n proc.Close();\n return errorCode;\n }\n\nI changed this:\n\n while (!proc.HasExited)\n {\n sLine = proc.StandardOutput.ReadLine();\n if (sLine != null)\n {\n ++nLine;\n if (callback != null)\n {\n callback.Invoke(nLine, sLine);\n }\n }\n }\n int errorCode = proc.ExitCode;\n\nIt looks like the Process is hanging around a bit longer than I'm getting output, and thus the proc.ExitCode is throwing an error.\n"
] |
[
2,
0,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"linux",
"mono",
"svn"
] |
stackoverflow_0000054503_.net_linux_mono_svn.txt
|
Q:
Exceptions in Web Services
My group is developing a service-based (.NET WCF) application and we're trying to decide how to handle exceptions in our internal services. Should we throw exceptions? Return exceptions serialized as XML? Just return an error code?
Keep in mind that the user will never see these exceptions, it's only for other parts of the application.
A:
WCF uses SoapFaults as its native way of transmitting exceptions from either the service to the client, or the client to the service.
You can declare a custom SOAP fault using the FaultContract attribute in your contract interface:
For example:
[ServiceContract(Namespace="foobar")]
interface IContract
{
[OperationContract]
[FaultContract(typeof(CustomFault))]
void DoSomething();
}
[DataContract(Namespace="Foobar")]
class CustomFault
{
[DataMember]
public string error;
public CustomFault(string err)
{
error = err;
}
}
class myService : IContract
{
public void DoSomething()
{
throw new FaultException<CustomFault>( new CustomFault("Custom Exception!"));
}
}
A:
Well, why not just throw the standard SOAPExceptions? The problem with error codes and serialized XML is that they both require additional logic to recognize that an error did in fact happen. Such an approach is only useful if you have specialized logging or logic that needs to happen on the other side of the web service. Such an example would be returning a flag that says "it's ok to continue" with an error exception report.
Regardless of how you throw it, it won't make the job any easier, as the calling side still needs to recognize there was an exception and deal with it.
A:
I'm a bit confused, I'm not being flippant -- you say you want to return exceptions serialised as XML on the one hand and that the user will never see the exceptions on the other hand. Who will be seeing these exceptions?
Normally I'd say to use WCF fault contracts.
A:
Phil, different parts of the application call each other using WCF. By "return exceptions serialized as XML," I meant that the return value of the function would be an exception object. Success would be indicated by null.
I don't think that's the right option.
WCF fault contracts sound good, but I don't know anything about them. Checking google right now.
A:
I would avoid sending exceptions directly back to the client unless you are okay with that much detail being sent back.
I would recommend using WCF faults to transmit your error message and code (something that can be used to make a decision on the receiver to retry, error out, etc) depending if it is the sender or receiver at fault.
This can be done using FaultCode.CreateReceiverFaultCode and FaultCode.CreateSenderFaultCode.
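For illustration, a minimal sketch of throwing a sender fault with one of those codes (the subcode name and namespace are placeholders):
// Requires System.ServiceModel.
var code = FaultCode.CreateSenderFaultCode("InvalidOrderId", "http://example.com/faults");
throw new FaultException(new FaultReason("The order id was not recognized."), code);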
I'm in the process of going through this right now, but it seems I ran into a nasty snag in the SOAP 1.1 response generated for WCF faults. If you are interested, you can check out my question about it here:
.NET WCF faults generating incorrect SOAP 1.1 faultcode values
|
Exceptions in Web Services
|
My group is developing a service-based (.NET WCF) application and we're trying to decide how to handle exceptions in our internal services. Should we throw exceptions? Return exceptions serialized as XML? Just return an error code?
Keep in mind that the user will never see these exceptions, it's only for other parts of the application.
|
[
"WCF uses SoapFaults as its native way of transmitting exceptions from either the service to the client, or the client to the service.\nYou can declare a custom SOAP fault using the FaultContract attribute in your contract interface:\nFor example:\n[ServiceContract(Namespace=\"foobar\")]\ninterface IContract\n{\n [OperationContract]\n [FaultContract(typeof(CustomFault))]\n void DoSomething();\n}\n\n\n[DataContract(Namespace=\"Foobar\")]\nclass CustomFault\n{\n [DataMember]\n public string error;\n\n public CustomFault(string err)\n {\n error = err;\n }\n}\n\nclass myService : IContract\n{\n public void DoSomething()\n {\n throw new FaultException<CustomFault>( new CustomFault(\"Custom Exception!\"));\n }\n}\n\n",
"Well, why not just throw the standard SOAPExceptions? The problem with error codes and serialized XML is that they both require additional logic to recognize that an error did in fact happen. Such an approach is only useful if you have specialized logging or logic that needs to happen on the other side of the web service. Such an example would be returning a flag that says \"it's ok to continue\" with an error exception report. \nRegardless of how you throw it, it won't make the job any easier, as the calling side still needs to recognize there was an exception and deal with it. \n",
"I'm a bit confused, I'm not being flippant -- you say you want to return exceptions serialised as XML on the one hand and that the user will never see the exceptions on the other hand. Who will be seeing these exceptions?\nNormally I'd say to use WCF fault contracts. \n",
"Phil, different parts of the application call each other using WCF. By \"return exceptions serialized as XML,\" I meant that the return value of the function wold be an exception object. Success would be indicated by null. \nI don't think that's the right option.\nWCF fault contracts sound good, but I don't know anything about them. Checking google right now.\n",
"I would avoid sending exceptions directly back to the client unless you are okay with that much detail being sent back.\nI would recommend using WCF faults to transmit your error message and code (something that can be used to make a decision on the receiver to retry, error out, etc) depending if it is the sender or receiver at fault.\nThis can be done using FaultCode.CreateReceiverFaultCode and FaultCode.CreateSenderFaultCode.\nI'm in the process of going through this right now, but ran into a nasty snag it seems in the WCF fault generated SOAP 1.1 response. If you are interested, you can check out my question about it here:\n.NET WCF faults generating incorrect SOAP 1.1 faultcode values\n"
] |
[
3,
2,
1,
0,
0
] |
[] |
[] |
[
".net",
"exception",
"exception_handling",
"wcf",
"web_services"
] |
stackoverflow_0000040122_.net_exception_exception_handling_wcf_web_services.txt
|
Q:
C pointers in C#
Is this function declaration in C#:
void foo(string mystring)
the same as this one in C:
void foo(char *)
i.e. In C#, does the called function receive a pointer behind the scenes?
A:
In this specific instance, it is more like:
void foo(const char *);
.Net strings are immutable and passed by reference. However, in general C# receives a pointer or reference to an object behind the scenes.
A:
There are pointers behind the scenes in C#, though they are more like C++'s smart pointers, so the raw pointers are encapsulated. A char* isn't really the same as System.String since a pointer to a char usually means the start of a character array, and a C# string is an object with a length field and a character array. The pointer points to the outer structure which points into something like a wchar_t array, so there's some indirection with a C# string and wider characters for Unicode support.
A:
No. In C# (and all other .NET languages) the String is a first-class data type. It is not simply an array of characters. You can convert back and forth between them, but they do not behave the same. There are a number of string manipulation methods (like "Substring()" and "StartsWith()") available on the String class that don't apply to arrays in general, and an array of characters is simply an instance of an array.
A:
Essentially, yes. In C#, string (actually System.String) is a reference type, so when foo() is called, it receives a pointer to the string in the heap.
A:
For value types (int, double, etc.), the function receives a copy of the value. For other objects, it's a reference pointing to the original object.
Strings are special because they are immutable. Technically it means it will pass the reference, but in practice it will behave pretty much like a value type.
You can force value types to pass a reference by using the ref keyword:
public void Foo(ref int value) { value = 12; }
public void Bar()
{
    int val = 3;
    Foo(ref val);
    // val == 12
}
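To make the string case concrete, here is a small sketch of my own (not from the answer above): reassigning a plain string parameter does not affect the caller, while ref rebinds the caller's variable.
using System;

class Demo
{
    // The reference is copied, so reassigning the parameter only changes the copy.
    static void Rename(string name) { name = "changed"; }

    // With ref, the caller's variable itself is rebound.
    static void RenameRef(ref string name) { name = "changed"; }

    static void Main()
    {
        string s = "original";
        Rename(s);
        Console.WriteLine(s); // prints "original"
        RenameRef(ref s);
        Console.WriteLine(s); // prints "changed"
    }
}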
A:
No, in C# a string is Unicode.
In C# it is not called a pointer, but a reference.
A:
If you mean - will the method be allowed to access the contents of the character space, the answer is yes.
A:
Yes, because a string is of dynamic size, so there must be heap memory behind the scenes.
However, they are NOT the same.
In C, the pointer points to a string that may also be used elsewhere, so changing it will affect those other places.
A:
Anything that is not a "value type", which essentially covers enums, booleans, and built-in numeric types, will be passed "by reference", which is arguably the same as the C/C++ mechanism of passing by reference or pointer. Syntactically and semantically it is essentially identical to C/C++ passing by reference.
Note, however, that in C# strings are immutable, so even though it is passed by reference you can't edit the string without creating a new one.
Also note that you can't pass an argument as "const" in C#, regardless whether it is a value type or a reference type.
A:
While those are indeed equivalent in a semantic sense (i.e. the code is doing something with a string), C#, like Java, keeps pointers completely out of its everyday use, relegating them to areas such as transitions to native OS functions - even then, there are framework classes which wrap those up nicely, such as SafeFileHandle.
Long story short, don't go out of your way thinking of pointers in C#.
A:
As far as I know, all classes in C# (not sure about the others) are reference types.
|
C pointers in C#
|
Is this function declaration in C#:
void foo(string mystring)
the same as this one in C:
void foo(char *)
i.e. In C#, does the called function receive a pointer behind the scenes?
|
[
"In this specific instance, it is more like:\nvoid foo(const char *);\n\n.Net strings are immutable and passed by reference. However, in general C# receives a pointer or reference to an object behind the scenes.\n",
"There are pointers behind the scenes in C#, though they are more like C++'s smart pointers, so the raw pointers are encapsulated. A char* isn't really the same as System.String since a pointer to a char usually means the start of a character array, and a C# string is an object with a length field and a character array. The pointer points to the outer structure which points into something like a wchar_t array, so there's some indirection with a C# string and wider characters for Unicode support.\n",
"No. In C# (and all other .NET languages) the String is a first-class data type. It is not simply an array of characters. You can convert back and forth between them, but they do not behave the same. There are a number of string manipulation methods (like \"Substring()\" and \"StartsWith\") that are available to the String class, which don't apply to arrays in general, which an array of characters is simply an instance of.\n",
"Essentially, yes. In C#, string (actually System.String) is a reference type, so when foo() is called, it receives a pointer to the string in the heap.\n",
"For value types (int, double, etc.), the function receives a copy of the value. For other objects, it's a reference pointing to the original object.\nStrings are special because they are immutable. Technically it means it will pass the reference, but in practice it will behave pretty much like a value type.\nYou can force value types to pass a reference by using the ref keyword:\npublic void Foo(ref int value) { value = 12 }\npublic void Bar()\n{\n int val = 3;\n Foo(ref val);\n // val == 12\n}\n\n",
"no in c# string is unicode.\nin c# it is not called a pointer, but a reference.\n",
"If you mean - will the method be allowed to access the contents of the character space, the answer is yes. \n",
"Yes, because a string is of dynamic size, so there must be heap memory behind the scenes\nHowever they are NOT the same.\nin c the pointer points to a string that may also be used elsewhere, so changing it will effect those other places.\n",
"Anything that is not a \"value type\", which essentially covers enums, booleans, and built-in numeric types, will be passed \"by reference\", which is arguably the same as the C/C++ mechanism of passing by reference or pointer. Syntactically and semantically it is essentially identical to C/C++ passing by reference.\nNote, however, that in C# strings are immutable, so even though it is passed by reference you can't edit the string without creating a new one.\nAlso note that you can't pass an argument as \"const\" in C#, regardless whether it is a value type or a reference type.\n",
"While those are indeed equivalent in a semantic sense (i.e. the code is doing something with a string), C#, like Java, keeps pointers completely out of its everyday use, relegating them to areas such as transitions to native OS functions - even then, there are framework classes which wrap those up nicely, such as SafeFileHandle.\nLong story short, don't go out of your way thinking of pointers in C#.\n",
"As far as I know, all classes in C# (not sure about the others) are reference types.\n"
] |
[
12,
5,
5,
4,
3,
1,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"c",
"c#",
"language_implementation",
"pointers",
"string"
] |
stackoverflow_0000064689_c_c#_language_implementation_pointers_string.txt
|
Q:
Make Test.QuickCheck.Batch use a default type for testing list functions
I am testing a function called extractions that operates over any list.
extractions :: [a] -> [(a,[a])]
extractions [] = []
extractions l = extract l []
    where extract [] _ = []
          extract (x:xs) prev = (x, prev++xs) : extract xs (x : prev)
I want to test it, for example, with
import Test.QuickCheck.Batch
prop_len l = length l == length (extractions l)
main = runTests "extractions" defOpt [run prop_len]
But this won't compile; I have to supply a type either for run or prop_len, because QuickCheck can't generate [a], it has to generate something concrete. So I chose Int:
main = runTests "extractions" defOpt [r prop_len]
    where r = run :: ([Int] -> Bool) -> TestOptions -> IO TestResult
Is there any way to get QuickCheck to choose a for me instead of having it specified in the type of run?
A:
The quickcheck manual says "no":
Properties must have monomorphic types. `Polymorphic' properties, such as the one above, must be restricted to a particular type to be used for testing. It is convenient to do so by stating the types of one or more arguments in a
where types = (x1 :: t1, x2 :: t2, ...)
clause...
|
Make Test.QuickCheck.Batch use a default type for testing list functions
|
I am testing a function called extractions that operates over any list.
extractions :: [a] -> [(a,[a])]
extractions [] = []
extractions l = extract l []
where extract [] _ = []
extract (x:xs) prev = (x, prev++xs) : extract xs (x : prev)
I want to test it, for example, with
import Test.QuickCheck.Batch
prop_len l = length l == length (extractions l)
main = runTests "extractions" defOpt [run prop_len]
But this won't compile; I have to supply a type either for run or prop_len, because QuickCheck can't generate [a], it has to generate something concrete. So I chose Int:
main = runTests "extractions" defOpt [r prop_len]
where r = run :: ([Int] -> Bool) -> TestOptions -> IO TestResult
Is there any way to get QuickCheck to choose a for me instead of having it specified in the type of run?
|
[
"The quickcheck manual says \"no\":\n\nProperties must have monomorphic types. `Polymorphic' properties, such as the one above, must be restricted to a particular type to be used for testing. It is convenient to do so by stating the types of one or more arguments in a\nwhere types = (x1 :: t1, x2 :: t2, ...)\nclause...\n\n"
] |
[
7
] |
[] |
[] |
[
"haskell",
"quickcheck",
"testing",
"type_inference"
] |
stackoverflow_0000064197_haskell_quickcheck_testing_type_inference.txt
|
Q:
How to define and use static variables in F# class
Is there a way to have a mutable static variable in F# class that is identical to a static variable in C# class ?
A:
You use static let bindings (note: while necessary some times, it's none too functional):
type StaticMemberTest () =

    static let mutable test : string = ""

    member this.Test
        with get() =
            test <- "asdf"
            test
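For comparison, the C# static field this is meant to mirror would look roughly like the following (the class and member names simply echo the F# sketch above):
public class StaticMemberTest
{
    // Mutable static state shared across all instances, like the static let mutable above.
    private static string test = "";

    public string Test
    {
        get
        {
            test = "asdf";
            return test;
        }
    }
}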
|
How to define and use static variables in F# class
|
Is there a way to have a mutable static variable in F# class that is identical to a static variable in C# class ?
|
[
"You use static let bindings (note: while necessary some times, it's none too functional):\ntype StaticMemberTest () =\n\n static let mutable test : string = \"\"\n\n member this.Test \n\n with get() = \n test <- \"asdf\"\n test\n\n"
] |
[
18
] |
[] |
[] |
[
"f#",
"functional_programming"
] |
stackoverflow_0000062654_f#_functional_programming.txt
|
Q:
Display ODBC connections dialog and get chosen ODBC back
Any information on how to display the ODBC connections dialog and get the chosen ODBC back?
A:
// a_RootKey is Microsoft.Win32.RegistryKey
// DSN is a class not provided in this code sample - you can see what properties are needed from the usage below.
List<DSN> DsnList = new List<DSN>();

Microsoft.Win32.RegistryKey SearchKey = a_RootKey.OpenSubKey("SOFTWARE\\ODBC\\ODBC.INI\\ODBC Data Sources");

if (SearchKey != null)
{
    foreach (string DsnName in SearchKey.GetValueNames())
    {
        if ((string)SearchKey.GetValue(DsnName) == "SQL Server")
        {
            Microsoft.Win32.RegistryKey anotherkey = a_RootKey.OpenSubKey("SOFTWARE\\ODBC\\ODBC.INI\\" + DsnName);
            DSN dsn = new DSN();
            dsn.Name = DsnName;
            dsn.Server = (string)anotherkey.GetValue("Server");
            dsn.Database = (string)anotherkey.GetValue("Database");
            dsn.Driver = (string)anotherkey.GetValue("Driver");
            DsnList.Add(dsn);
        }
    }
}
return DsnList;
A:
OK, since no one seems to have an answer, how about iterating through the ODBC connections by DBSource, i.e. SQL Server or MySQL?
|
Display ODBC connections dialog and get chosen ODBC back
|
Any information on how to display the ODBC connections dialog and get the chosen ODBC back?
|
[
"// a_RootKey is Microsoft.Win32.RegistryKey \n// DSN is a class not provided in this code sample - you can see what properties are needed from the usage below.\n\nList<DSN> DsnList = new List<DSN>();\n\nMicrosoft.Win32.RegistryKey SearchKey = a_RootKey.OpenSubKey(\"SOFTWARE\\\\ODBC\\\\ODBC.INI\\\\ODBC Data Sources\");\n\nif (SearchKey != null)\n{\n\n foreach (string DsnName in SearchKey.GetValueNames() )\n { \n if ( (string)SearchKey.GetValue(DsnName) == \"SQL Server\" )\n {\n Microsoft.Win32.RegistryKey anotherkey = a_RootKey.OpenSubKey(\"SOFTWARE\\\\ODBC\\\\ODBC.INI\\\\\" + DSNName);\n DSN dsn = new DSN();\n dsn.Name = DSNName;\n dsn.Server = (string)anotherkey.GetValue(\"Server\");\n dsn.Database = (string)anotherkey.GetValue(\"Database\");\n dsn.Driver = (string)anotherkey.GetValue(\"Driver\");\n\n DsnList.Add(dsn);\n }\n\n }\n}\nreturn DsnList;\n\n",
"OK since no one seems to have an answer, how about iterating throught the ODBC connections by DBSource, I.e. SQLServer or MySQL\n"
] |
[
2,
0
] |
[] |
[] |
[
"c#",
"dialog",
"odbc"
] |
stackoverflow_0000064581_c#_dialog_odbc.txt
|
Q:
Is The Perl Journal available online?
Does anyone know where online copies of the old The Perl Journal articles can be found?
I know they are now owned by Dr. Dobb's, but the main page for it says they are part of whatever section the subject matter is relevant to, rather than being indexed together. That said, I have never been able to find any of them online on that site.
I know Mark Jason Dominus has a few of his articles on his site, any one know of any other good places? Or even what search terms to use at Dr. Dobb's?
A:
Volumes 1-5 (1996 -> 2000) can be found at http://www.foo.be/docs/tpj/
Hmm, looks like that was the entire run? I thought it was longer than that for some reason.
A:
Many of the articles have the string "TPJ" at the end of the article, so I get quite a few results from searching just "TPJ". I'll put together an index of the articles they published on the website. Are you looking for a particular article or author?
I've linked to my TPJ Online articles on my personal web page, but they are all into the DDJ site with unhelpful URLs. Stonehenge gives out Randal's TPJ articles for free, and Simon Cozens has most of his articles online too.
Besides the online articles, there are compilation books from O'Reilly Media: Web, Graphics & Perl TK: Best of The Perl Journal, Games, Diversions & Perl Culture: Best of The Perl Journal , and Computer Science & Perl Programming: Best of TPJ .
If you are just jonesing for a Perl magazine, there is also The Perl Review, which I publish, as well as the german-language $foo Magazin.
The issues at http://www.foo.be/docs/tpj/ are the entire run of TPJ before it was purchased by Earthweb. At issue 20, things got ugly as Earthweb was imploding in the dot.bomb days and TPJ was eventually bought by CMP and added a supplement to SysAdmin in the summer of 2001. That lasted for about a year before they let it die quietly, then it came back a couple years later as an online magazine. It finally stopped in 2006 when it was rolled into DDJ completely and ceased to exist as a title.
Hope that helps,
A:
Randal Schwartz's Perl Journal articles are linked from http://www.stonehenge.com/merlyn/PerlJournal/
|
Is The Perl Journal available online?
|
Does anyone know where online copies of the old The Perl Journal articles can be found?
I know they are now owned by Dr. Dobb's, just the main page for it says they are part of whatever section the subject matter is relevant too, rather than being indexed together. That said, I have never been able to find any of them online on that site.
I know Mark Jason Dominus has a few of his articles on his site, any one know of any other good places? Or even what search terms to use at Dr. Dobb's?
|
[
"Volumes 1-5 (1996 -> 2000) can be found at http://www.foo.be/docs/tpj/\nHmm, looks like that was the entire run? I though it was longer than that for some reason.\n",
"Many of the articles have the string \"TPJ\" at the end of the article, so I get quite a few results from searching just \"TPJ\". I'll put together an index of the articles they published on the website. Are you looking for a particular article or author?\nI've linked to my TPJ Online articles on my personal web page, but they are all into the DDJ site with unhelpful URLs. Stonehenge gives out Randal's TPJ articles for free, and Simon Cozens has most of his articles online too.\nBesides the online articles, there are compilation books from O'Reilly Media: Web, Graphics & Perl TK: Best of The Perl Journal, Games, Diversions & Perl Culture: Best of The Perl Journal , and Computer Science & Perl Programming: Best of TPJ .\nIf you are just jonesing for a Perl magazine, there is also The Perl Review, which I publish, as well as the german-language $foo Magazin.\nThe issues at http://www.foo.be/docs/tpj/ are the entire run of TPJ before it was purchased by Earthweb. At issue 20, things got ugly as Earthweb was imploding in the dot.bomb days and TPJ was eventually bought by CMP and added a supplement to SysAdmin in the summer of 2001. That lasted for about a year before they let it die quietly, then it came back a couple years later as an online magazine. It finally stopped in 2006 when it was rolled into DDJ completely and ceased to exist as a title.\nHope that helps,\n",
"Randal Schwartz's Perl Journal articles are linked from http://www.stonehenge.com/merlyn/PerlJournal/\n"
] |
[
17,
17,
8
] |
[] |
[] |
[
"perl"
] |
stackoverflow_0000005024_perl.txt
|
Q:
Adopting standard libraries
My team has a command parsing library for console apps. Each team around us has their own as well. There isn't anything in the BCL so I suppose this is natural.
I've looked at the the module in Mono, which seems solid, and the one on CodePlex looks fine as well. There are probably others out there that will work (and I would love to hear your suggestions).
The real question is: how do I get my team, and others around us, to commit to just using one?
A:
Well, to be honest, you can't make everyone settle on one solution. You can suggest a solution and point out its benefits, but eventually the advantages would have to be greater than the inertia that they have built up with their present library.
To make them settle on one library you would need to go up the management chain until you get to the person that manages all the groups involved. Convince that person why everyone should use one library, then let it filter back down.
Now that I have said that, why does it matter? Does your team routinely have to work on code from the other teams? Are the other teams using libraries that cause problems for your code? Is this standardization purely for the sake of standardization or is there some specific problem that not standardizing causes?
A:
Once you find a solution, you start forcing it in code reviews. If it's not implemented in new code, tell them, sorry, but you have to go back and do it again. If you already have standards and reviews in place, this is a lot easier to implement.
A:
EBGreen, good point, I should have mentioned why I am looking to do this. Our teams frequently read and edit code from the surrounding teams. And I mean feature teams, not just dev/test/pm divisions.
This is just one of those little things that slow everybody down. Working on Team C's code? Got to track down their lib, which mysteriously isn't in the nightly builds (another problem, but independent of this). Reviewing another dev's work? Need to figure out how their parser works. Starting a new project? Need to decide which library to import.
I think that your response does indicate the solution though: Put the library somewhere very convenient, so it can be picked up by new projects (I doubt that many existing ones will be revised, nor should they be if they're working fine), and make the advantages of using it clear.
Thanks!
A:
@fatcat1111, in that case by all means a standardized library would be advantageous. As for how to convince the other teams, there are two approaches that I can think of. First point out that standardization across a group always reduces coding effort (discounting for the initial ramp up for people that are new to the standardized library). Second, try to convince them on features. Hopefully you would be choosing the most feature complete library so it would be superior to what everyone else is using.
A:
PowerShell provides great command line parsing options for free. When you create a cmdlet, you define properties and use attributes to determine command line options. The PowerShell runtime then handles the parsing of input for you.
Also, since PowerShell works with .NET objects, the result of your commands can be rich objects with properties and methods.
Here is a nice blog post demonstrating writing and debugging cmdlets.
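To make that concrete, here is a rough sketch of a binary cmdlet (the Get-Widget name and its parameters are invented): the attributes describe the options, and the PowerShell runtime does all of the parsing for you.
using System.Management.Automation;

// Invoked from PowerShell as: Get-Widget -Name foo -Count 3
[Cmdlet(VerbsCommon.Get, "Widget")]
public class GetWidgetCommand : Cmdlet
{
    // PowerShell binds -Name from the command line; no hand-written parsing code.
    [Parameter(Position = 0, Mandatory = true)]
    public string Name { get; set; }

    [Parameter]
    public int Count { get; set; }

    protected override void ProcessRecord()
    {
        int count = Count > 0 ? Count : 1;
        for (int i = 0; i < count; i++)
        {
            // Rich objects (not just text) flow down the pipeline.
            WriteObject("Widget: " + Name + " #" + i);
        }
    }
}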
A:
I recommend NDesk.Options. It makes use of Lambda functions / .NET 3.5 and it's very
nice:
http://www.ndesk.org/Options
What you basically do is:
var p = new OptionSet () {
    { "file=",     v => data = v },
    { "v|verbose", v => { ++verbose; } },
    { "h|?|help",  v => help = v != null },
};
List<string> extra = p.Parse (args);
And you're done.
It supports key/value pairs with custom separators, lists, single-value options, and toggle options.
You WILL NOT regret using it.
|
Adopting standard libraries
|
My team has a command parsing library for console apps. Each team around us has their own as well. There isn't anything in the BCL so I suppose this is natural.
I've looked at the the module in Mono, which seems solid, and the one on CodePlex looks fine as well. There are probably others out there that will work (and I would love to hear your suggestions).
The real question is: how do I get my team, and others around us, to commit to just using one?
|
[
"Well to be honest, you can't make everyone settle on one solution. You can suggest a solution and point out it's benefits, but eventually the advantages would have to be greater than the inertia that they have built up with their present library.\nTo make them settle on one library you would need to go up the management change until you get to the person that manages all the groups involved. Convince that person why everyone should use one library then let it filter back down.\nNow that I have said that, why does it matter? Does your team routinely have to work on code from the other teams? Are the other teams using libraries that cause problems for your code? Is this standardization purely for the sake of standardization or is there some specific problem that not standardizing causes?\n",
"Once you find a solution, you start forcing it in code reviews. If it's not implemented in new code, tell them, sorry, but you have to go back and do it again. If you already have standards and reviews in place, this is a lot easier to implement.\n",
"EBGreen, good point, I should have mentioned why I am looking to do this. Our teams frequently read and edit code from the surrounding teams. And I mean feature teams, not just dev/test/pm divisions. \nThis is just one of those little things that slow everybody down. Working on Team C's code? Got to track down their lib, which mysteriously isn't in the nightly builds (another problem, but independent of this). Reviewing another dev's work? Need to figure out how their parser works. Starting a new project? Need to decide which library to import. \nI think that your response does indicate the solution though: Put the library somewhere very convenient, so it can be picked up by new projects (I doubt that many existing ones will be revised, nor should they be if they're working fine), and make the advantages of using it clear. \nThanks! \n",
"@fatcat1111, in that case by all means a standardized library would be advantageous. As for how to convince the other teams, there are two approaches that I can think of. First point out that standardization across a group always reduces coding effort (discounting for the initial ramp up for people that are new to the standardized library). Second, try to convince them on features. Hopefully you would be choosing the most feature complete library so it would be superior to what everyone else is using.\n",
"PowerShell provides great command line parsing options for free. When you create a cmdlet, you define properties and use attributes to determine command line options. The PowerShell runtime then handles the parsing of input for you.\nAlso, since PowerShell works with .NET objects, the result of your commands can be rich objects with properties and methods.\nHere is a nice blog post demonstrating writing and debugging cmdlets.\n",
"I recommend NDesk.Options. It makes use of Lambda functions / .NET 3.5 and it's very\nnice:\nhttp://www.ndesk.org/Options\nWhat you basically do is:\nvar p = new OptionSet () {\n { \"file=\", v => data = v },<br>\n { \"v|verbose\", v => { ++verbose } },<br>\n { \"h|?|help\", v => help = v != null },<br>\n};\nList<string> extra = p.Parse (args);\n\nAnd you're done.\nIt support key/value pairs with custom separators, lists, single value options and toggle options\nYou WILL NOT regret using it.\n"
] |
[
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net"
] |
stackoverflow_0000054978_.net.txt
|
Q:
User Interface Controls for Win32
I see many user interface control libraries for .NET, but where can I get similar stuff for win32 using simply C/C++?
Things like prettier buttons, dials, listviews, graphs, etc.
Seems every Win32 programmer's rite of passage is to end up writing his own collection. :/
No MFC controls please. I only do pure C/C++. And with that said, I also don't feel like adding a multi-megabyte framework to my application just so that I can have a prettier button.
I apologize for leaving out one tiny detail, and that is that my development is for Windows Mobile.
So manifest files are out.
I just notice how many developer companies have gone crazy with making pretty looking .NET components and wondered where the equivalent C/C++ Win32 components have gone?
I read about how many people ended up writing their own gradient button class, etc. So you would think that there would be some commercial classes for this stuff. It's just weird.
I'll take a closer look at QT and investigate its GUI support for such things. This is the challenge when you're the one man in your own uISV. No other developers to help you "get things done".
A:
I've used Trolltech's Qt framework in the past and had great success with it:
In addition, it's also cross-platform, so in theory you can target Win, Mac, & Linux (provided you don't do anything platform-specific in the rest of your code, of course ;) )
Edit: I notice that you're targeting Windows Mobile; that definitely adds to Qt's strength, as its cross-platform support extends to WinCE and Embedded Linux as well.
A:
If you don't mind using the MFC libraries you should try the Visual C++ 2008 Feature Pack
A:
Stingray
CodeJock - Toolkit Pro for MFC/ C++
A:
The Code Project has lots of UI controls for C/C++.
Most of them are focused on MFC or WTL, but there are some that are pure Win32.
As an aside, if you're not using a framework, you really should consider WTL over pure Win32. It's low overhead and about a million times more productive.
A:
For prettier buttons, etc., if you aren't already doing it, embed an application manifest so that your program is linked to version 6 of the common controls library. Doing so will get you the Windows XP- or Vista-styled versions of the standard Windows controls.
If you want types of controls beyond what Windows offers natively, you'll likely have to either write it yourself or be more specific about what kind of control you are looking for.
A:
The MFC feature pack is derived from BCGSoft components.
A:
Using the Win32 APIs you can do almost anything you want, and really fast too. It takes some time to figure it out, but it works. Go to MSDN, look up MessageBox(), check out DialogBox(), and go from there.
I personally do not care for MFC, by the way. If you want to use an MFC-like approach, I'd recommend Borland's C++ Builder. Pretty old but still very useful, I think.
|
User Interface Controls for Win32
|
I see many user interface control libraries for .NET, but where can I get similar stuff for win32 using simply C/C++?
Things like prettier buttons, dials, listviews, graphs, etc.
Seems every Win32 programmers' right of passage is to end up writing his own collection. :/
No MFC controls please. I only do pure C/C++. And with that said, I also don't feel like adding a multi-megabyte framework to my application just so that I can have a prettier button.
I apologize for leaving out one tiny detail, and that is that my development is for Windows Mobile.
So manifest files are out.
I just notice how many developer companies have gone crazy with making pretty looking .NET components and wondered where the equivalent C/C++ Win32 components have gone?
I read about how many people ended up writing their own gradient button class, etc. So you would think that there would be some commercial classes for this stuff. It's just weird.
I'll take a closer look at QT and investigate its GUI support for such things. This is the challenge when you're the one man in your own uISV. No other developers to help you "get things done".
|
[
"I've used Trolltech's Qt framework in the past and had great success with it:\nIn addition, it's also cross-platform, so in theory you can target Win, Mac, & Linux (provided you don't do anything platform-specific in the rest of your code, of course ;) )\nEdit: I notice that you're targeting Windows Mobile; that definitely adds to Qt's strength, as its cross-platform support extends to WinCE and Embedded Linux as well.\n",
"I you don't mind using the MFC libraries you should try the Visual C++ 2008 Feature Pack\n",
"Stingray\nCodeJock - Toolkit Pro for MFC/ C++\n",
"The Code Project has lots of UI controls for C/C++\nMost of them are focussed on MFC or WTL but there are some that are pure Win32.\nAs an aside if you're not using a framework, you really should consider WTL over pure Win32. It's low overhead and about a million times more productive.\n",
"For prettier buttons, etc., if you aren't already doing it, embed an application manifest so that your program is linked to version 6 of the common controls library. Doing so will get you the Windows XP- or Vista-styled versions of the standard Windows controls.\nIf you want types of controls beyond what Windows offers natively, you'll likely have to either write it yourself or be more specific about what kind of control you are looking for.\n",
"The MFC feature pack is derived from BCGSoft components.\n",
"Using winAPI's you can do almost anything you want and really fast too. It takes some time to figure it out but it works. Go to MSDN, lookup MessageBox(), check out DialogBox() and go from there.\nI personally do not care for MFC by the way. If you want to use an MFC like approach I'd recommend Borland's C++ Builder. Pretty old but still very usefull I think.\n"
] |
[
4,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"c++",
"user_interface",
"winapi"
] |
stackoverflow_0000063147_c++_user_interface_winapi.txt
|
Q:
Using jQuery, how can I dynamically set the size attribute of a select box?
Using jQuery, how can I dynamically set the size attribute of a select box?
I would like to include it in this code:
$("#mySelect").bind("click",
function() {
$("#myOtherSelect").children().remove();
var options = '' ;
for (var i = 0; i < myArray[this.value].length; i++) {
options += '<option value="' + myArray[this.value][i] + '">' + myArray[this.value][i] + '</option>';
}
$("#myOtherSelect").html(options).attr [... use myArray[this.value].length here ...];
});
});
A:
Oops, it's
$('#mySelect').attr('size', value)
A:
$("#mySelect").bind("click", function(){
    $("#myOtherSelect").children().remove();
    var myArray = [ "value1", "value2", "value3" ];
    for (var i = 0; i < myArray.length; i++) {
        $("#myOtherSelect").append( '<option value="' + myArray[i] + '">' + myArray[i] + '</option>' );
    }
    $("#myOtherSelect").attr( "size", myArray.length );
});
|
Using jQuery, how can I dynamically set the size attribute of a select box?
|
Using jQuery, how can I dynamically set the size attribute of a select box?
I would like to include it in this code:
$("#mySelect").bind("click",
function() {
$("#myOtherSelect").children().remove();
var options = '' ;
for (var i = 0; i < myArray[this.value].length; i++) {
options += '<option value="' + myArray[this.value][i] + '">' + myArray[this.value][i] + '</option>';
}
$("#myOtherSelect").html(options).attr [... use myArray[this.value].length here ...];
});
});
|
[
"Oops, it's\n$('#mySelect').attr('size', value)\n\n",
"$(\"#mySelect\").bind(\"click\", function(){\n $(\"#myOtherSelect\").children().remove();\n var myArray = [ \"value1\", \"value2\", \"value3\" ];\n for (var i = 0; i < myArray.length; i++) {\n $(\"#myOtherSelect\").append( '<option value=\"' + myArray[i] + '\">' + myArray[i] + '</option>' );\n }\n $(\"#myOtherSelect\").attr( \"size\", myArray.length );\n});\n\n"
] |
[
25,
1
] |
[] |
[] |
[
"jquery"
] |
stackoverflow_0000065206_jquery.txt
|
Q:
Resetting detection of source file changes
Sometimes I have to work on code that moves the computer clock forward. In this case some .cpp or .h files get their latest modification date set to the future time.
Later on, when my clock is fixed, and I compile my sources, system rebuilds most of the project because some of the latest modification dates are in the future. Each subsequent recompile has the same problem.
Solution that I know are:
a) Find the file that has the future time and re-save it. This method is not ideal because the project is very big and it takes time even for windows advanced search to find the files that are changed.
b) Delete the whole project and re-check it out from svn.
Does anyone know how I can get around this problem?
Is there perhaps a setting in visual studio that will allow me to tell the compiler to use the archive bit instead of the last modification date to detect source file changes?
Or perhaps there is a recursive modification date reset tool that can be used in this situation?
A:
I would recommend using a virtual machine where you can mess with the clock to your heart's content and it won't affect your development machine. Two free ones are Virtual PC from Microsoft and VirtualBox from Sun.
A:
I don't know if this works in your situation but how about you don't move your clock forward, but wrap your gettime method (or whatever you're using) and make it return the future time that you need?
A:
Install Unix Utils
touch temp
find . -newer temp -exec touch {} ;
rm temp
Make sure to use the full path when calling find or it will probably use Windows' find.exe instead. This is untested in the Windows shell -- you might need to modify the syntax a bit.
A:
If this was my problem, I'd look for ways to avoid mucking with the system time. Isolating the code under unit tests, or a virtual machine, or something.
However, because I love PowerShell:
Get-ChildItem -r . |
? { $_.LastWriteTime -gt ([DateTime]::Now) } |
Set-ItemProperty -Name "LastWriteTime" -Value ([DateTime]::Now)
A:
I don't use windows - but surely there is something like awk or grep that you can use to find the "future" timestamped files, and then "touch" them so they have the right time - even a perl script.
A:
1) Use a build system that doesn't use timestamps to detect modifications, like scons.
2) Use ccache to speed up your build system that does use timestamps (and rebuilds everything).
In either case, MD5 sums are used to verify that a file has actually been modified, rather than timestamps.
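If you want to see the content-hashing idea in isolation, here is a toy sketch (not how scons or ccache are actually implemented): compare a stored hash of each source file instead of its timestamp.
using System;
using System.IO;
using System.Security.Cryptography;

class ContentHash
{
    // Hash the file contents; if the hash matches the one recorded at the last
    // build, the file hasn't really changed, whatever its timestamp says.
    static string HashFile(string path)
    {
        using (MD5 md5 = MD5.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            byte[] hash = md5.ComputeHash(stream);
            return BitConverter.ToString(hash);
        }
    }

    static void Main(string[] args)
    {
        Console.WriteLine(HashFile(args[0]));
    }
}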
|
Resetting detection of source file changes
|
Sometimes I have to work on code that moves the computer clock forward. In this case some .cpp or .h files get their latest modification date set to the future time.
Later on, when my clock is fixed, and I compile my sources, system rebuilds most of the project because some of the latest modification dates are in the future. Each subsequent recompile has the same problem.
Solution that I know are:
a) Find the file that has the future time and re-save it. This method is not ideal because the project is very big and it takes time even for windows advanced search to find the files that are changed.
b) Delete the whole project and re-check it out from svn.
Does anyone know how I can get around this problem?
Is there perhaps a setting in visual studio that will allow me to tell the compiler to use the archive bit instead of the last modification date to detect source file changes?
Or perhaps there is a recursive modification date reset tool that can be used in this situation?
|
[
"I would recommend using a virtual machine where you can mess with the clock to your heart's content and it won't affect your development machine. Two free ones are Virtual PC from Microsoft and VirtualBox from Sun.\n",
"I don't know if this works in your situation but how about you don't move your clock forward, but wrap your gettime method (or whatever you're using) and make it return the future time that you need?\n",
"Install Unix Utils\ntouch temp\nfind . -newer temp -exec touch {} ;\nrm temp\n\nMake sure to use the full path when calling find or it will probably use Windows' find.exe instead. This is untested in the Windows shell -- you might need to modify the syntax a bit.\n",
"If this was my problem, I'd look for ways to avoid mucking with the system time. Isolating the code under unit tests, or a virtual machine, or something.\nHowever, because I love PowerShell:\nGet-ChildItem -r . | \n ? { $_.LastWriteTime -gt ([DateTime]::Now) } | \n Set-ItemProperty -Name \"LastWriteTime\" -Value ([DateTime]::Now)\n\n",
"I don't use windows - but surely there is something like awk or grep that you can use to find the \"future\" timestamped files, and then \"touch\" them so they have the right time - even a perl script.\n",
"1) Use a build system that doesn't use timestamps to detect modifications, like scons\n2) Use ccache to speed up your build system that does use timestamps (and rebuild all).\nIn either case it is using md5sum's to verify that a file has been modified, not timestamps.\n"
] |
[
5,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"c++",
"svn",
"time",
"timezone",
"visual_studio"
] |
stackoverflow_0000060977_c++_svn_time_timezone_visual_studio.txt
|
Q:
In Cocoa do I need to remove an Object from receiving KVO notifications when deallocating it?
When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc?
A:
You need to use -removeObserver:forKeyPath: to remove the observer before -[NSObject dealloc] runs, so yes, doing it in the -dealloc method of your class would work.
Better than that though would be to have a deterministic point where whatever owns the object that's doing the observing could tell it it's done and will (eventually) be deallocated. That way, you can stop observing immediately when the thing doing the observing is no longer needed, regardless of when it's actually deallocated.
This is important to keep in mind because the lifetime of objects in Cocoa isn't as deterministic as some people seem to think it is. The various Mac OS X frameworks themselves will send your objects -retain and -autorelease, extending their lifetime beyond what you might otherwise think it would be.
Furthermore, when you make the transition to Objective-C garbage collection, you'll find that -finalize will run at very different times — and in very different contexts — than -dealloc did. For one thing, finalization takes place on a different thread, so you really can't safely send -removeObserver:forKeyPath: to another object in a -finalize method.
Stick to memory (and other scarce resource) management in -dealloc and -finalize, and use a separate -invalidate method to have an owner tell an object you're done with it at a deterministic point; do things like removing KVO observations there. The intent of your code will be clearer and you will have fewer subtle bugs to take care of.
A:
A bit of extra info that I've gained by painful experience: although NSNotificationCenter uses zeroing weak references when running under garbage collection, KVO does not. Thus, you can get away with not removing an NSNotificationCenter observer when using GC (when using retain/release, you still need to remove your observer), but you must still remove your KVO observers, as Chris describes.
A:
Definitely agree with Chris on the "Stick to memory (and other scarce resource) management in -dealloc and -finalize..." comment. A lot of times I'll see people try to invalidate NSTimer objects in their dealloc functions. The problem is, NSTimer retains its targets. So, if the target of that NSTimer is self, dealloc will never get called, resulting in some potentially nasty memory leaks.
Invalidate in -invalidate and do other memory cleanup in your dealloc and finalize.
|
In Cocoa do I need to remove an Object from receiving KVO notifications when deallocating it?
|
When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc?
|
[
"You need to use -removeObserver:forKeyPath: to remove the observer before -[NSObject dealloc] runs, so yes, doing it in the -dealloc method of your class would work.\nBetter than that though would be to have a deterministic point where whatever owns the object that's doing the observing could tell it it's done and will (eventually) be deallocated. That way, you can stop observing immediately when the thing doing the observing is no longer needed, regardless of when it's actually deallocated.\nThis is important to keep in mind because the lifetime of objects in Cocoa isn't as deterministic as some people seem to think it is. The various Mac OS X frameworks themselves will send your objects -retain and -autorelease, extending their lifetime beyond what you might otherwise think it would be.\nFurthermore, when you make the transition to Objective-C garbage collection, you'll find that -finalize will run at very different times — and in very different contexts — than -dealloc did. For one thing, finalization takes place on a different thread, so you really can't safely send -removeObserver:forKeyPath: to another object in a -finalize method.\nStick to memory (and other scarce resource) management in -dealloc and -finalize, and use a separate -invalidate method to have an owner tell an object you're done with it at a deterministic point; do things like removing KVO observations there. The intent of your code will be clearer and you will have fewer subtle bugs to take care of.\n",
"A bit of extra info that I've gained by painful experience: although NSNotificationCenter uses zeroing weak references when running under garbage collection, KVO does not. Thus, you can get away with not removing an NSNotificationCenter observer when using GC (when using retain/release, you still need to remove your observer), but you must still remove your KVO observers, as Chris describes.\n",
"Definitely agree with Chris on the \"Stick to memory (and other scarce resource) management in -dealloc and -finalize...\" comment. A lot of times I'll see people try to invalidate NSTimer objects in their dealloc functions. The problem is, NSTimer retains it's targets. So, if the target of that NSTimer is self, dealloc will never get called resulting in some potentially nasty memory leaks. \nInvalidate in -invalidate and do other memory cleanup in your dealloc and finalize.\n"
] |
[
39,
5,
2
] |
[] |
[] |
[
"cocoa",
"macos"
] |
stackoverflow_0000013927_cocoa_macos.txt
|
Q:
What is the best way to create a web page thumbnail?
Is there some reasonably cross platform way to create a thumbnail image given a URL? I know there are thumbnail web services that will do this, but I want a piece of software or library that will do this locally. I guess in Linux I could always spawn a browser window using a headless X server, but what about Windows or OS X?
A:
You can use Firefox or XULRunner with some fairly simple XUL to create thumbnails as PNG dataURLs (that you could then write to file if needed). Robert O'Callahan has some excellent information on it here:
http://weblogs.mozillazine.org/roc/archives/2005/05/rendering_web_p.html
A:
I know you said you want the service to be local, but... if you have to be connected to the Internet to take the screenshot, you should equally have access to a web service. It seems like a better move to do this than to open yourself up to cross-platform issues of taking screenshots locally.
A:
There are a number of commercial packages that will do what you want. I'm not sure from reading your question if free is a requirement. But here are some applications I've found that are reasonably priced and which do exactly what you want. I have not used them myself, but they have free trial downloads so you can evaluate before you purchase.
HTML to Image from Guanming Software - Runs on Linux and Windows
HTML2Image from SysImage - Runs on Windows
HTML2Image from Tooto - Runs on Windows
Convert HTML to Image from FrameworkTeam - Windows command line tool
|
What is the best way to create a web page thumbnail?
|
Is there some reasonably cross platform way to create a thumbnail image given a URL? I know there are thumbnail web services that will do this, but I want a piece of software or library that will do this locally. I guess in Linux I could always spawn a browser window using a headless X server, but what about Windows or OS X?
|
[
"You can use Firefox or XULRunner with some fairly simple XUL to create thumbnails as PNG dataURLs (that you could then write to file if needed). Robert O'Callahan has some excellent information on it here:\nhttp://weblogs.mozillazine.org/roc/archives/2005/05/rendering_web_p.html\n",
"I know you said you want the service to be local, but... if you have to be connected to the Internet to take the screenshot, you should equally have access to a web service. It seems like a better move to do this than to open yourself up to cross-platform issues of taking screenshots locally.\n",
"There are a number of commercial packages that will do what you want. I'm not sure from reading your question if free is a requirement. But here are some applications I've found that are reasonably priced and which do exactly what you want. I have not used them myself, but they have free trial downloads so you can evaluate before you purchase.\n\nHTML to Image from Guanming Software - Runs on Linux and Windows\nHTML2Image from SysImage - Runs on Windows\nHTML2Image from Tooto - Runs on Windows\nConvert HTML to Image from FrameworkTeam - Windows command line tool\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"image",
"thumbnails"
] |
stackoverflow_0000065078_image_thumbnails.txt
|
Q:
Should HTML co-exist with code?
In a web application, is it acceptable to use HTML in your code (non-scripted languages, Java, .NET)?
There are two major sub questions:
Should you use code to print HTML, or otherwise directly create HTML that is displayed?
Should you mix code within your HTML pages?
A:
Generally, it's better to keep presentation (HTML) separate from logic ("back-end" code). Your code is decoupled and easier to maintain this way.
A:
As long as your HTML-writing code is separate from your application logic, and the HTML is guaranteed to be well-formed somehow, you should be okay.
The only code that should be mixed in markup-based pages (i.e, those that contain literal HTML) is the code used for formatting the HTML (e.g., a loop for writing out a list).
There are trade-offs whether you put the code in with the HTML or you use pure code to write the HTML out using quoted string literals.
A:
No, if you want to build good and maintainable software, and to achieve loose coupling.
A:
If I understand the question right, you're asking whether it's a good practice to mix markup with back-end code. No. While this is commonly done, it's still a bad idea.
You should read up on the MVC paradigm, as well as on existing questions on the matter, such as What is the best way to migrate an existing messy webapp to elegant MVC? and Best practices for refactoring classic ASP?
A:
The point is to keep the display logic separate from the rest of the code. In any complex site you'll have code mixed in with your HTML, but the code should be for display purposes only. It shouldn't be doing any complex calculations.
For example, templates will contain loops and conditionals. Plus you'll probably have a library of HTML-specific routines, like printing out an <option> list based on a list object.
Imagine you were writing an application that has two output modes: HTML and something else. How would you write it, to avoid duplicating code? That will probably point you in the right direction.
A:
The HTML that makes up the view has to get sent to the browser in some way. In .net, each server control emits its own HTML markup as part of the page lifecycle. So yes it is OK to use HTML in server side code.
Perhaps you should try following the ASP.net pattern. Create a bunch of controls that represent UI elements and make them responsible for emitting their own HTML based on their state.
A:
It's fugly, and not type safe. But people do it without consequence. I'd prefer using a DOM or, at a minimum, classes designed to write HTML using type-safe semantics. Also, it's not all that good to mix UI with logic...
A:
If I need methods that generate HTML I usually isolate them in an HtmlHelpers class. That way you keep some level of separation. The ASP.NET MVC Framework does this quite successfully.
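For example, such a helper might look roughly like this (the names are placeholders; the point is just that markup generation lives in one well-defined class):
using System.Collections.Generic;
using System.Text;
using System.Web;

public static class HtmlHelpers
{
    // Builds an <option> list from data, keeping markup generation in one place
    // instead of scattering it through the business logic.
    public static string OptionList(IEnumerable<string> values, string selected)
    {
        StringBuilder sb = new StringBuilder();
        foreach (string value in values)
        {
            string encoded = HttpUtility.HtmlEncode(value);
            sb.AppendFormat("<option value=\"{0}\"{1}>{0}</option>",
                encoded,
                value == selected ? " selected=\"selected\"" : "");
        }
        return sb.ToString();
    }
}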
A:
If you mean printing out HTML in your code, then no. Unless you have a good reason not to, you should use templates
Even if you think you don't need this now, there's always a good chance you'll need it later. Maybe you want to output in a different format than HTML, or you want different presentation for the same data. You usually have the need for these things further down the road, so it's best to use one from the start.
A:
I hate when developers print() a bunch of html. It's completely unnecessary and looks ugly in any text editor that shows print/echo strings in red.
A:
I agree with everyone else that you should try as hard as you can to separate the HTML/XHTML markup from the application logic. However, sometimes you do need to generate HTML/XHTML in the application logic for various reasons.
In these cases, what I have been trying to do is ensure that the bare minimum amount of presentation code is mixed in with the application logic, and to migrate everything else over to the presentation code. It is worth noting that in some cases you could have everything moved over to the presentation layer, but it might be a bit easier to generate the markup as part of the application logic. In those cases, your best bet is likely to go the route that makes the most sense in terms of time.
A:
I don't think there's any excuse for generating HTML inside your business logic. Don't even do it when it's just a "quick fix" or when you'll "go back and fix it later", because that never happens.
To reiterate my position from other questions, using some control logic (conditionals, loops) within HTML to construct it is OK. Do NOT do any data massaging or business logic in the HTML. You have to be disciplined, but it's worth it. Maintenance is much easier if your concerns (like logic and display) are separated.
A:
Ideally you are aiming for a separation of concerns between your presentation (UI) code and your domain (business logic) code.
The reason why you should avoid coupling these two concerns (in either direction) is simple...
You will only have one reason to change a piece of code. Whether this is from structural/styling changes in your HTML design, or from your business rules changing, you should only have to make the change in one place.
To a lesser extent, although many purists would disagree, by sprinkling HTML code through your domain code or vice versa you are creating noise for the next developer who comes along to read/maintain it.
A:
I try to avoid using code to print HTML "directly". It is difficult to maintain, edit, add styles to, and so on. In some cases, like generating an HTML email in code, I create a text file or HTML file with markers like [name] and [verification code]. I load this from the code and replace those markers. This way, you can edit the style of the email without re-compiling your code. Separating "presentation" and "logic" is a good practice in my opinion.
Mixing code within HTML is generally not a good practice, for reasons similar to those in #1. However, I do use code in HTML for things like simple dynamic strings that are displayed multiple times on a page or pages. I think this is better than creating multiple server controls for the same exact values. Since this is not code "logic" mixed in the HTML, I think this is ok.
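A rough sketch of the marker-replacement approach from point 1 (the file name and markers here are only examples):
using System.IO;

class EmailTemplate
{
    // Load the HTML template from disk and swap markers for real values, so the
    // email's styling can be edited without recompiling the application.
    static string Build(string name, string verificationCode)
    {
        string body = File.ReadAllText("VerificationEmail.html");
        body = body.Replace("[name]", name);
        body = body.Replace("[verification code]", verificationCode);
        return body;
    }
}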
|
Should HTML co-exist with code?
|
In a web application, is it acceptable to use HTML in your code (non-scripted languages, Java, .NET)?
There are two major sub questions:
Should you use code to print HTML, or otherwise directly create HTML that is displayed?
Should you mix code within your HTML pages?
|
[
"Generally, it's better to keep presentation (HTML) separate from logic (\"back-end\" code). Your code is decoupled and easier to maintain this way.\n",
"As long as your HTML-writing code is separate from your application logic, and the HTML is guaranteed to be well-formed somehow, you should be okay.\nThe only code that should be mixed in markup-based pages (i.e, those that contain literal HTML) is the code used for formatting the HTML (e.g., a loop for writing out a list). \nThere are trade-offs whether you put the code in with the HTML or you use pure code to write the HTML out using quoted string literals.\n",
"No, if you want to build good and maintainable software, and to achieve loose coupling.\n",
"If I understand the question right, you're asking whether it's a good practice to mix markup with back-end code. No. While this is commonly done, it's still a bad idea.\nYou should read up on the MVC paradigm, as well as on existing questions on the matter, such as What is the best way to migrate an existing messy webapp to elegant MVC? and Best practices for refactoring classic ASP?\n",
"The point is to keep the display logic separate from the rest of the code. In any complex site you'll have code mixed in with your HTML, but the code should be for display purposes only. It shouldn't be doing any complex calculations.\nFor example, templates will contain loops and conditionals. Plus you'll probably have a library of HTML-specific routines, like printing out an <option> list based on a list object.\nImagine you were writing an application that has two output modes: HTML and something else. How would you write it, to avoid duplicating code? That will probably point you in the right direction.\n",
"The HTML that makes up the view has to get sent to the browser in some way. In .net, each server control emits its own HTML markup as part of the page lifecycle. So yes it is OK to use HTML in server side code.\nPerhaps you should try following the ASP.net pattern. Create a bunch of controls that represent UI elements and make them responsible for emitting their own HTML based on their state.\n",
"Its fugly, and not type safe. But people do it without consequence. I'd prefer using a DOM or, at a minimum, classes designed to write HTML using type safe semantics. Also, its not all that good to mix UI with logic...\n",
"If I need methods that generate HTML I usually isolate them in an HtmlHelpers class. That way you keep some level of separation. The ASP.NET MVC Framework does this quite successfully.\n",
"If you mean printing out HTML in your code, then no. Unless you have a good reason not to, you should use templates\nEven if you think you don't need this now, there's always a good chance you'll need it later. Maybe you want to output in a different format than HTML, or you want different presentation for the same data. You usually have the need for these things further down the road, so it's best to use one from the start.\n",
"I hate when developers print() a bunch of html. It's completely unnecessary and looks ugly in any text editor that shows print/echo strings in red.\n",
"I agree with everyone else that you should try as hard as you can to separate the HTML/XHTML markup from the application logic. However, sometimes you do need to generate HTML/XHTML in the application logic for various reasons. \nIn these cases what I have been trying to do is to ensure the bare minimum amount of presentation code is in mixed in with the application logic and try to migrate everything else over to the presentation code. It is worth nothing that is some cases you have situations where you could have everything moved over to the presentation layer, but it might be a bit easier to generate the markup as part of the application logic. In those cases, your best bet is likely to be to go the route that makes the most sense in terms of time.\n",
"I don't think there's any excuse for generating HTML inside your business logic. Don't even do it when it's just a \"quick fix\" or when you'll \"go back and fix it later\", because that never happens.\nTo reiterate my position from other questions, using some control logic (conditionals, loops) within HTML to construct it is OK. Do NOT do any data massaging or business logic in the HTML. You have to be disciplined, but it's worth it. Maintenance is much easier if your concerns (like logic and display) are separated.\n",
"Ideally you are aiming for a separation of concerns between your presentation (UI) code and your domain (business logic) code.\nThe reason why you should avoid coupling these two concerns (in either direction) is simple...\nYou will only have one reason to change a piece of code. whether this is from structural/styling changes in your html design, or from your business rules changing, you should only have to make the change in one place.\nTo a lesser extent, although many purists would disagree, by sprinkling HTML code through your domain code or vice versa you are creating noise for the next developer who comes along to read/maintain it. \n",
"\nI try to avoid using code to print HTML \"directly\". It is difficult to maintain, edit, add styles and etc. Some cases like generating an HTML email in the code, I create a text file or HTML file with markers like, [name], [verification code] and etc. I load this from the code and replace those markers. This way, you can edit the style of the email without re-compiling your code. Separating \"presentation\" and \"logic\" is a good practice in my opinion.\nMixing code within HTML is generally not a good practice in similar reasons as said in #1. However, I do use code in HTML for things like simple dynamic strings that are displayed multiple times on a page or pages. I think this is better than creating multiple server controls for same exact values to set. Since this is not code \"logic\" mixed in the HTML, I think this is ok.\n\n"
] |
[
14,
5,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"html",
"user_interface"
] |
stackoverflow_0000064760_html_user_interface.txt
|
Q:
BufferedImage in IKVM
What is the best and/or easiest way to replace the missing BufferedImage functionality for a Java project I am converting to .NET with IKVM?
I'm basically getting "cli.System.NotImplementedException: BufferedImage" exceptions when running the application, which otherwise runs fine.
A:
The AWT code in IKVM is fairly easy to read and edit. I'd recommend you look for the methods that you are using that throw that exception, and then implement them. I've done this several times before with IKVM's AWT implementation and found it easy to do for background/server related functions. It's much less usable if your app is a desktop app, however.
|
BufferedImage in IKVM
|
What is the best and/or easiest way to replace the missing BufferedImage functionality for a Java project I am converting to .NET with IKVM?
I'm basically getting "cli.System.NotImplementedException: BufferedImage" exceptions when running the application, which otherwise runs fine.
|
[
"The AWT code in IKVM is fairly easy to read and edit. I'd recommend you look for the methods that you are using that throw that exception, and then implement them. I've done this several times before with IKVM's AWT implementation and found it easy to do for background/server related functions. Its much less usable if your app is a desktop app, however.\n"
] |
[
3
] |
[] |
[] |
[
".net",
"bufferedimage",
"ikvm",
"java",
"notimplementedexception"
] |
stackoverflow_0000063030_.net_bufferedimage_ikvm_java_notimplementedexception.txt
|
Q:
Priority of a query in MS SQL
Is there a way to tell MS SQL that a query is not too important and that it can (and should) take its time?
Likewise is there a way to tell MS SQL that it should give higher priority to a query?
A:
Not in versions below SQL 2008. In SQL Server 2008 there's the resource governor. Using that you can assign logins to groups based on properties of the login (login name, application name, etc). The groups can then be assigned to resource pools and limitations or restrictions i.t.o. resources can be applied to those resource pools
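A minimal sketch of what a Resource Governor setup looks like in SQL Server 2008 (the pool, group, and application names below are invented for illustration, and the classifier function lives in master):
-- Pool and workload group for low-priority work
CREATE RESOURCE POOL LowPriorityPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP LowPriorityGroup USING LowPriorityPool;
GO
-- Classifier routes sessions into a group based on login/application properties
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = N'NightlyReports' RETURN N'LowPriorityGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;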
A:
SQL Server does not have any form of resource governor yet. There is a SET option called QUERY_GOVERNOR_COST_LIMIT but it's not quite what you're looking for. And it prevents queries from executing based on the cost rather than controlling resources.
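For completeness, the SET option mentioned above is applied per session and rejects queries whose estimated cost (in seconds) exceeds the limit - roughly like this, where 300 is an arbitrary threshold:
SET QUERY_GOVERNOR_COST_LIMIT 300;  -- refuse queries estimated to cost more than ~300
-- ... run the query ...
SET QUERY_GOVERNOR_COST_LIMIT 0;    -- 0 removes the limit again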
A:
I'm not sure if this is what you're asking, but I had a situation where a single UI click added 10,000 records to an email queue (lots of data in the body). The email went out over the next several days so it didn't need to be a high priority, in fact it would bog the server every time it happened.
I split the procedure into 10,000 individual calls, ran the process on the UI in a different thread (set to low priority) and set it to sleep for a second after running the procedure. It took a while, but I had very granular control over exactly what it was doing.
btw, this was NOT spam, so don't flame me thinking it was.
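A rough C# sketch of that idea (the queue collection, the SendQueuedEmail call, and the one-second delay are all placeholders), assuming System.Threading is imported:
Thread worker = new Thread(() =>
{
    foreach (int id in queuedEmailIds)
    {
        SendQueuedEmail(id);   // hypothetical: runs the stored procedure for a single record
        Thread.Sleep(1000);    // give the server room to breathe between calls
    }
});
worker.Priority = ThreadPriority.Lowest;
worker.IsBackground = true;
worker.Start();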
|
Priority of a query in MS SQL
|
Is there a way to tell MS SQL that a query is not too important and that it can (and should) take its time?
Likewise is there a way to tell MS SQL that it should give higher priority to a query?
|
[
"Not in versions below SQL 2008. In SQL Server 2008 there's the resource governor. Using that you can assign logins to groups based on properties of the login (login name, application name, etc). The groups can then be assigned to resource pools and limitations or restrictions i.t.o. resources can be applied to those resource pools\n",
"SQL Server does not have any form of resource governor yet. There is a SET option called QUERY_GOVERNOR_COST_LIMIT but it's not quite what you're looking for. And it prevents queries from executing based on the cost rather than controlling resources. \n",
"I'm not sure if this is what you're asking, but I had a situation where a single UI click added 10,000 records to an email queue (lots of data in the body). The email went out over the next several days so it didn't need to be a high priority, in fact it would bog the server every time it happened.\nI split the procedure into 10,000 individual calls, ran the process on the UI in a different thread (set to low priority) and set it to sleep for a second after running the procedure. It took a while, but I had very granular control over exactly what it was doing.\nbtw, this was NOT spam, so don't flame me thinking it was.\n"
] |
[
18,
2,
1
] |
[] |
[] |
[
"database",
"sql",
"sql_server"
] |
stackoverflow_0000060878_database_sql_sql_server.txt
|
Q:
Error Serializing String in WebService call
This morning I ran into an issue with returning back a text string as result from a Web Service call. the Error I was getting is below
************** Exception Text **************
System.ServiceModel.CommunicationException: Error in deserializing body of reply message for operation 'GetFilingTreeXML'. ---> System.InvalidOperationException: There is an error in XML document (1, 9201). ---> System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)
at System.Xml.XmlExceptionHelper.ThrowMaxStringContentLengthExceeded(XmlDictionaryReader reader, Int32 maxStringContentLength)
at System.Xml.XmlDictionaryReader.ReadString(Int32 maxStringContentLength)
at System.Xml.XmlDictionaryReader.ReadString()
at System.Xml.XmlBaseReader.ReadElementString()
at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderImageServerClientInterfaceSoap.Read10_GetFilingTreeXMLResponse()
at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer9.Deserialize(XmlSerializationReader reader)
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
--- End of inner exception stack trace ---
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
at System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, XmlSerializer serializer, MessagePartDescription returnPart, MessagePartDescriptionCollection bodyParts, Object[] parameters, Boolean isRequest)
--- End of inner exception stack trace ---
I did a search and the results are below:
Search Results
Most of those are WCF related but were enough to point me in the right direction. I will post answer as reply.
A:
Try this blog post here. You can modify the MaxStringContentLength property in the Binding configuration.
A:
Jow Wirtley's blog post pointed me in the right direction.
All I had to do was update the bindings in the app.config of the client app and it all works now.
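For reference, the fix boils down to raising the reader quota on the client-side binding in app.config; the binding name, address, and contract below are placeholders:
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeStringBinding">
        <readerQuotas maxStringContentLength="2147483647" />
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address="http://server/ImageServer.asmx"
              binding="basicHttpBinding"
              bindingConfiguration="LargeStringBinding"
              contract="ImageServerClientInterfaceSoap" />
  </client>
</system.serviceModel>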
|
Error Serializing String in WebService call
|
This morning I ran into an issue with returning back a text string as result from a Web Service call. the Error I was getting is below
************** Exception Text **************
System.ServiceModel.CommunicationException: Error in deserializing body of reply message for operation 'GetFilingTreeXML'. ---> System.InvalidOperationException: There is an error in XML document (1, 9201). ---> System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)
at System.Xml.XmlExceptionHelper.ThrowMaxStringContentLengthExceeded(XmlDictionaryReader reader, Int32 maxStringContentLength)
at System.Xml.XmlDictionaryReader.ReadString(Int32 maxStringContentLength)
at System.Xml.XmlDictionaryReader.ReadString()
at System.Xml.XmlBaseReader.ReadElementString()
at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderImageServerClientInterfaceSoap.Read10_GetFilingTreeXMLResponse()
at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer9.Deserialize(XmlSerializationReader reader)
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
--- End of inner exception stack trace ---
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
at System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, XmlSerializer serializer, MessagePartDescription returnPart, MessagePartDescriptionCollection bodyParts, Object[] parameters, Boolean isRequest)
--- End of inner exception stack trace ---
I did a search and the results are below:
Search Results
Most of those are WCF related but were enough to point me in the right direction. I will post answer as reply.
|
[
"Try this blog post here. You can modify the MaxStringContentLength property in the Binding configuration.\n",
"Jow Wirtley's blog post pointed me in the right direction.\nAll I had to do was update the bindings in the app.config of the client app and it all works now.\n"
] |
[
29,
6
] |
[] |
[] |
[
"maxstringcontentlength",
"web_services",
"xmlreader"
] |
stackoverflow_0000065452_maxstringcontentlength_web_services_xmlreader.txt
|
Q:
Using OpenID for both .NET/Windows and PHP/Linux/Apache web sites
Is it possible to use OpenID for both .NET web sites and PHP websites (Apache/Linux)?
I have a manager that wants single sign-on for access to any/all web sites, regardless of which web server hosts a web site.
I create .NET web apps and the PHP web sites/apps are done by another programmer.
How would I go about using OpenID for a .NET web app?
What about for the PHP programmer?
A:
For .NET: http://code.google.com/p/dotnetopenid/
For PHP: http://openidenabled.com/php-openid/
A:
You can use OpenID for all sites, regardless of platform. Use this for ease of login (it's javascript):
https://www.idselector.com/
For your .NET sites, dotnetopenid works nicely. For PHP you can use the code from here:
http://openidenabled.com/php-openid/
OpenID uses the URL to identify the site - not the technology.
A:
use the following library:
http://code.google.com/p/dotnetopenid
|
Using OpenID for both .NET/Windows and PHP/Linux/Apache web sites
|
Is it possible to use OpenID for both .NET web sites and PHP websites (Apache/Linux)?
I have a manager that wants single sign-on for access to any/all web sites, regardless of which web server hosts a web site.
I create .NET web apps and the PHP web sites/apps are done by another programmer.
How would I go about using OpenID for a .NET web app?
What about for the PHP programmer?
|
[
"For .NET: http://code.google.com/p/dotnetopenid/\nFor PHP: http://openidenabled.com/php-openid/\n",
"You can use OpenID for all sites, regardless of platform. Use this for ease of login (it's javascript):\nhttps://www.idselector.com/\nFor your .NET sites, dotnetopenid works nicely. For PHP you can use the code from here:\nhttp://openidenabled.com/php-openid/\nOpenID uses the URL to identify the site - not the technology.\n",
"use the following library:\nhttp://code.google.com/p/dotnetopenid\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"apache",
"asp.net",
"linux",
"openid",
"php"
] |
stackoverflow_0000065494_apache_asp.net_linux_openid_php.txt
|
Q:
IIS crashes when serving an ASP.NET application under heavy load. How to troubleshoot it?
I am working on an ASP.NET web application, it seems to be working properly when I try to debug it in Visual Studio. However when I emulate heavy load, IIS crashes without any trace -- log entry in the system journal is very generic, "The World Wide Web Publishing service terminated unexpectedly. It has done this 4 time(s)."
How is it possible to get more information from IIS to troubleshoot this problem?
A:
Download Debugging tools for Windows:
http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx
Debugging Tools for Windows has a script (ADPLUS) that allows you to create dumps when a process CRASHES:
http://support.microsoft.com/kb/286350
The command should be something like (if you are using IIS6):
cscript adplus.vbs -crash -pn w3wp.exe
This command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file).
You can open it in WinDBG (also included in the Debugging Tools for Windows). File > Open Crash dump...
By default, WinDBG will show you (next to the command line) the thread where the process crashed.
The first thing you need to do in WinDBG is to load the .NET Framework extensions:
.loadby sos mscorwks
then, you will display the managed callstack:
!clrstack
if the thread was not running managed code, then you'll need to check the native stack:
kpn 200
This should give you some ideas. To continue troubleshooting I recommend you read the following article:
http://msdn.microsoft.com/en-us/library/ms954594.aspx
A:
A crash dump of the asp.net process should give you tons of info. If you want to quickly get some info on why the process got recycled, try this tip from Scott Gu.
The health monitoring feature of asp.net 2.0 is also worth looking at.
A:
The key is "without any trace". You need to put your own trace logging in to create some chatter. Then you'll be able to spot where the chatter stops.
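One minimal way to add that chatter in ASP.NET is System.Diagnostics tracing; the category string and log path below are just examples:
// sprinkle through the suspect code paths
System.Diagnostics.Trace.WriteLine("Entering ProcessUpload", "MyApp");

<!-- web.config: send trace output to a file so it survives the crash -->
<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <add name="fileLog" type="System.Diagnostics.TextWriterTraceListener"
           initializeData="C:\logs\webapp.log" />
    </listeners>
  </trace>
</system.diagnostics>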
|
IIS crashes when serving an ASP.NET application under heavy load. How to troubleshoot it?
|
I am working on an ASP.NET web application, it seems to be working properly when I try to debug it in Visual Studio. However when I emulate heavy load, IIS crashes without any trace -- log entry in the system journal is very generic, "The World Wide Web Publishing service terminated unexpectedly. It has done this 4 time(s)."
How is it possible to get more information from IIS to troubleshoot this problem?
|
[
"Download Debugging tools for Windows:\nhttp://www.microsoft.com/whdc/DevTools/Debugging/default.mspx\nDebugging Tools for Windows has has a script (ADPLUS) that allows you to create dumps when a process CRASHES:\nhttp://support.microsoft.com/kb/286350\nThe command should be something like (if you are using IIS6):\ncscript adplus.vbs -crash -pn w3wp.exe\n\nThis command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file).\nYou can open it in WinDBG (also included in the Debugging Tools for Windows). File > Open Crash dump...\nBy default, WinDBG will show you (next to the command line) the thread were the process crashed.\nThe first thing you need to do in WinDBG is to load the .NET Framework extensions:\n.loadby sos mscorwks\n\nthen, you will display the managed callstack:\n!clrstack\n\nif the thread was not running managed code, then you'll need to check the native stack:\nkpn 200\n\nThis should give you some ideas. To continue troubleshooting I recommend you read the following article:\nhttp://msdn.microsoft.com/en-us/library/ms954594.aspx\n",
"Crash dump of asp.net process should give you tons of info..If you want to quickly get some info on why the process got recycled, try this tip from Scott Gu..\nHealth monitoring feature of asp.net 2.0 is also worth looking at..\n",
"The key is \"without any trace\". You need to put your own trace logging in to create some chatter. Then you'll be able to spot where the chatter stops.\n"
] |
[
4,
3,
0
] |
[] |
[] |
[
"crash",
"debugging",
"iis"
] |
stackoverflow_0000062720_crash_debugging_iis.txt
|
Q:
Visual Studio 2008: Is it worth the upgrade from 2005?
As of the fall of 2008 I'm about to embark on a new development cycle for a major product that has a winforms and an asp.net interface. We use Telerik, DevExpress and Infragistics components in it and all are going to have a release within a month or so which will be the one I target for our spring release of our product.
They all support VS2005 and we will continue to target .net 2+ so I can't see any compelling reason so far to upgrade to VS2008.
Has anyone found a compelling reason for upgrading to VS2008?
A:
It's worth it. It's faster, the designer is vastly improved (split view, faster context switching), it has better support for javascript and when you're ready to target 3.5, you'll be ready to go.
A:
These are Microsoft's 10 reasons to upgrade (.DOC):
LINQ support
Same designer elements as Microsoft Expression (Web and Blend)
AJAX and WCF/REST
Better WPF support
Improved MSTEST (also included in Professional edition)
Improved HTML, CSS, and JavaScript editors
Choose from Project settings which version of the framework to target
Improved Office dev tools, including ribbon UI and Click-Once support
Integrated WCF and WWF support
Better performance and stability
A:
Yes, it's definitely worth the upgrade. I would actually say go straight to VS2008 SP1 as well. There have been a lot of IDE improvements (usability features and speed) and improvements in the web development experience as well, including better JS and CSS support.
A:
If you have a release within a month, I'd suggest not upgrading. Make the upgrade to 2k8 part of the next major release ... no reason you should risk something not working quite the same or some other complication if everything is working as is.
A:
To add to John's post, there is also built in unit testing, built in refactoring, code analysis, and the web designer for html\javascript is vastly improved. I can't think of any reason why you wouldn't upgrade.
A:
It is worth the upgrade for me for the main reason that I can target different .NET versions (2, 3, 3.5) from the same IDE whereas in the past, one version of Visual Studio supported one version of .NET.
The UI seems much more responsive now, but the core set of tools and processes hasn't changed that much.
A:
The new C# language features are compelling for me:
automatic properties, object initializers, collection initializers, extension methods, lambda expressions.
For a quick overview from the guy responsible, see:
http://weblogs.asp.net/scottgu/archive/2007/03/08/new-c-orcas-language-features-automatic-properties-object-initializers-and-collection-initializers.aspx
http://weblogs.asp.net/scottgu/archive/2007/03/13/new-orcas-language-feature-extension-methods.aspx
http://weblogs.asp.net/scottgu/archive/2007/04/08/new-orcas-language-feature-lambda-expressions.aspx
A:
I agree with Mr. Martinez in that I wouldn't port any existing projects up to the 3.5 framework, but the split designer and javascript debugging is worthwhile on its own.
A:
Upgrade, you will not regret it in the slightest. In particular, Linq is going to make your life so much easier. Then there are the extensions for C#.
That's barely touching the surface, there is certainly new toys in the area you are developing as well, either web, desktop or server.
A:
I'd upgrade, but set aside some time for the install process. It took two hours on my moderately fast dev workstation, and I'm still doing updates, patches, hotfixes, two hours after the install finished... (haven't gotten any "real work" done today at all!)
A:
It is helpful in the particular case you describe. Consider the following:
1) You are at the start of a development cycle. It is always easier to make these types of changes at the start of or between cycles as opposed to in the middle of one. Given this principle, your next convenient time to upgrade (if the schedule is not delayed) would be next Spring.
2) VS2008 allows for the compiler to target any specific .NET runtime version including 2.0 if you need to continue supporting an older framework.
Also, as some of the other answers have suggested, go straight to SP1. The service pack upgrade experience was not nearly as big of an ordeal as VS2005 SP1... at least in my experience.
A:
VS 2008 is not the point. The latest .NET package is the point. You can use Linq and all the other new features with notepad and the command-line compiler, but I guess that is more theoretical. So my statement is: yes, .NET 3.5 is the recommendation, but using it without VS 2008 isn't a good idea.
|
Visual Studio 2008: Is it worth the upgrade from 2005?
|
As of the fall of 2008 I'm about to embark on a new development cycle for a major product that has a winforms and an asp.net interface. We use Telerik, DevExpress and Infragistics components in it and all are going to have a release within a month or so which will be the one I target for our spring release of our product.
They all support VS2005 and we will continue to target .net 2+ so I can't see any compelling reason so far to upgrade to VS2008.
Has anyone found a compelling reason for upgrading to VS2008?
|
[
"It's worth it. It's faster, the designer is vastly improved (split view, faster context switching), it has better support for javascript and when you're ready to target 3.5, you'll be ready to go.\n",
"These are Microsoft's 10 reasons to upgrade (.DOC):\n\nLINQ support\nSame designer elements as Microsoft Expression (Web and Blend)\nAJAX and WCF/REST\nBetter WPF support\nImproved MSTEST (also included in Professional edition)\nImproved HTML, CSS, and JavaScript editors\nChoose from Project settings which version of the framework to target\nImproved Office dev tools, including ribbon UI and Click-Once support \nIntegrated WCF and WWF support\nBetter performance and stability\n\n",
"Yes, it's definately worth the upgrade. I would actaully say go straight to VS2008 SP1 as well. There have been a lot of IDE improvements (usability features and speed) and improvements in the web development experience as well including better JS and CSS support.\n",
"If you have a release within a month, I'd suggest not upgrading. Make the upgrade to 2k8 part of the next major release ... no reason you should risk something not working quite the same or some other complication if everything is working as is.\n",
"To add to John's post, there is also built in unit testing, built in refactoring, code analysis, and the web designer for html\\javascript is vastly improved. I can't think of any reason why you wouldn't upgrade.\n",
"It is worth the upgrade for me for the main reason that I can target different .NET versions (2, 3, 3.5) from the same IDE whereas in the past, one version of Visual Studio supported one version of .NET.\nThe UI seems much more responsive now, but the core set of tools and processes hasn't changed that much.\n",
"The new C# language features are compelling for me:\nautomatic properties, object initializers, collection initializers, extension methods, lambda expressions.\nFor a quick overview from the guy responsible, see:\nhttp://weblogs.asp.net/scottgu/archive/2007/03/08/new-c-orcas-language-features-automatic-properties-object-initializers-and-collection-initializers.aspx\nhttp://weblogs.asp.net/scottgu/archive/2007/03/13/new-orcas-language-feature-extension-methods.aspx\nhttp://weblogs.asp.net/scottgu/archive/2007/04/08/new-orcas-language-feature-lambda-expressions.aspx\n",
"I agree with Mr. Martinez in that I wouldn't port any existing projects up to the 3.5 framework, but the split designer and javascript debugging is worthwhile on its own.\n",
"Upgrade, you will not regret it in the slightest. In particular, Linq is going to make your life so much easier. There there are the extensions for c#.\nThat's barely touching the surface, there is certainly new toys in the area you are developing as well, either web, desktop or server.\n",
"I'd upgrade, but set aside some time for the install process. It took two hours on my moderately fast dev workstation, and I'm still doing updates, patches, hotfixes, two hours after the install finished... (haven't gotten any \"real work\" done today at all!)\n",
"It is helpful in the particular case you describe. Consider the following:\n1) You are at the start of a development cycle. It is always easier to make these types of changes at the start of or between cycles as opposed to in the middle of one. Given this principle, your next convenient time to upgrade (if the schedule is not delayed) would be next Spring. \n2) VS2008 allows for the compiler to target any specific .NET runtime version including 2.0 if you need to continue supporting an older framework.\nAlso, as some of the other answers have suggested, go straight to SP1. The service pack upgrade experience was not nearly as big of an ordeal as VS2005 SP1... at least in my experience.\n",
"VS 2008 is not the point. The latest .Net package is the point. You can use Linq and all the other new Features with notepad and the commandline compiler but i guess that is more theoretical. So my statement is yes, .net 3.5 is the recommendation but using it without VS 2008 isn't a good idea.\n"
] |
[
12,
8,
5,
3,
2,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"ide",
"upgrade",
"visual_studio",
"visual_studio_2005",
"visual_studio_2008"
] |
stackoverflow_0000064839_ide_upgrade_visual_studio_visual_studio_2005_visual_studio_2008.txt
|
Q:
Is there a good .net library for 3-way comparison of HTML that can be used for merge?
In order to merge independent HTML changes, I'm looking for recommendations for a 3-way comparison / merge library for HTML. The common 3-way text merge algorithms perform poorly because they do not understand the tree-like structure of HTML and XML. Of course, such a library must understand the looser syntax of HTML, i.e. tags are not always closed. My platform is .Net.
A:
You could also just go cheap: Run the files through tidy and then compare. This will result in similar structures, where new / deleted children will show up with traditional diff tools. It breaks down on removal / addition of surrounding nodes - good luck on solving that one...
Also, the XML Notepad (sorry, couldn't find a link that works on microsoft.com) by Microsoft can compare XML files and does this in a tree based fashion.
|
Is there a good .net library for 3-way comparison of HTML that can be used for merge?
|
In order to merge independent HTML changes, I'm looking for recommendations for a 3-way comparison / merge library for HTML. The common 3-way text merge algorithms perform poorly because they do not understand the tree-like structure of HTML and XML. Of course, such a library must understand the looser syntax of HTML, i.e. tags are not always closed. My platform is .Net.
|
[
"You could also just go cheep: Run the files through tidy and then compare. This will result in similar structures, where new / deleted children will show up with traditional diff tools. It breaks down on removal / addition of surrounding nodes - good luck on solving that one...\nAlso, the XML Notepad (sorry, couldn't find a link that works on microsoft.com) by Microsoft can compare XML files and does this in a tree based fashion.\n"
] |
[
1
] |
[
"A simple google search offered up: Differ. I've never used it so I can't vouch for the quality of that :-)\n"
] |
[
-1
] |
[
".net",
"diff",
"merge"
] |
stackoverflow_0000065463_.net_diff_merge.txt
|
Q:
Mono-Develop throws error "" when trying to create select Gtk objects (dialogs), why?
I've recently started playing with Mono (1.9.1) on Ubuntu 8.04 with the Mono-Develop IDE (v1). I am attempting to use GTK-Sharp 2 to run the GUI for the play apps.
For some reason when I try to create gtk dialogs (ColorSelectionDialog or MessageDialog) the compiler throws the error "'Gtk.ColorSelectionDialog.ColorSelectionDialog(GLib.GType)' is inaccessible due to its protection level(CS0122)"
Perhaps these dialogs are not public objects in the GTK library?
Here is a sample of some c# code that throws the exception:
Gtk.ColorSelectionDialog dlg = new Gtk.ColorSelectionDialog(); //dont need any more than this
Any suggestions?
A:
Found a solution. Can't use the default constructor with no arguments. For some reason this constructor just doesn't work. If it's called like such:
MessageDialog md = new MessageDialog (parent_window,
DialogFlags.DestroyWithParent,
MessageType.Error,
ButtonsType.Close, "Error loading file");
Then it works ok. Obviously something is buggered up somewhere, but I don't have the technical know how to figure out how to fix the underlying problem in either Gtk or Mono.
|
Mono-Develop throws error "" when trying to create select Gtk objects (dialogs), why?
|
I've recently started playing with Mono (1.9.1) on Ubuntu 8.04 with the Mono-Develop IDE (v1). I am attempting to use GTK-Sharp 2 to run the GUI for the play apps.
For some reason when I try to create gtk dialogs (ColorSelectionDialog or MessageDialog) the compiler throws the error "'Gtk.ColorSelectionDialog.ColorSelectionDialog(GLib.GType)' is inaccessible due to its protection level(CS0122)"
Perhaps these dialogs are not public objects in the GTK library?
Here is a sample of some c# code that throws the exception:
Gtk.ColorSelectionDialog dlg = new Gtk.ColorSelectionDialog(); //dont need any more than this
Any suggestions?
|
[
"Found a solution. Can't use the default constructor with no arguments. For some reason this constructor just doesn't work. If it's called like such:\nMessageDialog md = new MessageDialog (parent_window, \n DialogFlags.DestroyWithParent,\n MessageType.Error, \n ButtonsType.Close, \"Error loading file\");\n\nThen it works ok. Obviously something is buggered up somewhere, but I don't have the technical know how to figure out how to fix the underlying problem in either Gtk or Mono.\n"
] |
[
4
] |
[] |
[] |
[
"dialog",
"gtk",
"mono",
"monodevelop"
] |
stackoverflow_0000065516_dialog_gtk_mono_monodevelop.txt
|
Q:
Output parameters not readable when used with a DataReader
When using a DataReader object to access data from a database (such as SQL Server) through stored procedures, any output parameter added to the Command object before executing are not being filled after reading. I can read row data just fine, as well as all input parameters, but not output ones.
A:
This is due to the "by design" nature of DataReaders. Any parameters marked as ParameterDirection.Output won't be "filled" until the DataReader has been closed. While still open, all Output parameters will more than likely just come back null.
The full Microsoft KB article concerning this can be viewed here.
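A small sketch of the pattern (the stored procedure and parameter names here are invented; assumes the System.Data and System.Data.SqlClient namespaces): read the rows, let the reader close, and only then look at the output parameter.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.GetOrders", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter totalCount = cmd.Parameters.Add("@TotalCount", SqlDbType.Int);
    totalCount.Direction = ParameterDirection.Output;

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // consume row data here
        }
    }   // reader is closed/disposed here

    int total = (int)totalCount.Value;   // populated only after the reader has closed
}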
|
Output parameters not readable when used with a DataReader
|
When using a DataReader object to access data from a database (such as SQL Server) through stored procedures, any output parameter added to the Command object before executing are not being filled after reading. I can read row data just fine, as well as all input parameters, but not output ones.
|
[
"This is due to the \"by design\" nature of DataReaders. Any parameters marked as ParameterDirection.Output won't be \"filled\" until the DataReader has been closed. While still open, all Output parameters will more than likely just come back null.\nThe full Microsoft KB article concerning this can be viewed here.\n"
] |
[
16
] |
[] |
[] |
[
".net",
"ado.net",
"datareader",
"parameters",
"stored_procedures"
] |
stackoverflow_0000065662_.net_ado.net_datareader_parameters_stored_procedures.txt
|
Q:
How to add method using metaclass
How do I add an instance method to a class using a metaclass (yes I do need to use a metaclass)? The following kind of works, but the func_name will still be "foo":
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
dict["foobar"] = bar
return type(name, bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
>>> f = Foo()
>>> f.foobar()
bar
>>> f.foobar.func_name
'bar'
My problem is that some library code actually uses the func_name and later fails to find the 'bar' method of the Foo instance. I could do:
dict["foobar"] = types.FunctionType(bar.func_code, {}, "foobar")
There is also types.MethodType, but I need an instance that doesn't exist yet to use that. Am I missing something here?
A:
Try dynamically extending the bases that way you can take advantage of the mro and the methods are actual methods:
class Parent(object):
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
return type(name, (Parent,) + bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
if __name__ == "__main__":
f = Foo()
f.bar()
print f.bar.func_name
A:
I think what you want to do is this:
>>> class Foo():
... def __init__(self, x):
... self.x = x
...
>>> def bar(self):
... print 'bar:', self.x
...
>>> bar.func_name = 'foobar'
>>> Foo.foobar = bar
>>> f = Foo(12)
>>> f.foobar()
bar: 12
>>> f.foobar.func_name
'foobar'
Now you are free to pass Foos to a library that expects Foo instances to have a method named foobar.
Unfortunately, (1) I don't know how to use metaclasses and (2) I'm not sure I read your question correctly, but I hope this helps.
Note that func_name is only assignable in Python 2.4 and higher.
|
How to add method using metaclass
|
How do I add an instance method to a class using a metaclass (yes I do need to use a metaclass)? The following kind of works, but the func_name will still be "foo":
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
dict["foobar"] = bar
return type(name, bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
>>> f = Foo()
>>> f.foobar()
bar
>>> f.foobar.func_name
'bar'
My problem is that some library code actually uses the func_name and later fails to find the 'bar' method of the Foo instance. I could do:
dict["foobar"] = types.FunctionType(bar.func_code, {}, "foobar")
There is also types.MethodType, but I need an instance that doesn't exist yet to use that. Am I missing something here?
|
[
"Try dynamically extending the bases that way you can take advantage of the mro and the methods are actual methods:\nclass Parent(object):\n def bar(self):\n print \"bar\"\n\nclass MetaFoo(type):\n def __new__(cls, name, bases, dict):\n return type(name, (Parent,) + bases, dict)\n\nclass Foo(object):\n __metaclass__ = MetaFoo\n\nif __name__ == \"__main__\":\n f = Foo()\n f.bar()\n print f.bar.func_name\n\n",
"I think what you want to do is this:\n>>> class Foo():\n... def __init__(self, x):\n... self.x = x\n... \n>>> def bar(self):\n... print 'bar:', self.x\n... \n>>> bar.func_name = 'foobar'\n>>> Foo.foobar = bar\n>>> f = Foo(12)\n>>> f.foobar()\nbar: 12\n>>> f.foobar.func_name\n'foobar'\n\nNow you are free to pass Foos to a library that expects Foo instances to have a method named foobar.\nUnfortunately, (1) I don't know how to use metaclasses and (2) I'm not sure I read your question correctly, but I hope this helps. \nNote that func_name is only assignable in Python 2.4 and higher.\n"
] |
[
15,
2
] |
[] |
[] |
[
"metaclass",
"python"
] |
stackoverflow_0000065400_metaclass_python.txt
|
Q:
What are the best keyboard macros for programming in windows?
I like putting shortcuts of the form "g - google.lnk" in my start menu so google is two keystrokes away. Win, g.
My eight or so most frequent applications go there.
I also make links to my solution files I am always opening "x - Popular Project.lnk"
Are there any better ways to automate opening frequently used applications?
A:
AutoHotkey is a reasonably good program for implementing windows key shortcuts. You might instead define WIN + G to be "open browser to google" which gives you a better response time (don't have to wait for start menu to popup, etc)
There are macro programs that change the macros used based on the window that's in focus. I've never needed that much control, but you might want to look into that.
-Adam
A:
Get a keyboard launcher program like Launchy
A:
For shortcuts I use Launchy
For macros I use AutoHotKey
Others will suggest SlickRun for shortcuts also.
A:
I use a lot the "intellisense" snippets in Visual Studio. You can include your own snippets and press double tab when they appear in the list. That's definitely a time saver.
A:
I use QuickMacros and love it.
so much so, that I did some extensive training articles on it here.
A:
The holy grail-
Ctrl-C, Ctrl-V
I kid, I kid! Try the veal!
|
What are the best keyboard macros for programming in windows?
|
I like putting shortcuts of the form "g - google.lnk" in my start menu so google is two keystrokes away. Win, g.
My eight or so most frequent applications go there.
I also make links to my solution files I am always opening "x - Popular Project.lnk"
Are there any better ways to automate opening frequently used applications?
|
[
"AutoHotkey is a reasonably good program for implementing windows key shortcuts. You might instead define WIN + G to be \"open browser to google\" which gives you a better response time (don't have to wait for start menu to popup, etc)\nThere are macro programs that change the macros used based on the window that's in focus. I've never needed that much control, but you might want to look into that.\n-Adam\n",
"Get a keyboard launcher program like Launchy\n",
"For shortcuts I use Launchy\nFor macros I use AutoHotKey\nOthers will suggest SlickRun for shortcuts also.\n",
"I use a lot the \"intellisense\" snippets in Visual Studio. You can include your own snippets and press double tab when they appear in the list. That's definitely a time saver. \n",
"I use QuickMacros and love it.\nso much so, that I did some extensive training articles on it here.\n",
"The holy grail- \nCtrl-C, Ctrl-V\nI kid, I kid! Try the veal!\n"
] |
[
3,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"keyboard",
"macros",
"performance"
] |
stackoverflow_0000044350_keyboard_macros_performance.txt
|
Q:
How create threads under Python for Delphi
I'm hosting Python script with Python for Delphi components inside my Delphi application. I'd like to create background tasks which keep running by script.
Is it possible to create threads which keep running even if the script execution ends (but not the host process, which keeps going on)? I've noticed that the program gets stuck if the executing script ends and there is a thread running. However, if I wait until the thread is finished, everything goes fine.
I'm trying to use "threading" standard module for threads.
A:
Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.
threading Documentation
thread Documentation
The thread module offers low level threading and synchronization using simple Lock objects.
Again, not sure if this helps since you're using Python under a Delphi environment.
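If the immediate problem is that a leftover thread keeps the interpreter from shutting down when the script finishes, a daemon thread is the usual workaround - a minimal sketch (whether this plays nicely with an embedded interpreter like Python for Delphi is something to verify):
import threading
import time

def background_task():
    while True:
        # ... do the periodic work here ...
        time.sleep(1)

t = threading.Thread(target=background_task)
t.setDaemon(True)   # daemon threads do not keep the interpreter alive
t.start()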
A:
Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.
You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.
A:
If a process dies, all its threads die with it, so a solution might be a separate process.
See if creating an xmlrpc server might help you; that is a simple solution for interprocess communication.
|
How create threads under Python for Delphi
|
I'm hosting Python script with Python for Delphi components inside my Delphi application. I'd like to create background tasks which keep running by script.
Is it possible to create threads which keep running even if the script execution ends (but not the host process, which keeps going on)? I've noticed that the program gets stuck if the executing script ends and there is a thread running. However, if I wait until the thread is finished, everything goes fine.
I'm trying to use "threading" standard module for threads.
|
[
"Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.\nthreading Documentation\nthread Documentation\nThe thread module offers low level threading and synchronization using simple Lock objects.\nAgain, not sure if this helps since you're using Python under a Delphi environment.\n",
"Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.\nYou'll probably want the new process to end (via exit() or the like) immediately after spawning the script.\n",
"If a process dies all it's threads die with it, so a solution might be a separate process.\nSee if creating a xmlrpc server might help you, that is a simple solution for interprocess communication.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"delphi",
"python"
] |
stackoverflow_0000063681_delphi_python.txt
|
Q:
Lazy Loading with a WCF Service Domain Model?
I'm looking to push my domain model into a WCF Service API and wanted to get some thoughts on lazy loading techniques with this type of setup.
Any suggestions when taking this approach?
When I implemented this technique and stepped into my app, just before the server returned my list it hit the getter of each property that is supposed to be lazy loaded ... thus eager loading. Could you explain this issue or suggest a resolution?
Edit: It appears you can use the XMLIgnore attribute so it doesn’t get looked at during serialization .. still reading up on this though
A:
Don't do lazy loading over a service interface. Define explicit DTO's and consume those as your data contracts in WCF.
You can use NHibernate (or other ORMs) to properly fetch the objects you need to construct the DTOs.
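A bare-bones illustration of the DTO idea (all type and member names are made up): the service hands back fully populated, serializable objects instead of ORM entities carrying lazy proxies.
// assumes System.Collections.Generic, System.Runtime.Serialization, System.ServiceModel
[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string CustomerName { get; set; }
    [DataMember] public List<string> LineDescriptions { get; set; }  // already loaded, no lazy proxy crosses the wire
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);   // the implementation maps the ORM entity to the DTO before returning
}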
A:
As for any remoting architecture, you'll want to avoid loading a full object graph "down the wire" in an uncontrolled way (unless you have a trivially small number of objects).
The Wikipedia article has the standard techniques pretty much summarised (and in C#. too!). I've used both ghosts and value holders and they work pretty well.
To implement this kind of technique, make sure that you separate concerns strictly. On the server, your service contract implementation classes should be the only bits of the code that work with data contracts. On the client, the service access layer should be the only code that works with the proxies.
Layering like this lets you adjust the way that the service is implemented relatively independently of the UI layers calling the service and the business tier that's being called. It also gives you half a chance of unit testing!
A:
You could try to use something REST based (e.g. ADO.NET Data Services) and wrap it transpariently into your client code.
|
Lazy Loading with a WCF Service Domain Model?
|
I'm looking to push my domain model into a WCF Service API and wanted to get some thoughts on lazy loading techniques with this type of setup.
Any suggestions when taking this approach?
When I implemented this technique and stepped into my app, just before the server returned my list it hit the getter of each property that is supposed to be lazy loaded ... thus eager loading. Could you explain this issue or suggest a resolution?
Edit: It appears you can use the XMLIgnore attribute so it doesn’t get looked at during serialization .. still reading up on this though
|
[
"Don't do lazy loading over a service interface. Define explicit DTO's and consume those as your data contracts in WCF.\nYou can use NHibernate (or other ORMs) to properly fetch the objects you need to construct the DTOs.\n",
"As for any remoting architecture, you'll want to avoid loading a full object graph \"down the wire\" in an uncontrolled way (unless you have a trivially small number of objects).\nThe Wikipedia article has the standard techniques pretty much summarised (and in C#. too!). I've used both ghosts and value holders and they work pretty well.\nTo implement this kind of technique, make sure that you separate concerns strictly. On the server, your service contract implementation classes should be the only bits of the code that work with data contracts. On the client, the service access layer should be the only code that works with the proxies. \nLayering like this lets you adjust the way that the service is implemented relatively independently of the UI layers calling the service and the business tier that's being called. It also gives you half a chance of unit testing!\n",
"You could try to use something REST based (e.g. ADO.NET Data Services) and wrap it transpariently into your client code.\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"domain_driven_design",
"wcf",
"web_services"
] |
stackoverflow_0000035560_domain_driven_design_wcf_web_services.txt
|
Q:
So which is faster truly? Flash, Silverlight or Animated gifs?
I am trying to develop a multimedia site and I am leaning heavily toward Silverlight however Flash is always a main player. I am a Speed and performance type developer. Which Technology will load fastest in the given scenarios? 56k, DSL and Cable?
A:
It all depends on what you're doing: animation, video, calculation, etc? There are some tests that show Silverlight being faster for raw computation, while Flash's graphics engine is farther along (GPU utilization, 3D, etc.).
If you're talking about load time, there are definitely things you can do in Silverlight to make your XAP file smaller than most images - the Hard Rock Memorabilia team got their XAP down under 70K, and that site browsed GB of photo data. I'm sure you can do the same in Flash.
While your question is focused on performance, as others have mentioned you do have to take into account the 4.5MB install for Silverlight, since it's not widely installed yet.
A:
Animated GIFs will mostly be faster than Flash/Silverlight. But Flash/Silverlight are in a different league.
WRT Flash vs Silverlight:
Based on the demos I have seen, Flash seems to be faster/less CPU intensive than Silverlight. It may be because Flash has matured a lot and there is a lot of known optimization code available.
A:
Actually, you have to assume that Flash is probably already installed on the user's browser, and SilverLight probably not. So the cost of installing silverlight (though a small download) has to be taken in to consideration as well.
Silverlight, however, does have some pretty neat out of the box multimedia support.
A:
It depends what content you're serving. If the image can be vector data and not a raster (like a .gif) then either flash or silverlight would be immensely smaller in size than the equivalent .gif.
It's hard to compare Silverlight to Flash, as it's still in beta. If you choose to use Silverlight, realize that Flash is installed on many more machines than Silverlight is, so you better have a good reason (missing feature from Flash) to use it, at this point in time.
A:
Silverlight doesn't yet have the market penetration for mission critical stuff. The big deployments of it have been mainly situations where Microsoft is trying to push market penetration by paying NBC to host Olympics content and the like.
Flash is the de facto standard for rich media sites. Animated GIFs are extremely limited and aren't likely to be a complete solution in most cases.
|
So which is faster truly? Flash, Silverlight or Animated gifs?
|
I am trying to develop a multimedia site and I am leaning heavily toward Silverlight however Flash is always a main player. I am a Speed and performance type developer. Which Technology will load fastest in the given scenarios? 56k, DSL and Cable?
|
[
"It all depends on what you're doing: animation, video, calculation, etc? There are some tests that show Silverlight being faster for raw computation, while Flash's graphics engine is farther along (GPU utilization, 3D, etc.).\nIf you're talking about load time, there are definitely things you can do in Silverlight to make your XAP file smaller than most images - the Hard Rock Memorabilia team got their XAP down under 70K, and that site browsed GB of photo data. I'm sure you can do the same in Flash.\nWhile your question is focused on performance, as others have mentioned you do have to take into account the 4.5MB install for Silverlight, since it's not widely installed yet.\n",
"Animater Gif's will mostly be faster than Flash/Silverlight. But Flash/Silverlight are in a different league.\nWRT Flash Vs Silverlight:\nBased on the demo's I have seen, flash seems to be faster/less CPU intensive than silverlight. It may be because Flash has matured a lot and there is a lot of known optimization code available. \n",
"Actually, you have to assume that Flash is probably already installed on the user's browser, and SilverLight probably not. So the cost of installing silverlight (though a small download) has to be taken in to consideration as well.\nSilverlight, however, does have some pretty neat out of the box multimedia support.\n",
"It depends what content you're serving. If the image can be vector data and not a raster (like a .gif) then either flash or silverlight would be immensly smaller in size than the equivalent .gif.\nIt's hard to compare Silverlight to Flash, as it's still in beta. If you choose to use Silverlight, realize that Flash is installed on many more machines than Silverlight is, so you better have a good reason (missing feature from Flash) to use it, at this point in time.\n",
"Silverlight doesn't yet have the market penetration for mission critical stuff. The big deployments of it have been mainly situations where Microsoft is trying to push market penetration by paying NBC to host Olympics content and the like.\nFlash is the de facto standard for rich media sites. Animated GIFs are extremely limited and aren't likely to be a complete solution in most cases.\n"
] |
[
3,
2,
0,
0,
0
] |
[] |
[] |
[
"animated_gif",
"flash",
"silverlight"
] |
stackoverflow_0000065694_animated_gif_flash_silverlight.txt
|
Q:
Execute shortcuts like programs
Example: You have a shortcut s to SomeProgram in the current directory.
In cmd.exe, you can type s and it will launch the program.
In PowerShell, typing s gives:
The term 's' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.
If you type s.lnk or SomeProgram, it runs the program just fine.
How can I configure PowerShell to execute shortcuts just like programs?
A:
You can also invoke a shortcut by using the "invoke-item" cmdlet. So for example if you wanted to launch "internet explorer.lnk" you can type the following command:
invoke-item 'Internet Explorer.lnk'
Or you could also use the alias
ii 'internet explorer.lnk'
Another cool thing is that you could do "invoke-item t.txt" and it would automatically open whatever the default handler for *.txt files were, such as notepad.
Note If you want to execute an application, app.exe, in the current directory you have to actually specify the path, relative or absolute, to execute. ".\app.exe" is what you would need to type to execute the application.
A:
On my Vista system typing S won't launch a .lnk file unless I have the environment variable PATHEXT set with .lnk in the list. When I do, S will work in cmd.exe and I have to do .\S in PowerShell.
A:
After adding ;.LNK to the end of my PATHEXT environment variable, I can now execute shortcuts even without the preceding ./ notation. (Thanks bruceatk!)
I was also inspired by Steven's suggestion to create a little script that automatically aliases all the shortcuts in my PATH (even though I plan to stick with the simpler solution ;).
$env:path.Split( ';' ) |
Get-ChildItem -filter *.lnk |
select @{ Name='Path'; Expression={ $_.FullName } },
@{ Name='Name'; Expression={ [IO.Path]::GetFileNameWithoutExtension( $_.Name ) } } |
where { -not (Get-Alias $_.Name -ea 0) } |
foreach { Set-Alias $_.Name $_.Path }
A:
I don't believe you can. You might be better off aliasing commonly used commands in a script that you call from your profile script.
Example -
Set-Alias np c:\windows\notepad.exe
Then you have your short, easily typeable name available from the command line.
A:
For one, the shortcut is not "s" it is "s.lnk". E.g. you are not able to open a text file (say with notepad) by typing "t" when the name is "t.txt" :) Technet says
The PATHEXT environment variable
defines the list of file extensions
checked by Windows NT when searching
for an executable file. The default value of PATHEXT is .COM;.EXE;.BAT;.CMD
You can dot-source as described by others here, or you could also use the invocation character "&". This means that PS treats your string as something to execute rather than just text. This might be more important in a script though.
I'd add that you should pass any parameters OUTSIDE of the quotes (this one bit me before) note that the "-r" is not in the quoted string, only the exe.
& "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -r | out-null
A:
You can always use tab completion to type "s[TAB]" and press ENTER and that will execute it.
|
Execute shortcuts like programs
|
Example: You have a shortcut s to SomeProgram in the current directory.
In cmd.exe, you can type s and it will launch the program.
In PowerShell, typing s gives:
The term 's' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.
If you type s.lnk or SomeProgram, it runs the program just fine.
How can I configure PowerShell to execute shortcuts just like programs?
|
[
"You can also invoke a shortcut by using the \"invoke-item\" cmdlet. So for example if you wanted to launch \"internet explorer.lnk\" you can type the following command:\ninvoke-item 'Internet Explorer.lnk'\n\nOr you could also use the alias\nii 'internet explorer.lnk'\n\nAnother cool thing is that you could do \"invoke-item t.txt\" and it would automatically open whatever the default handler for *.txt files were, such as notepad.\nNote If you want to execute an application, app.exe, in the current directory you have to actually specify the path, relative or absolute, to execute. \".\\app.exe\" is what you would need to type to execute the application.\n",
"On my Vista system typing S won't launch a lnk file unless I have the environment variable PATHEXT set with .lnk in the list. When I do. S will work in cmd.exe and I have to do .\\S in powershell.\n",
"After adding ;.LNK to the end of my PATHEXT environment variable, I can now execute shortcuts even without the preceding ./ notation. (Thanks bruceatk!)\nI was also inspired by Steven's suggestion to create a little script that automatically aliases all the shortcuts in my PATH (even though I plan to stick with the simpler solution ;).\n$env:path.Split( ';' ) | \n Get-ChildItem -filter *.lnk | \n select @{ Name='Path'; Expression={ $_.FullName } }, \n @{ Name='Name'; Expression={ [IO.Path]::GetFileNameWithoutExtension( $_.Name ) } } | \n where { -not (Get-Alias $_.Name -ea 0) } | \n foreach { Set-Alias $_.Name $_.Path }\n\n",
"I don't believe you can. You might be better off aliasing commonly used commands in a script that you call from your profile script.\nExample -\nSet-Alias np c:\\windows\\notepad.exe\nThen you have your short, easily typeable name available from the command line.\n",
"For one, the shortcut is not \"s\" it is \"s.lnk\". E.g. you are not able to open a text file (say with notepad) by typing \"t\" when the name is \"t.txt\" :) Technet says \n\nThe PATHEXT environment variable\n defines the list of file extensions\n checked by Windows NT when searching\n for an executable file. The default value of PATHEXT is .COM;.EXE;.BAT;.CMD\n\nYou can dot-source as described by others here, or you could also use the invocation character \"&\". This means that PS treats your string as something to execute rather than just text. This might be more important in a script though.\nI'd add that you should pass any parameters OUTSIDE of the quotes (this one bit me before) note that the \"-r\" is not in the quoted string, only the exe.\n& \"C:\\WINDOWS\\Microsoft.NET\\Framework\\v2.0.50727\\aspnet_regiis.exe\" -r | out-null\n\n",
"You can always use tab completion to type \"s[TAB]\" and press ENTER and that will execute it.\n"
] |
[
18,
9,
5,
2,
2,
0
] |
[] |
[] |
[
"powershell"
] |
stackoverflow_0000022524_powershell.txt
|
Q:
How would I go about creating a custom search index much like Lucene?
I implemented a Lucene search solution awhile back, and it got me interested in compressed file indexes that are searchable. At the time I could not find any good information on how exactly you would go about creating a custom search index, so I wonder if anyone can point me in the right direction?
My primary interest is in file formatting, compression, and something similar to the concept of Lucene's documents and fields. It should not necessarily be language specific, but if you can point me to online resources that have language specific implementations with full descriptions of the process then that is okay, too.
A:
Managing Gigabytes by Alistair Moffat, Timothy C. Bell
A:
You may also try looking at the source code of the excellent Sphinx search engine.
It is a modern, open-source full-text search engine, and it uses smartly optimized indexes.
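If it helps to see the core idea in code, here is a very small sketch of an inverted index in Python -- documents with named fields are tokenized, and each (field, term) pair maps to the set of documents containing it. This only illustrates the Lucene-style document/field concept; real indexes add compression, ranking, and an on-disk format, and every name below is invented for the example.
from collections import defaultdict

class TinyIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # (field, term) -> set of doc ids
        self.docs = {}                     # doc id -> stored fields

    def add(self, doc_id, fields):
        # fields is a dict such as {'title': '...', 'body': '...'}
        self.docs[doc_id] = fields
        for field, text in fields.items():
            for term in text.lower().split():
                self.postings[(field, term)].add(doc_id)

    def search(self, field, term):
        return [self.docs[d] for d in sorted(self.postings.get((field, term.lower()), ()))]

idx = TinyIndex()
idx.add(1, {'title': 'Managing Gigabytes', 'body': 'compressing and indexing documents'})
idx.add(2, {'title': 'Lucene in Action', 'body': 'documents and fields'})
print(idx.search('body', 'documents'))   # both documents match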
|
How would I go about creating a custom search index much like Lucene?
|
I implemented a Lucene search solution awhile back, and it got me interested in compressed file indexes that are searchable. At the time I could not find any good information on how exactly you would go about creating a custom search index, so I wonder if anyone can point me in the right direction?
My primary interest is in file formatting, compression, and something similar to the concept of Lucene's documents and fields. It should not necessarily be language specific, but if you can point me to online resources that have language specific implementations with full descriptions of the process then that is okay, too.
|
[
"Managing Gigabytes by Alistair Moffat, Timothy C. Bell\n\n",
"You may also try to look in the source code of excellent Sphinx search engine.\nIt is modern full-text open source search engine, and it uses smartly optimized indexes.\n"
] |
[
1,
1
] |
[] |
[] |
[
"indexing",
"multilingual",
"search"
] |
stackoverflow_0000065704_indexing_multilingual_search.txt
|
Q:
SharePoint WSS 3.0 Integration with Mac OSX (either Safari or Firefox)
We have a SharePoint WSS site and some of our users are on the Mac OS X platform. Are there any tips or tricks to get a similar experience to Windows with document shares and calendars on the Mac?
Edit: Browsing a SharePoint WSS site on a Mac, whether using Firefox or Safari, has a very similar look and feel as it does on Windows IE. The similar experience I am looking for has to do with integrating the calendars, document shares, etc. into the desktop.
For example, with IE you can go to a calendar and select "Actions -> Connect to Outlook" and it will make the calendar visible and manageable from within Outlook.
Is there any way to get the Mac to work similarly?
A:
Unfortunately, the "full" Sharepoint Experience is limited to running Internet Explorer 6/7 and Office 2007.
On the Mac, I recommend using Firefox (Camino?) which seems to work a bit better than Safari.
Edit: When you say "Similar experience", what exactly are you missing? I don't have any Mac here, but I was under the impression that Office 2008 will have a working integration with Sharepoint as well.
A:
Office 2008 allows limited connectivity to MOSS. However, there is no Mac OS browser yet that is completely compatible with MOSS.
I do have it on good authority the Microsoft Mac BU team is working with the MOSS team to see this changing in future versions of the platform, specifically around the Safari support.
A:
ActiveX is used to enable the bridge between MOSS and Office, and as ActiveX is only on Windows, you will find that you cannot get the full experience if you do not use Windows as your OS.
A:
Yes, Sharepoint looks to client installs of Office applications and Active X in order to fully integrate.
|
SharePoint WSS 3.0 Integration with Mac OSX (either Safari or Firefox)
|
We have a SharePoint WSS site and some of our users are on the Mac OS X platform. Are there any tips or tricks to get a similar experience to Windows with document shares and calendars on the Mac?
Edit: Browsing a SharePoint WSS site on a Mac, whether using Firefox or Safari, has a very similar look and feel as it does on Windows IE. The similar experience I am looking for has to do with integrating the calendars, document shares, etc. into the desktop.
For example, with IE you can go to a calendar and select "Actions -> Connect to Outlook" and it will make the calendar visible and manageable from within Outlook.
Is there any way to get the Mac to work similarly?
|
[
"Unfortunately, the \"full\" Sharepoint Experience is limited to running Internet Explorer 6/7 and Office 2007.\nOn the Mac, I recommend using Firefox (Camino?) which seems to work a bit better than Safari.\nEdit: When you say \"Similar experience\", what exactly are you missing? I don't have any Mac here, but I was under the impression that Office 2008 will have a working integration with Sharepoint as well.\n",
"Office 2008 allows limited connectivity to MOSS. However there is no Mac OS browser yet that is completely compatible to MOSS.\nI do have it on good authority the Microsoft Mac BU team is working with the MOSS team to see this changing in future versions of the platform, specifically around the Safari support.\n",
"ActiveX is used to enable the bridge between MOSS and Office, and as ActiveX is only on Windows, you will find that you cannot get the full experience if you do not use Windows as your OS.\n",
"Yes, Sharepoint looks to client installs of Office applications and Active X in order to fully integrate.\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"macos",
"sharepoint",
"wss"
] |
stackoverflow_0000008950_macos_sharepoint_wss.txt
|
Q:
What is the deployment rate of the .NET framework?
I've been looking for this information for my commercial desktop product, to no avail.
Specifically, what I'm looking for is deployment statistics of the .NET framework for end-users (both granny "I'm just browsing the internet" XP, and high-end users, if possible), and in the commercial/business sector.
Edit: Other than the data points below, here's an interesting blog post about .NET deployment rates.
A:
Some statistics from 2005 I found at Scott Wiltamuth's blog (you can be sure these numbers are much higher now):
More than 120M copies of the .NET Framework have been downloaded and installed using either Microsoft downloads or Windows Update
More than 85% of new consumer PCs sold in 2004 had the .NET Framework installed
More than 58% of business PCs have the .NET Framework preinstalled or preloaded
Every new HP consumer imaging device (printer/scanner/camera) will install the .NET Framework if it’s not already there – that’s 3M units per year
Every new Microsoft IntelliPoint mouse software CD ships with the .NET Framework
It is also worth pointing out that Vista and Windows Server 2008 both ship with the .NET Framework. XP gets it via Windows Update.
A:
I don't have any hard numbers, but these days, it is pretty safe to assume most Windows XP and Vista users have at least .NET 2.0. I believe this was actually dropped via Windows Update for XP, and Vista came with at least 2.0 (apparently with 3.0 as pointed out in the comments to this answer).
A:
It depends a lot on which version of the framework you are targeting. I believe 1.1 (and even 2.0) are widely deployed. The later versions are not.
You should also visit this site for some very good information on .Net Framework Deployment: http://www.hanselman.com/smallestdotnet/
A:
I needed that same kind of information at my last job, where I was attempting to convince my manager to allow .NET development. The customer base was primarily dial-up users, so requiring a 20+ MB download was a tough sell. Unfortunately, I wasn't able to find any sort of statistics, either from Microsoft or from a research firm.
What I was able to get, however, was web analytics from the company's home page. .NET inserts its version number into the User Agent field, which I was able to log using our analytics package. From there, some Excel gruntwork was able to give me a rough idea of how many customers already had .NET installed, and which version(s).
Unfortunately that won't help you answer the broader question of deployment rates across multiple demographics, but it might be a useful technique for a single customer base.
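If you want to try the same technique, the framework advertises itself in the User Agent header with tokens such as ".NET CLR 2.0.50727". A rough C# sketch for pulling those version tokens out of a logged User Agent string could look like the following (the sample string is made up for illustration):
using System;
using System.Text.RegularExpressions;

class UserAgentScan
{
    static void Main()
    {
        // A logged User-Agent value (hypothetical example)
        string ua = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727; .NET CLR 3.0.04506)";

        // Each ".NET CLR x.y.z" token indicates an installed framework version
        foreach (Match m in Regex.Matches(ua, @"\.NET CLR (\d+(\.\d+)+)"))
            Console.WriteLine(m.Groups[1].Value);   // prints 2.0.50727 and 3.0.04506
    }
}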
|
What is the deployment rate of the .NET framework?
|
I've been looking for this information for my commercial desktop product, to no avail.
Specifically, what I'm looking for is deployment statistics of the .NET framework for end-users (both granny "I'm just browsing the internet" XP, and high-end users, if possible), and in the commercial/business sector.
Edit: Other than the data points below, here's an interesting blog post about .NET deployment rates.
|
[
"Some statistics from 2005 I found at Scott Wiltamuth's blog (you can be sure these numbers are much higher now):\n\nMore than 120M copies of the .NET Framework have been downloaded and installed using either Microsoft downloads or Windows Update\nMore than 85% of new consumer PCs sold in 2004 had the .NET Framework installed\nMore than 58% of business PCs have the .NET Framework preinstalled or preloaded\nEvery new HP consumer imaging device (printer/scanner/camera) will install the .NET Framework if it’s not already there – that’s 3M units per year\nEvery new Microsoft IntelliPoint mouse software CD ships with the .NET Framework\n\nIt is also worth pointing out that Vista and Windows Server 2008 both ship with the .NET Framework. XP gets it via Windows Update.\n",
"I don't have any hard numbers, but these days, it is pretty safe to assume most Windows XP and Vista users have at least .NET 2.0. I believe this was actually dropped via Windows Update for XP, and Vista came with at least 2.0 (apparently with 3.0 as pointed out in the comments to this answer).\n",
"It depends a lot on which version of the framework you are targeting. I believe 1.1 (and even 2.0) are widely deployed. The later versions are not.\nYou should also visit this site for some very good information on .Net Framework Deployment: http://www.hanselman.com/smallestdotnet/\n",
"I needed that same kind of information at my last job, where I was attempting to convince my manager to allow .NET development. The customer base was primarily dial-up users, so requiring a 20+ MB download was a tough sell. Unfortunately, I wasn't able to find any sort of statistics, either from Microsoft or from a research firm.\nWhat I was able to get, however, was web analytics from the company's home page. .NET inserts its version number into the User Agent field, which I was able to log using our analytics package. From there, some Excel gruntwork was able to give me a rough idea of how many customers already had .NET installed, and which version(s).\nUnfortunately that won't help you answer the broader question of deployment rates across multiple demographics, but it might be a useful technique for a single customer base.\n"
] |
[
4,
1,
1,
1
] |
[] |
[] |
[
".net",
"c#",
"deployment",
"statistics"
] |
stackoverflow_0000065749_.net_c#_deployment_statistics.txt
|
Q:
innerHTML manipulation in JavaScript
I am developing web page code which dynamically fetches content from the server and then places this content into container nodes using something like
container.innerHTML = content;
Sometimes I have to overwrite some previous content in this node. This works fine, until it happens that the previous content occupied more vertical space than the new one would occupy AND a user scrolled the page down -- scrolled more than the new content would allow, given its height.
In this case the page redraws incorrectly -- some artifacts of the old content remain. It works fine, and it is even possible to get rid of artifacts, by minimizing and restoring the browser (or force the window to be redrawn in an other way), however this does not seem very convenient.
I am testing this only under Safari (this is an iPhone-optimized website).
Does anybody have the idea how to deal with this?
A:
The easiest solution that I have found would be to place an anchor tag <a> at the top of the div you are editing:
<a name="ajax-div"></a>
Then when you change the content of the div, you can do this to have the browser jump to your anchor tag:
location.hash = 'ajax-div';
Use this to make sure the user isn't scrolled down too far when you update the content and you shouldn't get the issue in the first place.
(tested in the latest FF beta and latest safari)
A:
It sounds like the webkit rendering engine of Safari is not at first recognizing the content change, at least not fully enough to remove the previous html content. Minimizing and then restoring the windows initiates a redraw event in the browser's rendering engine.
I think I would explore 2 avenues: first could I use an iframe instead of the current 'content' node? Browsers expect IFrames to change, however as you're seeing they're not always so good at changing content of DIV or other elements.
Secondly, perhaps by modifying the scroll position as suggested earlier. You could simply move the scroll back to 0 as suggested, or if that is too obtrusive you could try to restore the scroll after the content change. Subtract the height of the old content node from the current scroll position (resetting the browser's scroll to the content node's 0), change the node content, then add the new node's height to the scroll position.
Palehorse is right though (I can't vote his answer up at the moment - no points) an abstraction library like jQuery, Dojo, or even Prototype can often help with these matters. Especially if you see your page / site moving beyond simple DOM manipulation you'll find the tools and enhancements provided by libraries to be a huge help.
A:
It sounds like you are having a problem with the browser itself. Does this problem only occur in one browser?
One thing you might try is using a lightweight library like jQuery. It handles browser differences fairly nicely. To set the inner HTML for a div with the ID of container you would simply write this:
$('#container').html( content );
That will work in most browsers. I do not know if it will fix your problem specifically or not but it may be worth a try.
A:
Would it work to set the scroll position back to the top (element.scrollTop = 0; element.scrollLeft = 0; by heart) before replacing the content?
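As a rough sketch of that idea (container is assumed to be the element whose innerHTML is being replaced):
// Reset the scroll position first, so the viewport cannot be left
// below the height of the shorter replacement content.
function replaceContent(container, content) {
    window.scrollTo(0, 0);
    container.scrollTop = 0;
    container.scrollLeft = 0;
    container.innerHTML = content;
}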
A:
Set the element's CSS height to 'auto' every time you update innerHTML.
A:
I would try doing container.innerHTML = ''; container.innerHTML = content;
|
innerHTML manipulation in JavaScript
|
I am developing web page code which dynamically fetches content from the server and then places this content into container nodes using something like
container.innerHTML = content;
Sometimes I have to overwrite some previous content in this node. This works fine, until it happens that the previous content occupied more vertical space than the new one would occupy AND a user scrolled the page down -- scrolled more than the new content would allow, given its height.
In this case the page redraws incorrectly -- some artifacts of the old content remain. It works fine, and it is even possible to get rid of artifacts, by minimizing and restoring the browser (or force the window to be redrawn in an other way), however this does not seem very convenient.
I am testing this only under Safari (this is an iPhone-optimized website).
Does anybody have the idea how to deal with this?
|
[
"The easiest solution that I have found would be to place an anchor tag <a> at the top of the div you are editing:\n<a name=\"ajax-div\"></a>\n\nThen when you change the content of the div, you can do this to have the browser jump to your anchor tag:\nlocation.hash = 'ajax-div';\n\nUse this to make sure the user isn't scrolled down too far when you update the content and you shouldn't get the issue in the first place.\n(tested in the latest FF beta and latest safari)\n",
"It sounds like the webkit rendering engine of Safari is not at first recognizing the content change, at least not fully enough to remove the previous html content. Minimizing and then restoring the windows initiates a redraw event in the browser's rendering engine. \nI think I would explore 2 avenues: first could I use an iframe instead of the current 'content' node? Browsers expect IFrames to change, however as you're seeing they're not always so good at changing content of DIV or other elements.\nSecondly, perhaps by modifying the scroll position as suggested earlier. You could simply move the scroll back to 0 as suggested or if that is to obtrusive you could try to restore the scroll after the content change. Subtract the height of the old content node from the current scroll position (reseting the browser's scroll to the content node's 0), change the node content, then add the new node's height to the scroll position.\nPalehorse is right though (I can't vote his answer up at the moment - no points) an abstraction library like jQuery, Dojo, or even Prototype can often help with these matters. Especially if you see your page / site moving beyond simple DOM manipulation you'll find the tools and enhancements provided by libraries to be a huge help.\n",
"It sounds like you are having a problem with the browser itself. Does this problem only occur in one browser?\nOne thing you might try is using a lightweight library like jQuery. It handles browser differences fairly nicely. To set the inner HTML for a div with the ID of container you would simply write this:\n$('#container').html( content );\n\nThat will work in most browsers. I do not know if it will fix your problem specifically or not but it may be worth a try.\n",
"Would it work to set the scroll position back to the top (element.scrollTop = 0; element.scrollLeft = 0; by heart) before replacing the content?\n",
"Set the element's CSS height to 'auto' every time you update innerHTML.\n",
"I would try doing container.innerHTML = ''; container.innerHTML = content; \n"
] |
[
4,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"dom",
"html",
"javascript"
] |
stackoverflow_0000063743_dom_html_javascript.txt
|
Q:
C++ Unit Testing Legacy Code: How to handle #include?
I've just started writing unit tests for a legacy code module with large physical dependencies using the #include directive. I've been dealing with them a few ways that felt overly tedious (providing empty headers to break long #include dependency lists, and using #define to prevent classes from being compiled) and was looking for some better strategies for handling these problems.
I've been frequently running into the problem of duplicating almost every header file with a blank version in order to separate the class I'm testing in its entirety, and then writing substantial stub/mock/fake code for objects that will need to be replaced since they're now undefined.
Anyone know some better practices?
A:
The depression in the responses is overwhelming... But don't fear, we've got the holy book to exorcise the demons of legacy C++ code. Seriously just buy the book if you are in line for more than a week of jousting with legacy C++ code.
Turn to page 127: The case of the horrible include dependencies. (Now I am not even within miles of Michael Feathers but here as-short-as-I-could-manage answer..)
Problem: In C++ if a classA needs to know about ClassB, Class B's declaration is straight-lifted / textually included in the ClassA's source file. And since we programmers love to take it to the wrong extreme, a file can recursively include a zillion others transitively. Builds take years.. but hey, at least it builds.. we can wait.
Now to say 'instantiating ClassA under a test harness is difficult' is an understatement. (Quoting MF's example - Scheduler is our poster problem child with deps galore.)
#include "TestHarness.h"
#include "Scheduler.h"
TEST(create, Scheduler) // your fave C++ test framework macro
{
Scheduler scheduler("fred");
}
This will bring out the includes dragon with a flurry of build errors.
Blow#1 Patience-n-Persistence: Take on each include one at a time and decide if we really need that dependency. Let's assume SchedulerDisplay is one of them, whose displayEntry method is called in Scheduler's ctor.
Blow#2 Fake-it-till-you-make-it (Thanks RonJ):
#include "TestHarness.h"
#include "Scheduler.h"
void SchedulerDisplay::displayEntry(const string& entryDescription) {}
TEST(create, Scheduler)
{
Scheduler scheduler("fred");
}
And pop goes the dependency and all its transitive includes.
You can also reuse the Fake methods by encapsulating it in a Fakes.h file to be included in your test files.
Blow#3 Practice: It may not be always that simple.. but you get the idea. After the first few duels, the process of breaking deps will get easy-n-mechanical
Caveats (Did I mention there are caveats? :)
We need a separate build for test cases in this file ; we can have only 1 definition for the SchedulerDisplay::displayEntry method in a program. So create a separate program for scheduler tests.
We aren't breaking any dependencies in the program, so we are not making the code cleaner.
You need to maintain those fakes as long as we need the tests.
Your sense of aesthetics may be offended for a while.. just bite your lip and 'bear with us for a better tomorrow'
Use this technique for a very huge class with severe dependency issues. Don't use often or lightly.. Use this as a starting point for deeper refactorings. Over time this testing program can be taken behind the barn as you extract more classes (WITH their own tests).
For more.. please do read the book. Invaluable. Fight on bro!
A:
Since you're testing legacy code I'm assuming you can't refactor said code to have less dependencies (e.g. by using the pimpl idiom)
That leaves you with few options, I'm afraid. Every header that was included for a type or function will need a mock object for that type or function for everything to compile; there's little you can do...
A:
I am not answering your question directly but I am afraid that unit testing just may not be the thing to do if you work with large amounts of legacy code.
After leading an XP team on a green field development project I really loved my Unit tests. Things happened and a few years later I find myself working on a large legacy code base that has lots of quality problems.
I tried to find a way to add unit tests to the application but in the end just got stuck in a catch-22:
In order to write meaningful unit tests the code would need to be refactored.
Without unit tests it will be too dangerous to refactor the code.
If you feel like a hero and drink the cool-aid on unit testing then you may still give it a try but there is a real risk that you end up with just more test code of little value that now also needs to be maintained.
Sometimes it is just best to work on the code in the way that is "designed" to be worked on.
A:
I don't know if this will work for your project but
you might try to attack the problem from the link phase of your build.
This would completely eliminate your #include problem.
All you would need to do is re-implement the interfaces in the included files to do what ever you want and then just link to the mock object files that you have created to implement the interfaces in the include file.
The big disadvantage to this method is a more complicated build system.
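As a sketch of that link-seam approach, reusing the SchedulerDisplay name from the earlier answer (all names here are assumed, and the real header is still compiled against): the test build simply links a stub translation unit in place of the production one.
// SchedulerDisplay_stub.cpp -- compiled and linked only into the test program,
// instead of the real SchedulerDisplay.cpp. No #include changes are needed.
#include "SchedulerDisplay.h"

void SchedulerDisplay::displayEntry(const string& entryDescription)
{
    // Deliberately empty: the tests do not care about display output.
}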
A:
If you keep writing stubs/mock/fake code you risk doing unit testing on a class that has different behavior than when compiled in the main project.
But if those includes are there and have no added behavior then it's Ok.
I'd try not changing anything on the includes while doing the unit testing so you're sure (as far as you can be on legacy code :) ) that you're testing the real code.
A:
You're definitely between a rock and a hard place with legacy code with large dependencies. You've got a long hard slog ahead to sort it all out.
From what you say, it seems you are trying to keep the source code intact for each module in turn, placing it in a test harness with external dependencies mocked out. My suggestion here would be to take the even braver step of attempting some refactoring to eliminate (or invert) the dependencies, which is probably the very step you are trying to avoid.
I suggest this because I'm guessing the dependencies are going to kill you as you write tests. You will certainly be better off in the long term if you can eliminate the dependencies.
|
C++ Unit Testing Legacy Code: How to handle #include?
|
I've just started writing unit tests for a legacy code module with large physical dependencies using the #include directive. I've been dealing with them a few ways that felt overly tedious (providing empty headers to break long #include dependency lists, and using #define to prevent classes from being compiled) and was looking for some better strategies for handling these problems.
I've been frequently running into the problem of duplicating almost every header file with a blank version in order to separate the class I'm testing in its entirety, and then writing substantial stub/mock/fake code for objects that will need to be replaced since they're now undefined.
Anyone know some better practices?
|
[
"The depression in the responses is overwhelming... But don't fear, we've got the holy book to exorcise the demons of legacy C++ code. Seriously just buy the book if you are in line for more than a week of jousting with legacy C++ code.\nTurn to page 127: The case of the horrible include dependencies. (Now I am not even within miles of Michael Feathers but here as-short-as-I-could-manage answer..)\nProblem: In C++ if a classA needs to know about ClassB, Class B's declaration is straight-lifted / textually included in the ClassA's source file. And since we programmers love to take it to the wrong extreme, a file can recursively include a zillion others transitively. Builds take years.. but hey atleast it builds.. we can wait. \nNow to say 'instantiating ClassA under a test harness is difficult' is an understatement. (Quoting MF's example - Scheduler is our poster problem child with deps galore.)\n#include \"TestHarness.h\"\n#include \"Scheduler.h\"\nTEST(create, Scheduler) // your fave C++ test framework macro\n{\n Scheduler scheduler(\"fred\");\n}\n\nThis will bring out the includes dragon with a flurry of build errors.\nBlow#1 Patience-n-Persistence: Take on each include one at a time and decide if we really need that dependency. Let's assume SchedulerDisplay is one of them, whose displayEntry method is called in Scheduler's ctor.\nBlow#2 Fake-it-till-you-make-it (Thanks RonJ):\n#include \"TestHarness.h\"\n#include \"Scheduler.h\"\nvoid SchedulerDisplay::displayEntry(const string& entryDescription) {}\nTEST(create, Scheduler)\n{\n Scheduler scheduler(\"fred\");\n}\n\nAnd pop goes the dependency and all its transitive includes. \nYou can also reuse the Fake methods by encapsulating it in a Fakes.h file to be included in your test files.\nBlow#3 Practice: It may not be always that simple.. but you get the idea. After the first few duels, the process of breaking deps will get easy-n-mechanical\nCaveats (Did I mention there are caveats? :) \n\nWe need a separate build for test cases in this file ; we can have only 1 definition for the SchedulerDisplay::displayEntry method in a program. So create a separate program for scheduler tests.\nWe aren't breaking any dependencies in the program, so we are not making the code cleaner.\nYou need to maintain those fakes as long as we need the tests.\nYour sense of aesthetics may be offended for a while.. just bite your lip and 'bear with us for a better tomorrow' \n\nUse this technique for a very huge class with severe dependency issues. Don't use often or lightly.. Use this as a starting point for deeper refactorings. Over time this testing program can be taken behind the barn as you extract more classes (WITH their own tests).\nFor more.. please do read the book. Invaluable. Fight on bro!\n",
"Since you're testing legacy code I'm assuming you can't refactor said code to have less dependencies (e.g. by using the pimpl idiom)\nThat leaves you with little options I'm afraid. Every header that was included for a type or function will need a mock object for that type or function for everything to compile, there's little you can do...\n",
"I am not answering your question directly but I am afraid that unit testing just may not be the thing to do if you work with large amounts of legacy code.\nAfter leading an XP team on a green field development project I really loved my Unit tests. Things happened and a few years later I find myself working on a large legacy code base that has lots of quality problems.\nI tried to find a way to add units tests to the application but in the end just got stuck in a catch-22:\n\nIn order to write meaning full unit tests the code would need to be refactored.\nWithout unit tests it will be too dangerous to refactor the code.\n\nIf you feel like a hero and drink the cool-aid on unit testing then you may still give it a try but there is a real risk that you end up with just more test code of little value that now also needs to be maintained.\nSometimes it is just best to work on the code in the way that is \"designed\" to be worked on.\n",
"I don't know if this will work for your project but\nyou might try to attack the problem from the link phase of your build.\nThis would completely eliminate your #include problem.\nAll you would need to do is re-implement the interfaces in the included files to do what ever you want and then just link to the mock object files that you have created to implement the interfaces in the include file.\nThe big disadvantage to this method is a more complected build system.\n",
"If you keep writing stubs/mock/fake codes you risk doing unit testing on a class that has different behavior then when compiled on the main project.\nBut if those includes are there and have no added behavior then it's Ok.\nI'd try not changing anything on the includes while doing the unit testing so you're sure (as far you can be on legacy code :) ) that you testing the real code.\n",
"You're definitely between a rock and a hard place with legacy code with large dependencies. You've got a long hard slog ahead to sort it all out.\nFrom what you say, it seems you are trying to keep the source code intact for each module in turn, placing it in a test harness with external dependencies mocked out. My suggestion here would be to take the even braver step of attempting some refactoring to eliminate (or invert) the dependencies, which is probably the very step you are trying to avoid.\nI suggest this because I'm guessing the dependencies are going to kill you as you write tests. You will certainly be better off in the long term if you can eliminate the dependencies.\n"
] |
[
10,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"c++",
"legacy",
"unit_testing"
] |
stackoverflow_0000065074_c++_legacy_unit_testing.txt
|
Q:
Work with PSDs in PHP
I was recently asked to come up with a script that will allow the end user to upload a PSD (Photoshop) file, and split it up and create images from each of the layers.
I would love to stay with PHP for this, but I am open to Python or Perl as well.
Any ideas would be greatly appreciated.
A:
You can try the PHP PSD Reader, which should at least get you started.
A:
Using GraphicsMagick or ImageMagick along with Magick++, you can then use imagick.
imagick has all of the calls necessary to convert PSDs from layers, including doing masks.
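A rough sketch with the imagick extension, assuming the PSD was saved with its layers intact (the first image in the sequence is usually the flattened composite); the file names are placeholders:
<?php
// Split layers.psd into one PNG per layer.
$psd = new Imagick('layers.psd');
$i = 0;
foreach ($psd as $layer) {              // Imagick iterates over the images (layers) in the file
    $layer->setImageFormat('png');
    $layer->writeImage('layer_' . $i++ . '.png');
}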
|
Work with PSDs in PHP
|
I was recently asked to come up with a script that will allow the end user to upload a PSD (Photoshop) file, and split it up and create images from each of the layers.
I would love to stay with PHP for this, but I am open to Python or Perl as well.
Any ideas would be greatly appreciated.
|
[
"You can try the PHP PSD Reader, which should at least get you started.\n",
"Using GraphicsMagick or ImageMagick along with Magick++, you can then use imagick.\nimagick has all of the calls necessary to convert PSDs from layers, including doing masks.\n"
] |
[
1,
1
] |
[] |
[] |
[
"php",
"psd"
] |
stackoverflow_0000065209_php_psd.txt
|
Q:
track down file handle
I have a huge EAR that uses log4j and there is a single config file that is used to set it up. In this config file there is no mention of certain log files, but additional files apart from those specified in the config file get generated in the logs folder. I've searched for other combinations of (logger|log4j|log).(properties|xml) and haven't found anything promising in all of the jar files included in the EAR. How do I track down which is the offending thread/class that is creating these extra files?
A:
Try placing a breakpoint in the File class' constructors and the mkdir and createNewFile methods. Generally, code will use the File class to create its files or directories. You should have the Java source code for these classes included with your JVM.
A:
Add -Dlog4j.debug to the command line and there will be extra info in standard output about how it is configured.
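For a standalone launch that would look like the line below (the jar name is just a placeholder); for an EAR running on an application server such as WebSphere, the same system property can be added to the server's generic JVM arguments instead.
java -Dlog4j.debug -jar app.jar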
A:
Formerly SysInternals', now Microsoft's, Process Explorer
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
"Find" menu item -> "Find Handle or DLL..."
A:
SysInternals may not help with Java class IO. Try getting a thread dump of the JVM (e.g., kill -3) while these logs are being written to. You should be able to catch a thread red handed with java.io packages near the top of the stack trace.
|
track down file handle
|
I have a huge EAR that uses log4j and there is a single config file that is used to set it up. In this config file there is no mention of certain log files, but additional files apart from those specified in the config file get generated in the logs folder. I've searched for other combinations of (logger|log4j|log).(properties|xml) and haven't found anything promising in all of the jar files included in the EAR. How do I track down which is the offending thread/class that is creating these extra files?
|
[
"Try placing a breakpoint in the File class' constructors and the mkdir and createNewFile methods. Generally, code will use the File class to create its files or directories. You should have the Java source code for these classes included with your JVM.\n",
"Add -Dlog4j.debug to the command line and there will be extra info in standard output about how it is configured.\n",
"Formally SysInternal's, now Microsoft's Process Explorer\nhttp://technet.microsoft.com/en-us/sysinternals/bb896653.aspx\n\"Find\" menu item -> \"Find Handle or DLL...\"\n",
"SysInternals may not help with Java class IO. Try getting a thread dump of the JVM (e.g., kill -3) while these logs are being written to. You should be able to catch a thread red handed with java.io packages near the top of the stack trace.\n"
] |
[
3,
3,
0,
0
] |
[] |
[] |
[
"file",
"jakarta_ee",
"java",
"logging",
"websphere"
] |
stackoverflow_0000065128_file_jakarta_ee_java_logging_websphere.txt
|
Q:
Detecting COMCTL32 version in .NET
How do I determine which version of comctl32.dll is being used by a C# .NET application? The answers I've seen to this question usually involve getting version info from the physical file in Windows\System, but that isn't necessarily the version that's actually in use due to side-by-side considerations.
A:
System.Diagnostics.Process.GetCurrentProcess.Modules gives you all the modules loaded in the current process. This also includes the unmanaged win32 dlls. You can search through the collection and check the FileVersionInfo property for the loaded version.
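A small C# sketch of that approach is below (note that GetCurrentProcess() is a method call in C#). Call it from inside the application you care about -- for example from a Form's code -- since a bare console process may never load comctl32.dll at all; the helper name is invented for the example.
using System;
using System.Diagnostics;

static class ComCtlCheck
{
    public static void Report()
    {
        // Walk the modules loaded into this process and report the comctl32 actually in use
        foreach (ProcessModule m in Process.GetCurrentProcess().Modules)
        {
            if (m.ModuleName.Equals("comctl32.dll", StringComparison.OrdinalIgnoreCase))
                Console.WriteLine(m.FileVersionInfo.FileVersion + " loaded from " + m.FileName);
        }
    }
}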
|
Detecting COMCTL32 version in .NET
|
How do I determine which version of comctl32.dll is being used by a C# .NET application? The answers I've seen to this question usually involve getting version info from the physical file in Windows\System, but that isn't necessarily the version that's actually in use due to side-by-side considerations.
|
[
"System.Diagnostics.Process.GetCurrentProcess.Modules gives you all the modules loaded in the current process. This also includes the unmanaged win32 dlls. You can search through the collection and check the FileVersionInfo property for the loaded version.\n"
] |
[
2
] |
[] |
[] |
[
".net",
"c#",
"comctl32"
] |
stackoverflow_0000065889_.net_c#_comctl32.txt
|
Q:
Apache Fall Back When PHP Fails
I was wondering if anybody knew of a method to configure Apache to fall back to returning a static HTML page, should it (Apache) be able to determine that PHP has died. This would provide the developer with an elegant solution to displaying an error page and not (worst case scenario) the source code of the PHP page that should have been executed.
Thanks.
A:
The PHP source code is only displayed when apache is not configured correctly to handle php files. That is, when a proper handler has not been defined.
On errors, what is shown can be configured on php.ini, mainly the display_errors variable. That should be set to off and log_errors to on on a production environment.
If php actually dies, apache will return the appropriate HTTP status code (usually 500) with the page defined by the ErrorDocument directive. If it didn't die, but got stuck in a loop, there is not much you can do as far as I know.
You can specify a different page for different error codes.
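For reference, the relevant php.ini lines for a production box would look roughly like this (the log path is only an example):
display_errors = Off
log_errors = On
error_log = /var/log/php_errors.log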
A:
I would assume that this typically results in a 500 error, and you can configure apaches 500 handler to show a static page:
ErrorDocument 500 /500error.html
You can also read about error handlers on apaches documentation site
A:
The real problem is that PHP fatal errors don't cause Apache to return a 500 code. Errors except for E_FATAL and E_PARSE can be handled however you like using set_error_handler().
A:
There are 2 ways to use PHP and Apache.
1. Install PHP as an Apache module: this way the PHP execution is a thread inside the Apache process. So if PHP execution fails, then the Apache process fails too. There is no fallback strategy.
2. Install PHP as a CGI script handler: this way Apache will start a new PHP process for each request. If the PHP execution fails, then Apache will know that, and there might be a way to handle the error.
regardless of the way you install PHP, when PHP execution fails you can handle errors in the php.ini file.
|
Apache Fall Back When PHP Fails
|
I was wondering if anybody knew of a method to configure Apache to fall back to returning a static HTML page, should it (Apache) be able to determine that PHP has died. This would provide the developer with an elegant solution to displaying an error page and not (worst case scenario) the source code of the PHP page that should have been executed.
Thanks.
|
[
"The PHP source code is only displayed when apache is not configured correctly to handle php files. That is, when a proper handler has not been defined. \nOn errors, what is shown can be configured on php.ini, mainly the display_errors variable. That should be set to off and log_errors to on on a production environment.\nIf php actually dies, apache will return the appropriate HTTP status code (usually 500) with the page defined by the ErrorDocument directive. If it didn't die, but got stuck in a loop, there is not much you can do as far as I know.\nYou can specify a different page for different error codes.\n",
"I would assume that this typically results in a 500 error, and you can configure apaches 500 handler to show a static page:\nErrorDocument 500 /500error.html\nYou can also read about error handlers on apaches documentation site\n",
"The real problem is that PHP fatal errors don't cause Apache to return a 500 code. Errors except for E_FATAL and E_PARSE can be handled however you like using set_error_handler().\n",
"There are 2 ways to use PHP and Apache.\n1. Install PHP as an Apache module: this way the PHP execution is a thread inside the apache process. So if PHP execution fails, then Apache process fails too. there is no fallback strategy.\n\n2. Install PHP as a CGI script handler: this way Apache will start a new PHP process for each request. If the PHP execution fails, then Apache will know that, and there might be a way to handle the error.\n\nregardless of the way you install PHP, when PHP execution fails you can handle errors in the php.ini file.\n"
] |
[
8,
3,
1,
0
] |
[] |
[] |
[
"apache",
"configuration",
"php"
] |
stackoverflow_0000062012_apache_configuration_php.txt
|
Q:
Pin Emacs buffers to windows (for cscope)
For my day job, I live in Emacs. Utterly. I also have become pretty dependent on CScope to help me find things in the code.
Normally, I have 2 windows in a split (C-x 3):
(screenshot: http://bitthicket.com/files/emacs-2split.JPG)
And I use the right window for code buffers and the left window for the CScope search buffer. When you do a CScope search and select a result, it automatically updates the right-side window to show the buffer referred to by the result. This is all well and good, except that it causes me to lose my place in some other buffer that I was studying. Sometimes this is no biggie, because [C-s u] gets me back to where I was.
What would be better, though, is to have 3 split windows like this ([C-x 2] in the left window):
(screenshot: http://bitthicket.com/files/emacs-3split.jpg)
And have the bottom left window contain the CScope search buffer, and the top left window be the only buffer that CScope ever updates. That way, I can see my CScope searches and navigate around the code without losing the buffer I'm focused on.
Anyone know how I can do that?
A:
Put this in your .emacs file:
;; Toggle window dedication
(defun toggle-window-dedicated ()
"Toggle whether the current active window is dedicated or not"
(interactive)
(message
(if (let ((window (get-buffer-window (current-buffer))))
(set-window-dedicated-p window
(not (window-dedicated-p window))))
"Window '%s' is dedicated"
"Window '%s' is normal")
(current-buffer)))
Then bind it to some key - I use the Pause key:
(global-set-key [pause] 'toggle-window-dedicated)
And then use it to "dedicate" the window you want locked. then cscope can only open files from its result window in some OTHER window. Works a charm. I specifically use it for exactly this purpose - keeping one source file always on screen, while using cscope in a second buffer/window, and looking at cscope results in a third.
A:
Well, I decided to not be a reputation-whore and find the answer myself. I looked in cscope.el as shown on the Emacs wiki, as well as the xcscope.el that comes with the cscope RPM package on RHEL.
Neither appear to give a way to do what I'm wanting. The way is probably to edit the ELisp by adding a package variable like *browse-buffer* or something and just initialize that variable if not already initialized the first time the user does [C-c C-s g] or whatever, and always have the resulting code shown in *browse-buffer*. Then the user can put the *browse-buffer* wherever he wants it.
|
Pin Emacs buffers to windows (for cscope)
|
For my day job, I live in Emacs. Utterly. I also have become pretty dependent on CScope to help me find things in the code.
Normally, I have 2 windows in a split (C-x 3):
(screenshot: http://bitthicket.com/files/emacs-2split.JPG)
And I use the right window for code buffers and the left window for the CScope search buffer. When you do a CScope search and select a result, it automatically updates the right-side window to show the buffer referred to by the result. This is all well and good, except that it causes me to lose my place in some other buffer that I was studying. Sometimes this is no biggie, because [C-s u] gets me back to where I was.
What would be better, though, is to have 3 split windows like this ([C-x 2] in the left window):
(screenshot: http://bitthicket.com/files/emacs-3split.jpg)
And have the bottom left window contain the CScope search buffer, and the top left window be the only buffer that CScope ever updates. That way, I can see my CScope searches and navigate around the code without losing the buffer I'm focused on.
Anyone know how I can do that?
|
[
"Put this in your .emacs file:\n;; Toggle window dedication\n\n(defun toggle-window-dedicated ()\n\n\"Toggle whether the current active window is dedicated or not\"\n\n(interactive)\n\n(message \n\n (if (let (window (get-buffer-window (current-buffer)))\n\n (set-window-dedicated-p window \n\n (not (window-dedicated-p window))))\n\n \"Window '%s' is dedicated\"\n\n \"Window '%s' is normal\")\n\n (current-buffer)))\n\nThen bind it to some key - I use the Pause key:\n(global-set-key [pause] 'toggle-window-dedicated)\n\nAnd then use it to \"dedicate\" the window you want locked. then cscope can only open files from its result window in some OTHER window. Works a charm. I specifically use it for exactly this purpose - keeping one source file always on screen, while using cscope in a second buffer/window, and looking at cscope results in a third.\n",
"Well, I decided to not be a reputation-whore and find the answer myself. I looked in cscope.el as shown on the Emacs wiki, as well as the xcscope.el that comes with the cscope RPM package on RHEL.\nNeither appear to give a way to do what I'm wanting. The way is probably to edit the ELisp by adding a package variable like *browse-buffer* or something and just initialize that variable if not already initialized the first time the user does [C-c C-s g] or whatever, and always have the resulting code shown in *browse-buffer*. Then the user can put the *browse-buffer* wherever he wants it.\n"
] |
[
35,
0
] |
[] |
[] |
[
"cscope",
"emacs"
] |
stackoverflow_0000043765_cscope_emacs.txt
|
Q:
When would I use Server.Transfer over PostBackURL?
Or vice versa.
Update:
Hmm, let's assume I have a shopping cart app, the user clicks on the Checkout button.
The next thing I want to do is send the user to a Invoice.aspx page (or similar). When the user hits checkout, I could Button.PostBackURL = "Invoice.aspx"
or I could do
Server.Transfer("Invoice.aspx")
(I also changed the title since the method is called Transfer and not TransferURL)
A:
Server.Transfer will not result in a roundtrip of HTTP request/response. The address bar will not update; as far as the browser knows, it has received only one document. Server.Transfer also retains execution context, so the script "keeps going" as opposed to "starts anew".
PostBackURL ensures an HTTP request, resulting in a possibly different URL and of course incurring network latency costs.
Usually when you are attempting to "decide between the two" it means you are better off using PostbackURL.
Feel free to expand your question with specifics and we can look at your precise needs.
A:
Here is a good breakdown between the two:
Server.Transfer vs Response.Redirect
A:
Server.Transfer is done entirely from the server. Postback is initiated from the client for posting form contents and postback url identifies the page to post to.
Maybe you meant to compare with Response.Redirect, which forces the client to submit a new request for a new url.
|
When would I use Server.Transfer over PostBackURL?
|
Or vice versa.
Update:
Hmm, let's assume I have a shopping cart app, the user clicks on the Checkout button.
The next thing I want to do is send the user to a Invoice.aspx page (or similar). When the user hits checkout, I could Button.PostBackURL = "Invoice.aspx"
or I could do
Server.Transfer("Invoice.aspx")
(I also changed the title since the method is called Transfer and not TransferURL)
|
[
"\nServer.TransferURL will not result\nin a roundtrip of HTTP\nrequest/response. The address bar\nwill not update, as far as the\nbrowser knows it has received only\none document. Server.Transfer also retains execution context, so the script \"keeps going\" as opposed to \"starts anew\".\nPostbackURL ensures an\nHTTP request, resulting in a\npossibly different URL and of course\nincurring network latency costs.\n\nUsually when you are attempting to \"decide between the two\" it means you are better off using PostbackURL. \nFeel free to expand your question with specifics and we can look at your precise needs.\n",
"Here is a good breakdown between the two:\nServer.Transfer vs Response.Redirect\n",
"Server.Transfer is done entirely from the server. Postback is initiated from the client for posting form contents and postback url identifies the page to post to.\nMaybe you meant to compare with Response.Redirect, which forces the client to submit a new request for a new url.\n"
] |
[
6,
3,
1
] |
[] |
[] |
[
"asp.net"
] |
stackoverflow_0000065956_asp.net.txt
|
Q:
Alternative Style(CSS) methods in SAP Portal?
I am overriding a lot of SAP's Portal functionality in my current project. I have to create a custom fixed width framework, custom iView trays, custom KM API functionality, and more.
With all of these custom parts, I will not be using a lot of the style functionality implemented by SAP's Theme editor. What I would like to do is create an external CSS, store it outside of the Portal and reference it. Storing externally will allow for easier updates rather than storing the CSS within a portal application. It would also allow for all custom pieces to have their styles in one place.
Unfortunately, I've not found a way to gain access to the HEAD portion of the page that allows me to insert an external stylesheet. Portal Applications can do so using the IResource object to gain access to internal references, but not items on another server.
I'm looking for any ideas that would allow me to gain this functionality. I have x-posted on SAP's SDN, but I suspect I'll get a better answer here.
A:
I'd consider it dirty hack, but as a non-Portal developer I'd consider using JavaScript to insert a new link element in the head pointing to your new CSS file. Of course you'd have a flash of un-styled content because the script probably won't run until after part of the page has been downloaded and rendered, but it may be an adequate solution.
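A minimal sketch of that hack in plain JavaScript (the stylesheet URL is a placeholder):
// Build a <link> element and append it to the document head at runtime
var link = document.createElement('link');
link.rel = 'stylesheet';
link.type = 'text/css';
link.href = 'http://myserver.com/css/mycss.css';
document.getElementsByTagName('head')[0].appendChild(link);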
A:
I hate that I'm answering my own question, but I did find a potential solution that's not documented well and in typical SAP fashion uses deprecated methods. So it might be a slightly less dirty hack than what Eric suggested. I found it through an unrelated SDN forum post.
Basically, you dive into the request object and gather the PortalNode. Once you have that, you ask it for a value of a IPortalResponse. This object can be cast to a PortalHtmlResponse. That object has a deprecated method called getHtmlDocument. Using that method, you can use some Html mirror objects to get the head and insert new links.
Sample:
IPortalNode node = request.getNode().getPortalNode();
IPortalResponse resp = (IPortalResponse) node.getValue(IPortalResponse.class.getName());
if (resp instanceof PortalHtmlResponse) {
PortalHtmlResponse htmlResp = (PortalHtmlResponse) resp;
HtmlDocument doc = htmlResp.getHtmlDocument();
HtmlHead myHead = doc.getHead();
HtmlLink cssLink = new HtmlLink("http://myserver.com/css/mycss.css");
cssLink.setType("text/css");
cssLink.setRel("stylesheet");
myHead.addElement(cssLink);
}
|
Alternative Style(CSS) methods in SAP Portal?
|
I am overriding a lot of SAP's Portal functionality in my current project. I have to create a custom fixed width framework, custom iView trays, custom KM API functionality, and more.
With all of these custom parts, I will not be using a lot of the style functionality implemented by SAP's Theme editor. What I would like to do is create an external CSS, store it outside of the Portal and reference it. Storing externally will allow for easier updates rather than storing the CSS within a portal application. It would also allow for all custom pieces to have their styles in one place.
Unfortunately, I've not found a way to gain access to the HEAD portion of the page that allows me to insert an external stylesheet. Portal Applications can do so using the IResource object to gain access to internal references, but not items on another server.
I'm looking for any ideas that would allow me to gain this functionality. I have x-posted on SAP's SDN, but I suspect I'll get a better answer here.
|
[
"I'd consider it dirty hack, but as a non-Portal developer I'd consider using JavaScript to insert a new link element in the head pointing to your new CSS file. Of course you'd have a flash of un-styled content because the script probably won't run until after part of the page has been downloaded and rendered, but it may be an adequate solution.\n",
"I hate that I'm answering my own question, but I did find a potential solution that's not documented well and in typical SAP fashion uses deprecated methods. So it might be a slightly less dirty hack than what Eric suggested. I found it through an unrelated SDN forum post.\nBasically, you dive into the request object and gather the PortalNode. Once you have that, you ask it for a value of a IPortalResponse. This object can be cast to a PortalHtmlResponse. That object has a deprecated method called getHtmlDocument. Using that method, you can use some Html mirror objects to get the head and insert new links.\nSample:\nIPortalNode node = request.getNode().getPortalNode();\nIPortalResponse resp = (IPortalResponse) node.getValue(IPortalResponse.class.getName());\nif (resp instanceof PortalHtmlResponse) {\n PortalHtmlResponse htmlResp = (PortalHtmlResponse) resp;\n HtmlDocument doc = htmlResp.getHtmlDocument();\n HtmlHead myHead = doc.getHead();\n HtmlLink cssLink = new HtmlLink(\"http://myserver.com/css/mycss.css\");\n cssLink.setType(\"text/css\");\n cssLink.setRel(\"stylesheet\");\n myHead.addElement(cssLink);\n}\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"css",
"sap_enterprise_portal"
] |
stackoverflow_0000062406_css_sap_enterprise_portal.txt
|
Q:
What is the best implementation of an exception mechanism?
Most program languages have some kind of exception handling; some languages have return codes, others have try/catch, or rescue/retry, etc., each with its own peculiarities in readability, robustness, and practical effectiveness in a large group development effort. Which one is the best, and why?
A:
I would say that depends on the nature of your problem. Different problem domains could require almost arbitrary error messages, while other trivial tasks just can return NULL or -1 on error.
The problem with error return codes is that you're polluting/masking the error since it can be ignored (sometimes with the API client not even knowing they should check for the error code). It gives a (reasonably) valid output from the method at hand.
Imagine you have an API where you ask for a index key for some map, store it in a list, and then continue running. The API then at a later moment sends a callback, and that method might then traverse the table, using the key which might be -1 in this example (the error code). BOOM, the application crashes as you index to -1 in some array, and those kinds of problems can be very hard to nail down. This is still a trivial example, but it illustrates a problem with error codes.
On the other hand, error codes are faster than throwing exceptions, and you might want to use them for frequently accessed method calls - if it is appropriate to return such an error code. I would say that trying to encapsulate these kinds of error codes within a private assembly would be quite OK since you're not exposing those error codes to the client of the API. Always remember to document these methods rigorously since these kinds of application nukes can linger around in an application for a long time since they were triggered before it goes off.
Personally, I prefer a mix of them both to some extent. I use exceptions just for that - exceptions - when the program runs into a state which was not expected and needs to inform something has gone way out of plan. I am not a sucker of writing try/catch blocks all over my code, but it's all down to personal preference.
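To make the map/index example above concrete, here is a small illustrative sketch (Python, with invented names) of how a sentinel return code can slip through unchecked, where an exception fails loudly at the point of the lookup:
def find_key(table, name):
    # Error-code style: returns -1 when the name is missing.
    return table.get(name, -1)

def find_key_strict(table, name):
    # Exception style: fails immediately at the lookup instead of later.
    if name not in table:
        raise KeyError(name)
    return table[name]

table = {'foo': 2}
values = ['a', 'b', 'c']

idx = find_key(table, 'bar')     # caller forgets to check for -1 ...
print(values[idx])               # ... and silently prints 'c' (values[-1]) -- the masked error

try:
    print(values[find_key_strict(table, 'bar')])
except KeyError as missing:
    print('missing key:', missing)   # the failure is visible right away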
A:
Best for what? Language design is always about tradeoffs. The advantage of return codes is that they don't require any runtime support beyond regular function calls; the disadvantages are 1) you always have to check them 2) the return type has to have a failure value that isn't a valid result of the function call.
The advantage of automatic exception handling is that error conditions in your code don't disappear.
The differences between exception handling semantics in various languages (and Lisp's condition system, E's ejectors, etc) mainly show up in how stack unwinding is dealt with when program execution should continue.
To summarize, though: automatic exception handling is extremely valuable when you need to write readable, robust software, especially in a large team. Letting the computer track error conditions for you gives you one less thing to think about when reading code, and it removes an opportunity for error. The only time I'd use return codes to indicate errors is if I was implementing a language with exception handling in one that didn't have it.
A:
try/catch/finally does the job admirably.
It allows the programmer to handle specific conditions as well as general failures gracefully.
All said and done I'm sure that each is as good as any other.
A:
I'd have to go with the try/catch concept. I feel that in terms of readability this provides the most to a code maintainer. It should be fairly straightforward to find the chain of function calls as long as the exception is properly typed and the associated message contains detailed enough data (I personally do not like including stack traces, but I know plenty who do, and this would make it even more traceable). The return code implementation requires an external table of code definitions on a program-by-program basis, which from personal experience is unwieldy both to maintain and to reference.
A:
For an unusual perspective on exception handling, see Haskell's Control.Exception monad.
|
What is the best implementation of an exception mechanism?
|
Most program languages have some kind of exception handling; some languages have return codes, others have try/catch, or rescue/retry, etc., each with its own pecularities in readability, robustness, and practical effectiveness in a large group development effort. Which one is the best and why ?
|
[
"I would say that depends on the nature of your problem. Different problem domains could require almost arbitrary error messages, while other trivial tasks just can return NULL or -1 on error. \nThe problem with error return codes is that you're polluting/masking the error since it can be ignored (sometimes without the API client not knowing they should check for the error code). It gives a (reasonably) valid output from the method at hand. \nImagine you have an API where you ask for a index key for some map, store it in a list, and then continue running. The API then at a later moment sends a callback, and that method might then traverse the table, using the key which might be -1 in this example (the error code). BOOM, the application crashes as you index to -1 in some array, and those kinds of problems can be very hard to nail down. This is still a trivial example, but it illustrates a problem with error codes. \nOn the other hand, error codes are faster than throwing exceptions, and you might want to use them for frequently accessed method calls - if it is appropriate to return such an error code. I would say that trying to encapsulate these kinds of error codes within a private assembly would be quite OK since you're not exposing those error codes to the client of the API. Always remember to document these methods rigorously since these kinds of application nukes can linger around in an application for a long time since they were triggered before it goes off.\nPersonally, I prefer a mix of them both to some extent. I use exceptions just for that - exceptions - when the program runs into a state which was not expected and needs to inform something has gone way out of plan. I am not a sucker of writing try/catch blocks all over my code, but it's all down to personal preference. \n",
"Best for what? Language design is always about tradeoffs. The advantage of return codes is that they don't require any runtime support beyond regular function calls; the disadvantages are 1) you always have to check them 2) the return type has to have a failure value that isn't a valid result of the function call.\nThe advantage of automatic exception handling is that error conditions in your code don't disappear.\nThe differences between exception handling semantics in various languages (and Lisp's condition system, E's ejectors, etc) mainly show up in how stack unwinding is dealt with when program execution should continue.\nTo summarize, though: automatic exception handling is extremely valuable when you need to write readable, robust software, especially in a large team. Letting the computer track error conditions for you gives you one less thing to think about when reading code, and it removes an opportunity for error. The only time I'd use return codes to indicate errors is if I was implementing a language with exception handling in one that didn't have it.\n",
"try/catch/finally does the job admirably. \nIt allows the programmer to handle specific conditions as well as general failures gracefully.\nAll said and done I'm sure that each is as good as any other.\n",
"I'd have to go with the try / catch concept. I feel like in terms of readability this provides the most to a code maintainer. It should be fairly straight forward to find the chain of function calls as long as the exception is properly typed and the associated message contains detailed enough data (I personally do not like including stack traces but I know plenty who do and this would make this even more traceable.) The return code implementation requires an external table of code definitions on a program by program basis. Which from personal experience is both unwieldy to maintain and reference.\n",
"For unusual perspective on exception handling, see Haskell's Control.Exception monad\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"exception",
"language_agnostic"
] |
stackoverflow_0000066016_exception_language_agnostic.txt
|
Q:
What are the C# documentation tags?
In C# documentation tags allow you to produce output similar to MSDN. What are a list of allowable tags for use inside the /// (triple slash) comment area above classes, methods, and properties?
A:
If you type this just above a method or class, intellisense should prompt you with a list of available tags:
/// <
A:
Here's a list:
summary
param
returns
example
code
see
seealso
list
value
file
copyright
Here's an example:
<file>
<copyright>(c) Extreme Designers Inc. 2008.</copyright>
<datecreated>2008-09-15</datecreated>
<summary>
Here's my summary
</summary>
<remarks>
<para>The <see cref="TextReader"/> can be used in the following ways:</para>
<list type="number">
<item>first item</item>
<item>second item</item>
</list>
</remarks>
<example>
<code>
System.Console.WriteLine("Hello, World");
</code>
</example>
<param name="aParam">My first param</param>
<returns>an object that represents a summary</returns>
</file>
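Note that these comments only become an external XML documentation file when documentation output is enabled; with the command-line compiler that is the /doc switch, or the DocumentationFile property in an MSBuild project file (file names below are just placeholders):
csc /doc:MyLibrary.xml MyClass.cs
<DocumentationFile>bin\Debug\MyLibrary.xml</DocumentationFile>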
A:
Check out this great documentation on the various C# XML documentation tags (go to the bottom to see the tags).
A:
MSDN article from 2002 detailing all the tags and when to use them:
http://msdn.microsoft.com/en-us/magazine/cc302121.aspx
A:
GhostDoc helps by creating a stub comment for your method/class.
A:
See the excellent MSDN article here as your first stop.
A:
Look inside the docs for Sandcastle. This is the new documentation standard for .NET.
|
What are the C# documentation tags?
|
In C# documentation tags allow you to produce output similar to MSDN. What are a list of allowable tags for use inside the /// (triple slash) comment area above classes, methods, and properties?
|
[
"If you type this just above a method or class, intellisense should prompt you with a list of available tags:\n/// <\n\n",
"Here's a list:\n\nsummary \nparam \nreturns \nexample \ncode\nsee\nseealso\nlist\nvalue\nfile\ncopyright\n\nHere's an example:\n\n<file>\n<copyright>(c) Extreme Designers Inc. 2008.</copyright>\n<datecreated>2008-09-15</datecreated>\n<summary>\nHere's my summary\n</summary>\n<remarks>\n<para>The <see cref=\"TextReader\"/> can be used in the following ways:</para>\n<list type=\"number\">\n<item>first item</item>\n<item>second item</item>\n</list>\n</remarks>\n<example>\n<code>\nSystem.Console.WriteLine(\"Hello, World\");\n</code>\n</example>\n<param name=\"aParam\">My first param</param>\n<returns>an object that represents a summary</returns>\n</file>\n\n",
"Check out Great documentation on the various C# XML documentation tags. (Go to the bottom to see the tags)\n",
"MSDN article from 2002 detailing all the tags and when to use them:\nhttp://msdn.microsoft.com/en-us/magazine/cc302121.aspx\n",
"GhostDoc helps by creating a stub comment for your method/class.\n",
"See the excellent MSDN article here as your first stop.\n",
"Look inside the docs for Sandcastle. This is the new documentation standard for .NET.\n"
] |
[
15,
14,
8,
4,
3,
2,
1
] |
[] |
[] |
[
"c#",
"documentation",
"xml"
] |
stackoverflow_0000065969_c#_documentation_xml.txt
|
Q:
Why does Tomcat 5.5 (with Java 1.4, running on Windows XP 32-bit) suddenly hang?
I've been running Tomcat 5.5 with Java 1.4 for a while now with a huge webapp. Most of the time it runs fine, but sometimes it will just hang, with no exception generated, and no apparent way of getting it to run again other than restarting Tomcat. The Tomcat instance is allowed a gigabyte of memory on the heap, but rarely exceeds 300 MB. Has anyone else run into this issue, and is there a solution for it?
For clarification: I determined how much memory it is using via Task Manager and via Eclipse (I've also tried running it outside of Eclipse, but get the same problem eventually, though it takes a little longer). With Eclipse, I look at the memory allocated via its little (optional) memory pane and the amount allocated to javaw.exe via the task manager. I use the sysdeo? tomcat plugin for Eclipse.
A:
For any jvm process, force a thread dump. In windows, this can be done with CTRL-BREAK, I believe, in the console window.
In *nix, it is almost always "kill -3 jvm-pid".
This may show if you have threads waiting on db connection pool/thread pool, etc.
Another thing to check out is how many connections you have currently to the JVM -- either use NETSTAT or SysInternals utility such as tcpconn/tcpview (google it).
Also, try to run with the verbose:gc JVM flag. For Sun's JVM, run like "java -verbose:gc". This will show your garbage collections. If it is collecting a lot (FULL COLLECTIONS, especially) then you probably have a memory leak. The full collections are costly, especially on large heaps like that.
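For example, assuming Tomcat is started with the stock catalina scripts, the flag can be added through JAVA_OPTS before startup (Windows shown here, since that is the environment in the question):
set JAVA_OPTS=-verbose:gc
catalina.bat run
On *nix the equivalent is export JAVA_OPTS=-verbose:gc before catalina.sh run, and the kill -3 thread dump mentioned above ends up in catalina.out.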
How are you determining that only 300mb are being used?
A:
It sounds like you're hitting a deadlock.
If you can reproduce it in a dev environment then try attaching a debugger once it's happened. Take a look at your threads and see if you have any deadlocks.
If you can't get a debugger to attach you should be able to generate a thread dump, as Dustin pointed out.
A:
Try increasing the logging sensitivity for the Tomcat application server.
http://tomcat.apache.org/tomcat-5.5-doc/logging.html
You can increase the sensitivity to FINEST or ALL for most of them for a few days and see if that helps you catch anything.
A:
I agree with creating multiple thread dumps and viewing them through this: Thread Dump Analyzer
|
Why does Tomcat 5.5 (with Java 1.4, running on Windows XP 32-bit) suddenly hang?
|
I've been running Tomcat 5.5 with Java 1.4 for a while now with a huge webapp. Most of the time it runs fine, but sometimes it will just hang, with no exception generated, and no apparant way of getting it to run again other than re-starting Tomcat. The tomcat instance is allowed a gigabyte of memory on the heap, but rarely exceeds 300 MB. Has anyone else run into this issue, and is there a solution for it?
For clarification: I determined how much memory it is using via Task Manager and via Eclipse (I've also tried running it outside of Eclipse, but get the same problem eventually, though it takes a little longer). With Eclipse, I look at the memory allocated via its little (optional) memory pane and the amount allocated to javaw.exe via the task manager. I use the sysdeo? tomcat plugin for Eclipse.
|
[
"For any jvm process, force a thread dump. In windows, this can be done with CTRL-BREAK, I believe, in the console window.\nIn *nix, it is almost always \"kill -3 jvm-pid\".\nThis may show if you have threads waiting on db connection pool/thread pool, etc.\nAnother thing to check out is how many connections you have currently to the JVM -- either use NETSTAT or SysInternals utility such as tcpconn/tcpview (google it).\nAlso, try to run with the verbose:gc JVM flag. For Sun's JVM, run like \"java -verbose:gc\". This will show your garbage collections. If it is collecting a lot (FULL COLLECTIONS, expecially) then you probably have a memory leak. The full collections are costly, especially on large heaps like that.\nHow are you determining that only 300mb are being used?\n",
"It sounds like you're hitting a deadlock.\nIf you can reproduce it in a dev environment then try attaching a debugger once it's happened. Take a look at your threads and see if you have any deadlocks.\nIf you can't get a debugger to attach you should be able to generate a thread dump, as Dustin pointed out.\n",
"Try increasing the logging sensitivity for the Tomcat application server.\nhttp://tomcat.apache.org/tomcat-5.5-doc/logging.html\nYou can increase the sensitivity to FINEST or ALL for most of them for a few days and see if that helps you catch anything. \n",
"I agree with creating multiple thread dumps and viewing them though this: Thread Dump Analyzer\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"java",
"tomcat",
"windows"
] |
stackoverflow_0000066104_java_tomcat_windows.txt
|
Q:
Export ASPX to HTML
We're building a CMS. The site will be built and managed by the users in aspx pages, but we would like to create a static site of HTML's.
The way we're doing it now is with code I found here that overloads the Render method in the Aspx Page and writes the HTML string to a file. This works fine for a single page, but the thing with our CMS is that we want to automatically create a few HTML pages for a site right from the start, even before the creator has edited anything in the system.
Does anyone know of any way to do this?
A:
I seem to have found the solution to my problem by using the Server.Execute method.
I found an article that demonstrated the use of it:
TextWriter textWriter = new StringWriter();
Server.Execute("myOtherPage.aspx", textWriter);
Then I do a few manipulations on the textWriter, and insert it into an HTML file. Et voila! It works!
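For anyone following along, a minimal sketch of that last step might look like the following (the page name and output path are placeholders, not from the original answer):
TextWriter textWriter = new StringWriter();
Server.Execute("myOtherPage.aspx", textWriter);
string html = textWriter.ToString();
// ... tweak the markup as needed, then persist it as a static file
System.IO.File.WriteAllText(Server.MapPath("~/static/myOtherPage.html"), html);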
A:
Calling the Render method is still pretty simple. Just create an instance of your page, create a stub WebContext along with the WebRequest object, and call the Render method of the page. You are then free to do whatever you want with the results.
Alternatively, write a little curl or wget script to download and store whichever pages you want to make static.
A:
You could use wget (a command line tool) to recursively query each page and save them to html files. It would update all necessary links in the resulting html to reference .html files instead of .aspx. This way, you can code all your site as if you were using server-generated pages (easier to test), and then convert it to static pages.
If you need static HTML for performance reasons only, my preference would be to use ASP.Net output caching.
A:
I recommend you do this a very simple way and don't do it in code. It will allow your CMS code to do what the CMS code should do and will keep it as simple as possible.
Use a product such as HTTrack. It calls itself a "website copier". It crawls a site and creates html output. It is fast and free. You can just have it run at whatever frequency you think is best.
It decouples your HTML output needs from your CMS design and implementation. It reduces complexity and gives you some flexibility in how you output the HTML without introducing failure points in your CMS code.
A:
@ckarras: I would rather not use an external tool, because I want the HTML pages to be created programmatically and not manually.
@jttraino: I don't have a time interval in which the site needs to be output - the output has to occur when a user creates a new site.
@Frank Krueger: I don't really understand how to create an instance of my page using WebContext and WebRequest.
I searched for "wget" in searchdotnet, and got to a post about a .NET class called WebClient. It seems to do what I want if I use the DownloadString() method - it gets a string from a specific URL. The problem is that our CMS requires a login, so when the method tries to reach the page it is redirected to the login page, and therefore returns the login.aspx HTML...
Any thoughts as to how I can continue from here?
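One possible way around the login redirect, assuming the CMS uses ordinary forms authentication (URLs, field names and paths below are placeholders): post the credentials once, keep the resulting cookie, and reuse it for the page request. Requires the System.Net, System.IO and System.Text namespaces.
CookieContainer cookies = new CookieContainer();

// 1) Post the login form once to obtain the authentication cookie
HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://myserver/cms/login.aspx");
login.Method = "POST";
login.ContentType = "application/x-www-form-urlencoded";
login.CookieContainer = cookies;
byte[] body = Encoding.UTF8.GetBytes("user=admin&password=secret");
login.ContentLength = body.Length;
using (Stream s = login.GetRequestStream()) { s.Write(body, 0, body.Length); }
login.GetResponse().Close();

// 2) Request the page to export; the container carries the session cookie along
HttpWebRequest page = (HttpWebRequest)WebRequest.Create("http://myserver/cms/somepage.aspx");
page.CookieContainer = cookies;
using (StreamReader reader = new StreamReader(page.GetResponse().GetResponseStream()))
{
    File.WriteAllText(@"C:\output\somepage.html", reader.ReadToEnd());
}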
|
Export ASPX to HTML
|
We're building a CMS. The site will be built and managed by the users in aspx pages, but we would like to create a static site of HTML's.
The way we're doing it now is with code I found here that overloads the Render method in the Aspx Page and writes the HTML string to a file. This works fine for a single page, but the thing with our CMS is that we want to automatically create a few HTML pages for a site right from the start, even before the creator has edited anything in the system.
Does anyone know of any way to do this?
|
[
"I seem to have found the solution for my problemby using the Server.Ecxcute method.\nI found an article that demonstared the use of it:\nTextWriter textWriter = new StringWriter();\nServer.Execute(\"myOtherPage.aspx\", textWriter);\n\nThen I do a few maniulatons on the textWriter, and insert it into an html file. Et voila! It works!\n",
"Calling the Render method is still pretty simple. Just create an instance of your page, create a stub WebContext along with the WebRequest object, and call the Render method of the page. You are then free to do whatever you want with the results.\nAlternatively, write a little curl or wget script to download and store whichever pages you want to make static.\n",
"You could use wget (a command line tool) to recursively query each page and save them to html files. It would update all necessary links in the resulting html to reference .html files instead of .aspx. This way, you can code all your site as if you were using server-generated pages (easier to test), and then convert it to static pages.\nIf you need static HTML for performance reasons only, my preference would be to use ASP.Net output caching.\n",
"I recommend you do this a very simple way and don't do it in code. It will allow your CMS code to do what the CMS code should do and will keep it as simple as possible.\nUse a product such as HTTrack. It calls itself a \"website copier\". It crawls a site and creates html output. It is fast and free. You can just have it run at whatever frequency you think is best.\nIt decouples your HTML output needs from your CMS design and implementation. It reduces complexity and gives you some flexibility in how you output the HTML without introducing failure points in your CMS code.\n",
"@ckarras: I would rather not use an external tool, because I want the HTML pages to be created programmatically and not manually.\n@jttraino: I don't have a time interval in which the site needs to be outputted- the uotput has to occur when a user creates a new site.\n@Frank Krueger: I don't really understand how to create an instance of my page using WebContext and WebRequest. \nI searched for \"wget\" in searchdotnet, and got to a post about a .net class called WebClient. It seems to do what I want if I use the DownloadString() method - gets a string from a specific url. The problem is that because our CMS needs to be logged in to, when the method tries to reach the page it's thrown to the login page, and therefore returns the login.aspx HTML...\nAny thoughts as to how I can continue from here?\n"
] |
[
4,
2,
2,
2,
1
] |
[] |
[] |
[
"asp.net",
"html",
"screen_scraping"
] |
stackoverflow_0000056279_asp.net_html_screen_scraping.txt
|
Q:
Advanced directory switching in bash
I know a few advanced ways to change directories: pushd and popd (the directory stack), or cd - (change to the last directory).
But I am looking for quick way to achieve the following:
Say, I am in a rather deep dir:
/this/is/a/very/deep/directory/structure/with\ lot\ of\ nasty/names
and I want to switch to
/this/is/another/very/deep/directory/structure/with\ lot\ of\ nasty/names
Is there a cool/quick/geeky way to do it (without the mouse)?
A:
Do you mean that the path names are the same, and only one directory name changes ("a" becomes "another")? In that case:
cd ${PWD/a/another}
will switch to the other directory. $PWD holds your current directory, and ${var/foo/bar} gives you $var with the string 'foo' replaced by 'bar'.
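If this comes up often, the substitution is easy to wrap in a small function (the function name is arbitrary):
# cd to the current path with the first occurrence of $1 replaced by $2
cdsub () { cd "${PWD/$1/$2}"; }
# usage: cdsub a another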
A:
What about setting up your CDPATH variable?
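For reference, CDPATH makes cd search a list of parent directories when given a relative path; a hypothetical setup:
export CDPATH=.:~/projects:/var/www
cd myapp    # changes to ~/projects/myapp if it exists there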
A:
cd ^/a/^/another/
|
Advanced directory switching in bash
|
I know a few advanced ways, to change directories. pushd and popd (directory stack) or cd - (change to last directory).
But I am looking for quick way to achieve the following:
Say, I am in a rather deep dir:
/this/is/a/very/deep/directory/structure/with\ lot\ of\ nasty/names
and I want to switch to
/this/is/another/very/deep/directory/structure/with\ lot\ of\ nasty/names
Is there a cool/quick/geeky way to do it (without the mouse)?
|
[
"Do you mean that the path names are the same, and only one directory name changes (\"a\" becomes \"another\")? In that case:\ncd ${PWD/a/another}\n\nwill switch to the other directory. $PWD holds your current directory, and ${var/foo/bar} gives you $var with the string 'foo' replaced by 'bar'.\n",
"What about setting up your CDPATH variable?\n",
"cd ^/a/^/another/\n\n"
] |
[
10,
3,
1
] |
[] |
[] |
[
"bash"
] |
stackoverflow_0000060874_bash.txt
|
Q:
Batch insert using JPA/Toplink
I have a web application that receives messages through an HTTP interface, e.g.:
http://server/application?source=123&destination=234&text=hello
This request contains the ID of the sender, the ID of the recipient and the text of the message.
This message should be processed like:
finding the matching User object for both the source and the destination from the database
creating a tree of objects: a Message that contains a field for the message text and two User objects for the source and the destination
persisting this tree to a database.
The tree will be loaded by other applications that I can't touch.
I use Oracle as the backing database and JPA with Toplink for the database handling tasks. If possible, I'd stay with these.
Without much optimization I can achieve ~30 requests/sec throughput in my environment. That's not much; I'd require ~300 requests/sec. So I measured where the performance bottleneck is and found that the calls to em.persist() take most of the time. If I simply comment out that line, the throughput goes well over 1000 requests/sec.
I tried to write a small test application that used simple JDBC calls to persist 1 million messages to the same database. I used batching, meaning I did 100 inserts then a commit, and repeated until all the records were in the database. I measured ~500 requests/sec throughput in this scenario, which would meet my needs.
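For reference, the plain-JDBC batching described here looks roughly like the following sketch (table, column and class names are invented; imports from java.sql are assumed):
PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO MESSAGE (SOURCE_ID, DEST_ID, TEXT) VALUES (?, ?, ?)");
int count = 0;
for (Msg m : messages) {
    ps.setLong(1, m.getSourceId());
    ps.setLong(2, m.getDestinationId());
    ps.setString(3, m.getText());
    ps.addBatch();
    if (++count % 100 == 0) {      // flush and commit every 100 rows, as described above
        ps.executeBatch();
        conn.commit();
    }
}
ps.executeBatch();                 // flush the remainder
conn.commit();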
It is clear that I need to optimize insert performance here. However as I mentioned earlier I would like to keep using JPA and Toplink for this, not pure JDBC.
Do you know a way to create batch inserts with JPA and Toplink? Can you recommend any other technique for improving JPA persist performance?
ADDITIONAL INFO:
"requests/sec" means here: total number of requests / total time from beginning of test to last record written to database.
I tried to make the calls to em.persist() asynchronous by creating an in-memory queue between the servlet stuff and the persister. It helped the performance greatly. However, the queue did grow really fast, and as the application will receive ~200 requests/second continuously, it is not an acceptable solution for me.
In this decoupled approach I collected requests for 100 msec and called em.persist() on all collected items before commiting the transaction. The EntityManagerFactory is cached between each transaction.
A:
You should decouple from the JPA interface and use the bare TopLink API. You can probably chuck the objects you're persisting into a UnitOfWork and commit the UnitOfWork on your schedule (sync or async). Note that one of the costs of em.persist() is the implicit clone that happens of the whole object graph. TopLink will work rather better if you uow.registerObject() your two user objects yourself, saving itself the identity tests it has to otherwise do. So you'll end up with:
uow=sess.acquireUnitOfWork();
for (job in batch) {
thingyCl=uow.registerObject(new Thingy());
user1Cl=uow.registerObject(user1);
user2Cl=uow.registerObject(user2);
thingyCl.setUsers(user1Cl,user2Cl);
}
uow.commit();
This is very old school TopLink btw ;)
Note that the batch will help a lot, because batch writing and more especially batch writing with parameter binding will kick in which for this simple example will probably have a very large impact on your performance.
Other things to look for: your sequencing size. A lot of the time spent writing objects in TopLink is actually spent reading sequencing information from the database, especially with the small defaults (I would probably have several hundred or even more as my sequence size).
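With JPA annotations, the sequence preallocation size can typically be raised on the entity's ID, roughly like this (generator and sequence names are placeholders):
@Id
@SequenceGenerator(name = "msgSeq", sequenceName = "MESSAGE_SEQ", allocationSize = 500)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "msgSeq")
private Long id;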
A:
What is your measure of "requests/sec"? In other words, what happens for the 31st request? What resource is being blocked? If it is the front-end/servlet/web portion, can you run em.persist() in another thread and return immediately?
Also, are you creating transactions each time? Are you creating EntityManagerFactory objects with each request?
|
Batch insert using JPA/Toplink
|
I have a web application that receives messages through an HTTP interface, e.g.:
http://server/application?source=123&destination=234&text=hello
This request contains the ID of the sender, the ID of the recipient and the text of the message.
This message should be processed like:
finding the matching User object for both the source and the destination from the database
creating a tree of objects: a Message that contains a field for the message text and two User objects for the source and the destination
persisting this tree to a database.
The tree will be loaded by other applications that I can't touch.
I use Oracle as the backing database and JPA with Toplink for the database handling tasks. If possible, I'd stay with these.
Without much optimization I can achieve ~30 requests/sec throughput in my environment. That's not much, I'd require ~300 requests/sec. So I measured where the performance bottleneck is and found that the calls to em.persist() takes most of the time. If I simply comment out that line, the throughput go well over 1000 requests/sec.
I tried to write a small test application that used simple JDBC calls to persist 1 million messages to the same database. I used batching, meaning I did 100 inserts then a commit, and repeated until all the records was in the database. I measured ~500 requests/sec throughput in this scenario, that would meet my needs.
It is clear that I need to optimize insert performance here. However as I mentioned earlier I would like to keep using JPA and Toplink for this, not pure JDBC.
Do you know a way to create batch inserts with JPA and Toplink? Can you recommend any other technique for improving JPA persist performance?
ADDITIONAL INFO:
"requests/sec" means here: total number of requests / total time from beginning of test to last record written to database.
I tried to make the calls to em.persist() asynchronous by creating an in-memory queue between the servlet stuff and the persister. It helped the performance greatly. However the queue did grow really fast and as the application will receive ~200 requests/second continuously, It is not an acceptable solution for me.
In this decoupled approach I collected requests for 100 msec and called em.persist() on all collected items before commiting the transaction. The EntityManagerFactory is cached between each transaction.
|
[
"You should decouple from the JPA interface and use the bare TopLink API. You can probably chuck the objects you're persisting into a UnitOfWork and commit the UnitOfWork on your schedule (sync or async). Note that one of the costs of em.persist() is the implicit clone that happens of the whole object graph. TopLink will work rather better if you uow.registerObject() your two user objects yourself, saving itself the identity tests it has to otherwise do. So you'll end up with:\nuow=sess.acquireUnitOfWork();\nfor (job in batch) {\n thingyCl=uow.registerObject(new Thingy());\n user1Cl=uow.registerObject(user1);\n user2Cl=uow.registerObject(user2);\n thingyCl.setUsers(user1Cl,user2Cl);\n}\nuow.commit();\n\nThis is very old school TopLink btw ;)\nNote that the batch will help a lot, because batch writing and more especially batch writing with parameter binding will kick in which for this simple example will probably have a very large impact on your performance.\nOther things to look for: your sequencing size. A lot of the time spent writing objects in TopLink is actually spent reading sequencing information from the database, especially with the small defaults (I would probably have several hundred or even more as my sequence size).\n",
"What is your measure of \"requests/sec\"? In other words, what happens for the 31st request? What resource is being blocked? If it is the front-end/servlet/web portion, can you run em.persist() in another thread and return immediately?\nAlso, are you creating transactions each time? Are you creating EntityManagerFactory objects with each request?\n"
] |
[
3,
0
] |
[] |
[] |
[
"java",
"jpa",
"oracle",
"toplink"
] |
stackoverflow_0000064781_java_jpa_oracle_toplink.txt
|
Q:
PHP Forms-Based Authentication on Windows using Local User Accounts
I'm running PHP, Apache, and Windows. I do not have a domain setup, so I would like my website's forms-based authentication to use the local user accounts database built in to Windows (I think it's called SAM).
I know that if Active Directory is setup, you can use the PHP LDAP module to connect and authenticate in your script, but without AD there is no LDAP. What is the equivalent for standalone machines?
A:
I haven't found a simple solution either. There are examples using CreateObject and the WinNT ADSI provider. But eventually they all bump into
User authentication issues with the Active Directory Service Interfaces WinNT provider. I'm not 100% sure but I guess the WSH/network connect approach has the same problem.
According to How to validate user credentials on Microsoft operating systems you should use LogonUser or SSPI.
It also says: LogonUser Win32 API does not require TCB privilege in Microsoft Windows Server 2003; however, for downlevel compatibility, this is still the best approach.
On Windows XP, it is no longer required that a process have the SE_TCB_NAME privilege in order to call LogonUser. Therefore, the simplest method to validate a user's credentials on Windows XP is to call the LogonUser API. Therefore, if I were certain Win9x/2000 support isn't needed, I would write an extension that exposes LogonUser to PHP.
You might also be interested in User Authentication from NT Accounts. It uses the w32api extension, and needs a support dll ...I'd rather write that small LogonUser-extension ;-)
If that's not feasible I'd probably look into the fastcgi module for IIS and how stable it is and let the IIS handle the authentication.
edit:
I've also tried to utilize System.Security.Principal.WindowsIdentity and php's com/.net extension. But the dotnet constructor doesn't seem to allow passing parameters to the objects constructor and my "experiment" to get the assembly (and with it CreateInstance()) from GetType() has failed with an "unknown zval" error.
A:
Good Question!
I've given this some thought... and I can't think of a good solution. What I can think of is a horrible horrible hack that just might work. After seeing that no one has posted an answer to this question for nearly a day, I figured a bad, but working answer would be ok.
The SAM file is off limits while the system is running. There are some DLL Injection tricks which you may be able to get working but in the end you'll just end up with password hashes and you'd have to hash the user provided passwords to match against them anyway.
What you really want is something that tries to authenticate the user against the SAM file. I think you can do this by doing something like the following.
Create a File Share on the server and make it so that only accounts that you want to be able to log in as are granted access to it.
In PHP use the system command to invoke a wsh script that mounts the share using the username and password that the website user provides, records whether it works, and then unmounts the drive if it does.
Collect the result somehow. The result can be returned to php either on the stdout of the script, or hopefully using the return code for the script.
I know it's not pretty, but it should work.
I feel dirty :|
Edit: reason for invoking the external wsh script is that PHP doesn't allow you to use UNC paths (as far as I can remember).
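A rough sketch of what that wsh script could look like (the share path is a placeholder); the exit code tells the calling PHP whether the credentials were accepted:
' checklogin.vbs <user> <password>
Set net = CreateObject("WScript.Network")
On Error Resume Next
net.MapNetworkDrive "Z:", "\\myserver\authcheck", False, WScript.Arguments(0), WScript.Arguments(1)
If Err.Number = 0 Then
    net.RemoveNetworkDrive "Z:", True
    WScript.Quit 0      ' credentials accepted
Else
    WScript.Quit 1      ' logon failed
End If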
A:
Building a PHP extension requires a pretty large investment in terms of hard disk space. Since I already have the GCC (MinGW) environment installed, I have decided to take the performance hit of launching a process from the PHP script. The source code is below.
// Usage: logonuser.exe /user username /password password [/domain domain]
// Exit code is 0 on logon success and 1 on failure.
#include <windows.h>
#include <string.h>   /* for strcmp */
int main(int argc, char *argv[]) {
HANDLE r = 0;
char *user = 0;
char *password = 0;
char *domain = 0;
int i;
for(i = 1; i < argc; i++) {
if(!strcmp(argv[i], "/user")) {
if(i + 1 < argc) {
user = argv[i + 1];
i++;
}
} else if(!strcmp(argv[i], "/domain")) {
if(i + 1 < argc) {
domain = argv[i + 1];
i++;
}
} else if(!strcmp(argv[i], "/password")) {
if(i + 1 < argc) {
password = argv[i + 1];
i++;
}
}
}
if(user && password) {
LogonUser(user, domain, password, LOGON32_LOGON_BATCH, LOGON32_PROVIDER_DEFAULT, &r);
}
return r ? 0 : 1;
}
Next is the PHP source demonstrating its use.
<?php
// Defaults so the form fields below are defined on the initial GET request
$user = '';
$domain = '';
if($_SERVER['REQUEST_METHOD'] == 'POST') {
if(isset($_REQUEST['user'], $_REQUEST['password'], $_REQUEST['domain'])) {
$failure = 1;
$user = $_REQUEST['user'];
$password = $_REQUEST['password'];
$domain = $_REQUEST['domain'];
if($user && $password) {
$cmd = "logonuser.exe /user " . escapeshellarg($user) . " /password " . escapeshellarg($password);
if($domain) $cmd .= " /domain " . escapeshellarg($domain);
system($cmd, $failure);
}
if($failure) {
echo("Incorrect credentials.");
} else {
echo("Correct credentials!");
}
}
}
?>
<form action="<?php echo(htmlentities($_SERVER['PHP_SELF'])); ?>" method="post">
Username: <input type="text" name="user" value="<?php echo(htmlentities($user)); ?>" /><br />
Password: <input type="password" name="password" value="" /><br />
Domain: <input type="text" name="domain" value="<?php echo(htmlentities($domain)); ?>" /><br />
<input type="submit" value="logon" />
</form>
|
PHP Forms-Based Authentication on Windows using Local User Accounts
|
I'm running PHP, Apache, and Windows. I do not have a domain setup, so I would like my website's forms-based authentication to use the local user accounts database built in to Windows (I think it's called SAM).
I know that if Active Directory is setup, you can use the PHP LDAP module to connect and authenticate in your script, but without AD there is no LDAP. What is the equivalent for standalone machines?
|
[
"I haven't found a simple solution either. There are examples using CreateObject and the WinNT ADSI provider. But eventually they all bump into \nUser authentication issues with the Active Directory Service Interfaces WinNT provider. I'm not 100% sure but I guess the WSH/network connect approach has the same problem.\nAccording to How to validate user credentials on Microsoft operating systems you should use LogonUser or SSPI.\n\nIt also saysLogonUser Win32 API does not require TCB privilege in Microsoft Windows Server 2003, however, for downlevel compatibility, this is still the best approach.\nOn Windows XP, it is no longer required that a process have the SE_TCB_NAME privilege in order to call LogonUser. Therefore, the simplest method to validate a user's credentials on Windows XP, is to call the LogonUser API.Therefore, if I were certain Win9x/2000 support isn't needed, I would write an extension that exposes LogonUser to php.\nYou might also be interested in User Authentication from NT Accounts. It uses the w32api extension, and needs a support dll ...I'd rather write that small LogonUser-extension ;-)\nIf that's not feasible I'd probably look into the fastcgi module for IIS and how stable it is and let the IIS handle the authentication.\nedit:\nI've also tried to utilize System.Security.Principal.WindowsIdentity and php's com/.net extension. But the dotnet constructor doesn't seem to allow passing parameters to the objects constructor and my \"experiment\" to get the assembly (and with it CreateInstance()) from GetType() has failed with an \"unknown zval\" error.\n",
"Good Question!\nI've given this some thought... and I can't think of a good solution. What I can think of is a horrible horrible hack that just might work. After seeing that no one has posted an answer to this question for nearly a day, I figured a bad, but working answer would be ok. \nThe SAM file is off limits while the system is running. There are some DLL Injection tricks which you may be able to get working but in the end you'll just end up with password hashes and you'd have to hash the user provided passwords to match against them anyway.\nWhat you really want is something that tries to authenticate the user against the SAM file. I think you can do this by doing something like the following. \n\nCreate a File Share on the server and make it so that only accounts that you want to be able to log in as are granted access to it.\nIn PHP use the system command to invoke a wsh script that: mounts the share using the username and password that the website user provides. records if it works, and then unmounts the drive if it does. \nCollect the result somehow. The result can be returned to php either on the stdout of the script, or hopefully using the return code for the script.\n\nI know it's not pretty, but it should work.\nI feel dirty :|\nEdit: reason for invoking the external wsh script is that PHP doesn't allow you to use UNC paths (as far as I can remember).\n",
"Building a PHP extension requires a pretty large investment in terms of hard disk space. Since I already have the GCC (MinGW) environment installed, I have decided to take the performance hit of launching a process from the PHP script. The source code is below.\n// Usage: logonuser.exe /user username /password password [/domain domain]\n// Exit code is 0 on logon success and 1 on failure.\n\n#include <windows.h>\n\nint main(int argc, char *argv[]) {\n HANDLE r = 0;\n char *user = 0;\n char *password = 0;\n char *domain = 0;\n int i;\n\n for(i = 1; i < argc; i++) {\n if(!strcmp(argv[i], \"/user\")) {\n if(i + 1 < argc) {\n user = argv[i + 1];\n i++;\n }\n } else if(!strcmp(argv[i], \"/domain\")) {\n if(i + 1 < argc) {\n domain = argv[i + 1];\n i++;\n }\n } else if(!strcmp(argv[i], \"/password\")) {\n if(i + 1 < argc) {\n password = argv[i + 1];\n i++;\n }\n }\n }\n\n if(user && password) {\n LogonUser(user, domain, password, LOGON32_LOGON_BATCH, LOGON32_PROVIDER_DEFAULT, &r);\n }\n return r ? 0 : 1;\n}\n\nNext is the PHP source demonstrating its use.\n \n\nif($_SERVER['REQUEST_METHOD'] == 'POST') {\n if(isset($_REQUEST['user'], $_REQUEST['password'], $_REQUEST['domain'])) {\n $failure = 1;\n $user = $_REQUEST['user'];\n $password = $_REQUEST['password'];\n $domain = $_REQUEST['domain'];\n\n if($user && $password) {\n $cmd = \"logonuser.exe /user \" . escapeshellarg($user) . \" /password \" . escapeshellarg($password);\n if($domain) $cmd .= \" /domain \" . escapeshellarg($domain);\n system($cmd, $failure);\n }\n\n if($failure) {\n echo(\"Incorrect credentials.\");\n } else {\n echo(\"Correct credentials!\");\n }\n }\n}\n?>\n<form action=\"<?php echo(htmlentities($_SERVER['PHP_SELF'])); ?>\" method=\"post\">\n Username: <input type=\"text\" name=\"user\" value=\"<?php echo(htmlentities($user)); ?>\" /><br />\n Password: <input type=\"password\" name=\"password\" value=\"\" /><br />\n Domain: <input type=\"text\" name=\"domain\" value=\"<?php echo(htmlentities($domain)); ?>\" /><br />\n <input type=\"submit\" value=\"logon\" />\n</form>\n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"apache",
"authentication",
"php",
"sam",
"windows"
] |
stackoverflow_0000060585_apache_authentication_php_sam_windows.txt
|
Q:
python cgi on IIS
How do you set up IIS so that you can call python scripts from asp pages?
Ok, so I found the answer to that question here: http://support.microsoft.com/kb/276494
So on to my next question: How do you call a cgi script from within classic asp (vb) code? Particularly one which is not in the web root directory.
A:
You could also do it this way.
A:
I don't believe that VBScript as hosted by IIS has any way of executing an external process. If you are using python as an AXscripting engine then you could just use the sys module. If the script you're calling is actually meant to be a cgi script you'll have to mimic all the environment variables that the cgi uses. The alternative is to put the script on the python path, import it and hope that it is modular enough that you can call the pieces you need and bypass the cgi handling code.
|
python cgi on IIS
|
How do you set up IIS so that you can call python scripts from asp pages?
Ok, so I found the answer to that question here: http://support.microsoft.com/kb/276494
So on to my next question: How do you call a cgi script from within classic asp (vb) code? Particularly one which is not in the web root directory.
|
[
"You could also do it this way.\n",
"I don't believe that VBScript as hosted by IIS has any way of executing an external process. If you are using python as an AXscripting engine then you could just use the sys module. If the script you're calling is actually meant to be a cgi script you'll have to mimic all the environment variables that the cgi uses. The alternative is to put the script on the python path, import it and hope that it is modular enough that you can call the pieces you need and bypass the cgi handling code.\n"
] |
[
2,
1
] |
[] |
[] |
[
"asp_classic",
"cgi",
"iis",
"python",
"vbscript"
] |
stackoverflow_0000061781_asp_classic_cgi_iis_python_vbscript.txt
|
Q:
Modifying the MBR of Windows
I need to modify the MBR of Windows, and I would really like to do this from Windows.
Here are my questions. I know that I can get a handle on a physical device with a call to CreateFile. Will the MBR always be on \\.\PHYSICALDRIVE0? Also, I'm still learning the Windows API to read directly from the disk. Are readabsolutesectors and writeabsolutesectors the two functions I'm going to need to use to read/write to the disk sectors which contain the MBR?
Edit from from what I've learned on my own.
The MBR will not always be on \\.\PHYSICALDRIVE0. Also, you can write to the boot sector (at least as Administrator on XP) by calling CreateFile with the device name of the drive that contains the MBR. Also, you can write to this drive by simply calling WriteFile and passing the handle of the device created by calling CreateFile.
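For what it's worth, a minimal user-mode read of the first sector (run as Administrator; the drive number is an assumption and error handling is mostly omitted) looks roughly like this:
#include <windows.h>
#include <stdio.h>

int main(void) {
    BYTE mbr[512];
    DWORD bytesRead = 0;
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    ReadFile(h, mbr, sizeof(mbr), &bytesRead, NULL);            /* first sector = MBR */
    printf("boot signature: %02X %02X\n", mbr[510], mbr[511]);  /* expect 55 AA */
    CloseHandle(h);
    return 0;
}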
Edit to address Joel Coehoorn.
I need to edit the MBR because I'm working on a project that needs to modify hardware registers after POST in BIOS, but before Windows will be allowed to boot. Our plan is to make these changes by modifying the bootloader to execute our code before Windows boots up.
Edit for Cd-MaN.
Thanks for the info. There isn't anything in your answer, though, that I didn't know and your answer doesn't address my question. The registry in particular absolutely will not do what we need for multiple reasons. The big reason being that Windows is the highest layer among multiple software layers that will be running with our product. These changes need to occur even before the lower levels run, and so the registry won't work.
P.S. for Cd-MaN.
As I understand it, the information you give isn't quite correct. For Vista, I think you can write to a volume if the sectors being written to are boot sectors. See http://support.microsoft.com/kb/942448
A:
Once the OS is started the MBR is typically protected for virus reasons - this is one of the oldest virus tricks in the books - goes back to passing viruses from floppy to floppy.
Even if it wasn't restricted, you have to write low level code - it isn't part of the file system, but exists on a specific location on the hard drive.
Due to that, you pretty much are restricted to writing low level (most programs implement this in assembly) or C code targeting 16 bit DOS.
Most of these programs use the BIOS interface (13h, I believe) to access the sectors of the disk directly. You can access these in C using some inline assembly, or compiler provided interfaces. You will generally not get access to BIOS without the cooperation of the OS, though, so your program, again, will be restricted to DOS. If you can access these you're almost home free - the nice thing about BIOS is you don't have to worry about what type of HD is in the system - even RAID cards often insert themselves into the BIOS routines so they can be accessed without knowing where in memory the ATA or SATA controller is, and executing commands on that low level.
If you absolutely must access it within an OS, though, you pretty much have to write a device driver to access the BIOS or the memory space where the HD controllers exist. I wouldn't recommend it, though, as this is very tricky to deal with - modern computers put the HD controllers in different spots in memory, with different IRQs, and each chipset has become a little more esoteric because they can provide a minimum interface to bios for bootup, and then a specific driver for Windows. They skip all the other interface niceties that would be considered compatible with other controllers because it's more expensive to be compatible.
You may find that at the driver level inside windows you'll have methods for accessing the drive sectors directly (or pseudo directly), but again, they are likely very well protected due to the aforementioned virus issues.
Good luck!
A:
Modifying the bootloader is bad, bad idea. Here are just a few of the possible gotcha's:
it will potentially kill full disk encryption products (Truecrypt, PGP, Vista's BitLocker, etc)
it will potentially trip up AV products (scaring users)
it will potentially kill complicated booting scenarios (chained boot loaders, etc)
it will kill off the chain of trust when using the TPM module (because it checks the MBR for change before executing it)
direct disk access is not allowed starting from Vista (only using drivers)
Alternatives (like modifying the hardware register during the Windows bootup via a driver which is set to load at boot time or after Windows has booted) should really be considered. If the modification is as simple as writing to a port, ie:
OUT AX, BL
then drivers exist for all versions of Windows which can do this (reading/writing a value from/to a certain port) and which can be called from user mode.
A:
Maybe a PXE boot scenario could help you? Simply boot from your crafted PXE image, which modifies the hardware registers you need to modify, and then return control to the Master Boot Record or to the active partition's boot record.
This way you don't have to modify the boot records.
|
Modifying the MBR of Windows
|
I need to modify the MBR of Windows, and I would really like to do this from Windows.
Here are my questions. I know that I can get a handle on a physical device with a call to CreateFile. Will the MBR always be on \\.\PHYSICALDRIVE0? Also, I'm still learning the Windows API to read directly from the disk. Is readabsolutesectors and writeabsolutesectdors the two functions I'm going to need to use to read/write to the disk sectors which contain the MBR?
Edit from from what I've learned on my own.
The MBR will not always be on \\.\PHYSICALDRIVE0. Also, you can write to the bootsector (at least as Administrator on XP) by call CreateFile with the device name of the drive that contains the MBR. Also, you can write to this drive by simply calling WriteFile and passing the handle of the device created by calling CreateFile.
Edit to address Joel Coehoorn.
I need to edit the MBR because I'm working on a project that needs to modify hardware registers after POST in BIOS, but before Windows will be allowed to boot. Our plan is to make these changes by modifying the bootloader to execute our code before Windows boots up.
Edit for Cd-MaN.
Thanks for the info. There isn't anything in your answer, though, that I didn't know and your answer doesn't address my question. The registry in particular absolutely will not do what we need for multiple reasons. The big reason being that Windows is the highest layer among multiple software layers that will be running with our product. These changes need to occur even before the lower levels run, and so the registry won't work.
P.S. for Cd-MaN.
As I understand it, the information you give isn't quite correct. For Vista, I think you can write to a volume if the sectors being written to are boot sectors. See http://support.microsoft.com/kb/942448
|
[
"Once the OS is started the MBR is typically protected for virus reasons - this is one of the oldest virus tricks in the books - goes back to passing viruses from floppy to floppy.\nEven if it wasn't restricted, you have to write low level code - it isn't part of the file system, but exists on a specific location on the hard drive.\nDue to that, you pretty much are restricted to writing low level (most programs implement this in assembly) or C code targeting 16 bit DOS.\nMost of these programs use the BIOS interface (13h, I believe) to access the sectors of the disk directly. You can access these in C using some inline assembly, or compiler provided interfaces. You will generally not get access to BIOS without the cooperation of the OS, though, so your program, again, will be restricted to DOS. If you can access these you're almost home free - the nice thing about BIOS is you don't have to worry about what type of HD is in the system - even RAID cards often insert themselves into the BIOS routines so they can be accessed without knowing where in memory the ATA or SATA controller is, and executing commands on that low level.\nIf you absolutely must access it within an OS, though, you pretty much have to write a device driver to access the BIOS or the memory space where the HD controllers exist. I wouldn't recommend it, though, as this is very tricky to deal with - modern computers put the HD controllers in different spots in memory, with different IRQs, and each chipset has become a little more esoteric because they can provide a minimum interface to bios for bootup, and then a specific driver for Windows. They skip all the other interface niceties that would be considered compatible with other controllers because it's more expensive to be compatible. \nYou may find that at the driver level inside windows you'll have methods for accessing the drive sectors directly (or pseudo directly), but again, they are likely very well protected due to the aforementioned virus issues.\nGood luck!\n",
"Modifying the bootloader is bad, bad idea. Here are just a few of the possible gotcha's:\n\nit will potentially kill full disk encryption products (Truecrypt, PGP, Vista's BitLocker, etc)\nit will potentially trip up AV products (scaring users)\nit will potentially kill complicated booting scenarios (chained boot loaders, etc)\nit will kill off the chain of trust when using the TPM module (because it checks the MBR for change before executing it)\ndirect disk access is not allowed starting from Vista (only using drivers)\n\nAlternatives (like modifying the hardware register during the Windows bootup via a driver which is set to load at boot time or after Windows has booted) should really be considered. If the modification is as simple as writing to a port, ie:\nOUT AX, BL\n\nthen drivers exists for all versions of Window which can do this (reading/writing a value from/to a certain port) which can be called from user mode.\n",
"Maybe a PXE boot scenario could help you? Simply boot on your crafted PXE image which modify the hardware registers you need to modify, and then return the control to the Master Boot Record or to the active partition's boot record.\nThis way you don't have to modify the boot records.\n"
] |
[
7,
4,
2
] |
[] |
[] |
[
"mbr",
"windows"
] |
stackoverflow_0000039503_mbr_windows.txt
|
Q:
Fatal warnings on Windows
While working between a Windows MySQL server and a Debian MySQL server, I noticed that warnings were fatal on Windows, but silently ignored on Debian. I'd like to make the warnings fatal on both servers while I'm doing development, but I wasn't able to find a setting that affected this behavior. Anyone have any ideas?
A:
I think what you're looking for is the sql_mode parameter in my.cnf (my.ini on Windows). STRICT_ALL_TABLES is the value. I guess it depends what you mean by "fatal".
http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
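A minimal example of what that looks like in the server configuration (apply it on both servers and restart; the value can also be set at runtime with SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';):
[mysqld]
sql_mode = "STRICT_ALL_TABLES"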
A:
Look at enabling strict mode in the /etc/my.cnf file (my.ini on Windows).
|
Fatal warnings on Windows
|
While working between a Windows MySQL server and a Debian MySQL server, I noticed that warnings were fatal on Windows, but silently ignored on Debian. I'd like to make the warnings fatal on both servers while I'm doing development, but I wasn't able to find a setting that effected this behavior. Anyone have any ideas?
|
[
"I think what you're looking for is the sql_mode parameter in my.conf. STRICT_ALL_TABLES is the value. I guess it depends what you mean by \"fatal\".\nhttp://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html\n",
"Look at enabling strict mode in the /etc/my.ini file.\n"
] |
[
3,
1
] |
[] |
[] |
[
"mysql"
] |
stackoverflow_0000064233_mysql.txt
|
Q:
Is it possible to share a transaction between a .Net application and a COM+ object?
I did some tests a while ago and never figured out how to make this work.
The ingredients:
COM+ transactional object (developed in VB6)
.Net web application (with transaction) in IIS that...
makes a call to the COM+ component
updates a row in a SQL database
Testing:
Run the .Net application and force an exception.
Result:
The update made from the .Net application rolls back.
The update made by the COM+ object does not roll back.
If I call the COM+ object from an old ASP page the rollback works.
I know some people may be thinking "what?! COM+ and .Net you must be out of your mind!", but there are some places in this world where there still are a lot of COM+ components. I was just curious if someone ever faced this and if you figured out how to make this work.
A:
Because VB and .NET will use different SQL connections (and there is no way to make ADO and ADO.NET share the same connection), your only possibility is to enlist the DTC (Distributed Transaction Coordinator). The DTC will coordinate the two independent transactions so they commit or are rolled back together.
From .NET, EnterpriseServices manages COM+ functionality, such as the DTC. In .NET 2.0 and forward, you can use the System.Transactions namespace, which makes things a little nicer. I think something like this should work (untested code):
void SomeMethod()
{
EnterpriseServicesInteropOption e = EnterpriseServicesInteropOption.Full;
using (TransactionScope s = new TransactionScope(e))
{
MyComPlusClass o = new MyComPlusClass();
o.SomeTransactionalMethod();
}
}
I am not familiar enough with this to give you more advice at this point.
On the COM+ side, your object needs to be configured to use (most likely "require") a distributed transaction. You can do that from COM+ Explorer, by going to your object's Properties, selecting the Transaction tab, and clicking on "Required". I don't remember if you can do this from code as well; VB6 was created before COM+ was released, so it doesn't fully support everything COM+ does (its transactional support was meant for COM+'s predecessor, called MS Transaction Server).
If everything works correctly, your COM+ object should be enlisting in the existing Context created by your .NET code.
You can use the "Distributed Transaction Coordinator\Transaction List" node in "Component Services" to check and see the distributed transaction being created during the call.
Be aware that you cannot see the changes from the COM+ component reflected on data queries from the .NET side until the Transaction is committed! In fact, it is possible to deadlock! Remember that DTC will make sure that the two transactions are paired, but they are still separate database transactions.
A:
How are you implementing this? If you are using EnterpriseServices to manage the .NET transaction, then both transactions should get rolled back, since you're using the same context for them both.
|
Is it possible to share a transaction between a .Net application and a COM+ object?
|
I did some tests a while ago and never figured out how to make this work.
The ingredients:
COM+ transactional object (developed in VB6)
.Net web application (with transaction) in IIS that...
makes a call to the COM+ component
updates a row in a SQL database
Testing:
Run the .Net application and force an exception.
Result:
The update made from the .Net application rolls back.
The update made by the COM+ object does not roll back.
If I call the COM+ object from an old ASP page the rollback works.
I know some people may be thinking "what?! COM+ and .Net you must be out of your mind!", but there are some places in this world where there still are a lot of COM+ components. I was just curious if someone ever faced this and if you figured out how to make this work.
|
[
"Because VB and .NET will use different SQL connections (and there is no way to make ADO and ADO.NET share the same connection), your only possibility is to enlist the DTC (Distributed Transaction Coordinator). The DTC will coordinates the two independent transactions so they commit or are rolled-back together.\nFrom .NET, EnterpriseServices manages COM+ functionality, such as the DTC. In .NET 2.0 and forward, you can use the System.Transactions namespace, which makes things a little nicer. I think something like this should work (untested code):\nvoid SomeMethod()\n{\n EnterpriseServicesInteropOption e = EnterpriseServicesInteropOption.Full;\n using (TransactionScope s = new TransactionScope(e))\n {\n MyComPlusClass o = new MyComPlusClass();\n\n o.SomeTransactionalMethod();\n }\n}\n\nI am not familiar enough with this to give you more advice at this point.\nOn the COM+ side, your object needs to be configured to use (most likely \"require\") a distributed transaction. You can do that from COM+ Explorer, by going to your object's Properties, selecting the Transaction tab, and clicking on \"Required\". I don't remember if you can do this from code as well; VB6 was created before COM+ was released, so it doesn't fully support everything COM+ does (its transactional support was meant for COM+'s predecessor, called MS Transaction Server).\nIf everything works correctly, your COM+ object should be enlisting in the existing Context created by your .NET code.\nYou can use the \"Distributed Transaction Coordinator\\Transaction List\" node in \"Component Services\" to check and see the distributed transaction being created during the call.\nBe aware that you cannot see the changes from the COM+ component reflected on data queries from the .NET side until the Transaction is committed! In fact, it is possible to deadlock! Remember that DTC will make sure that the two transactions are paired, but they are still separate database transactions.\n",
"How are you implementing this? If you are using EnterpriseServices to manage the .NET transaction, then both transactions should get rolled back, since you're using the same context for them both.\n"
] |
[
2,
1
] |
[] |
[] |
[
".net",
"com+",
"database",
"transactions"
] |
stackoverflow_0000021589_.net_com+_database_transactions.txt
|
Q:
Apps that support both DirectX 9 and 10
I have a noobish question for any graphics programmer.
I am confused how some games (like Crysis) can support both DirectX 9 (in XP) and 10 (in Vista)?
What I understand so far is that if you write a DX10 app, then it can only run in Vista.
Maybe they have 2 code bases -- one written in DX9 and another in DX10? But isn't that overkill?
A:
They have two rendering pipelines, one using DX9 calls and one using DX10 calls. The APIs are not compatible, though a majority of any game engine can be reused for either. If you want some Open Source examples of how different rendering pipelines are done, look at something like Ogre3d, which supports OpenGL, DX9, and (soon)DX10 rendering.
A:
The rendering layer of games is usually a fairly well isolated/abstracted part of the whole application. As far as the game engine is concerned, each frame you are simply building up a list of conceptual objects (trees, characters, etc.). If the game engine chooses to render a particular object, then it's up to the rendering layer how to actually translate that intent into DX draw calls. A DX10 rendering will generate a different set of draw calls to a DX9 layer, but conceptually they are still performing the same action - 'render this tree'.
Rendering is nicely abstracted because it's rare that you want to get any information back from the rendering layer; once the 'render this tree' action is performed, the game engine will just assume that the rendering looks correct. There is little need to handle different potential results from DX9/DX10 rendering calls because 99.9% of the information is going from the engine to the graphics system, and the 0.1% that comes back likely takes the same form between the two APIs.
The application setup is a little more icky, because you've got to ask the system whether or not DX10 is supported and gracefully fall back on DX9 otherwise, but this is standard fare for application setup (in the same way that the game has to pick a resolution, refresh rate, input device, etc.).
A:
It is likely that they have an abstraction layer and they develop against that. At run-time they instantiate the DX9 or DX10 wrapping concrete engines.
I imagine their abstraction is positioned very close to the DirectX layer and simply provides DX9 with sensible manual implementations of DX10 functions or enhances DX9 logic when running on DX10.
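To make that concrete, here is a rough C++ sketch of the kind of abstraction layer being described -- the names are invented for illustration, not taken from Crysis or any real engine:
struct Mesh { /* vertex/index data in a real engine */ };

// The engine codes against this interface only; each backend translates
// "render this mesh" into its own API's draw calls.
class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual void DrawMesh(const Mesh &mesh) = 0;
};

class Dx9Renderer : public IRenderer {
public:
    void DrawMesh(const Mesh &mesh) { /* IDirect3DDevice9 draw calls go here */ }
};

class Dx10Renderer : public IRenderer {
public:
    void DrawMesh(const Mesh &mesh) { /* ID3D10Device draw calls go here */ }
};

// Chosen once at startup; the rest of the engine only ever sees IRenderer.
IRenderer *CreateRenderer(bool dx10Available)
{
    if (dx10Available)
        return new Dx10Renderer();
    return new Dx9Renderer();
}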
|
Apps that support both DirectX 9 and 10
|
I have a noobish question for any graphics programmer.
I am confused how some games (like Crysis) can support both DirectX 9 (in XP) and 10 (in Vista)?
What I understand so far is that if you write a DX10 app, then it can only run in Vista.
Maybe they have 2 code bases -- one written in DX9 and another in DX10? But isn't that overkill?
|
[
"They have two rendering pipelines, one using DX9 calls and one using DX10 calls. The APIs are not compatible, though a majority of any game engine can be reused for either. If you want some Open Source examples of how different rendering pipelines are done, look at something like Ogre3d, which supports OpenGL, DX9, and (soon)DX10 rendering.\n",
"The rendering layer of games is usually a fairly well isolated/abstracted part of the whole application. As far as the game engine is concerned, each frame you are simply building up a list of conceptual objects (trees, characters, etc.). If the game engine chooses to render a particular object, then it's up to the rendering layer how to actually translate that intent into DX draw calls. A DX10 rendering will generate a different set of draw calls to a DX9 layer, but conceptually they are still performing the same action - 'render this tree'.\nRendering is nicely abstracted because it's rare that you want to get any information back from the rendering layer, once the 'render this tree' action is performed, the game engine will just assume that the rendering looks correct. There is little need to handle different potential results from DX9/DX10 rendering calls because 99.9% of the information is going from the engine to the graphics system, and the 0.1% that comes back likely takes the same form between the two APIs.\nThe application setup is a little more icky, because you've got to ask the system whether or not DX10 is supported and gracefully fall back on DX9 otherwise, but this is standard fare for application setup (in the same way that the game has to pick a resolution, refresh rate, input device, etc.).\n",
"It is likely that they have an abstraction layer and they develop against that. At run-time they instantiate the DX9 or DX10 wrapping concrete engines.\nI imagine their abstraction is positioned very close to the DirectX layer and simply provides DX9 with sensible manual implementations of DX10 functions or enhances DX9 logic when running on DX10.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"compatibility",
"directx",
"graphics"
] |
stackoverflow_0000065998_compatibility_directx_graphics.txt
|
Q:
Java VNC Libraries
Are there any VNC libraries for Java? I need to build a JSP/Servlet-based VNC server to allow users to share their desktops with the helpdesk. I've seen jVNC, but I'd like to build it myself, for a university project.
In particular, I'm looking for Java libraries that I can use inside another servlet-based application. Unfortunately, TightVNC's source is in C.
A:
have you looked at the tightVNC source? It is fairly terse http://www.tightvnc.com/download.html
|
Java VNC Libraries
|
Are there any VNC libraries for Java? I need to build a JSP/Servlet-based VNC server to allow users to share their desktops with the helpdesk. I've seen jVNC, but I'd like to build it myself, for a university project.
In particular, I'm looking for Java libraries that I can use inside another servlet-based application. Unfortunately, TightVNC's source is in C.
|
[
"have you looked at the tightVNC source? It is fairly terse http://www.tightvnc.com/download.html\n"
] |
[
5
] |
[] |
[] |
[
"java",
"vnc"
] |
stackoverflow_0000066504_java_vnc.txt
|
Q:
SQL/Oracle: when indexes on multiple columns can be used
If I create an index on columns (A, B, C), in that order, my understanding is that the database will be able to use it even if I search only on (A), or (A and B), or (A and B and C), but not if I search only on (B), or (C), or (B and C). Is this correct?
A:
There are actually three index-based access methods that Oracle can use when a predicate is placed on a non-leading column of an index.
i) Index skip-scan: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#PFGRF10105
ii) Fast full index scan: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i52044
iii) Index full scan: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i82107
I've most often seen the fast full index scan "in the wild", but all are possible.
A:
That is not correct. Always best to come up with a test case that represents your data and see for yourself. If you want to really understand the Oracle SQL Optimizer google Jonathan Lewis, read his books, read his blog, check out his website, the guy is amazing, and he always generates test cases.
create table mytab nologging as (
select mod(rownum, 3) x, rownum y, mod(rownum, 3) z from all_objects, (select 'x' from user_tables where rownum < 4)
);
create index i on mytab (x, y, z);
exec dbms_stats.gather_table_stats(ownname=>'DBADMIN',tabname=>'MYTAB', cascade=>true);
set autot trace exp
select * from mytab where y=5000;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
1 0 INDEX (SKIP SCAN) OF 'I' (INDEX) (Cost=1 Card=1 Bytes=10)
A:
Up to version Oracle 8 an index will never be used unless the first column is included in the SQL.
In Oracle 9i the Skip Scan Index Access feature was introduced, which lets the Oracle CBO attempt to use indexes even when the prefix column is not available.
Good overview of how skip scan works here: http://www.quest-pipelines.com/newsletter-v5/1004_C.htm
|
SQL/Oracle: when indexes on multiple columns can be used
|
If I create an index on columns (A, B, C), in that order, my understanding is that the database will be able to use it even if I search only on (A), or (A and B), or (A and B and C), but not if I search only on (B), or (C), or (B and C). Is this correct?
|
[
"There are actually three index-based access methods that Oracle can use when a predicate is placed on a non-leading column of an index.\ni) Index skip-scan: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#PFGRF10105\nii) Fast full index scan: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i52044\niii) Index full scan: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i82107\nI've most often seen the fast full index scan \"in the wild\", but all are possible.\n",
"That is not correct. Always best to come up with a test case that represents your data and see for yourself. If you want to really understand the Oracle SQL Optimizer google Jonathan Lewis, read his books, read his blog, check out his website, the guy is amazing, and he always generates test cases.\ncreate table mytab nologging as (\nselect mod(rownum, 3) x, rownum y, mod(rownum, 3) z from all_objects, (select 'x' from user_tables where rownum < 4)\n);\n\ncreate index i on mytab (x, y, z);\n\nexec dbms_stats.gather_table_stats(ownname=>'DBADMIN',tabname=>'MYTAB', cascade=>true);\n\nset autot trace exp\n\nselect * from mytab where y=5000;\n\nExecution Plan\n----------------------------------------------------------\n 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)\n 1 0 INDEX (SKIP SCAN) OF 'I' (INDEX) (Cost=1 Card=1 Bytes=10)\n\n",
"Up to version Oracle 8 an index will never be used unless the first column is included in the SQL. \nIn Oracle 9i the Skip Scan Index Access feature was introduced, which lets the Oracle CBO attempt to use indexes even when the prefix column is not available. \nGood overview of how skip scan works here: http://www.quest-pipelines.com/newsletter-v5/1004_C.htm\n"
] |
[
14,
9,
5
] |
[] |
[] |
[
"indexing",
"oracle"
] |
stackoverflow_0000057878_indexing_oracle.txt
|
Q:
How can I get Column number of the cursor in a TextBox in C#?
I've got a multiline textBox that I would like to have a label on the form displaying the current line and column position of, as Visual Studio does.
I know I can get the line # with GetLineFromCharIndex, but how can I get the column # on that line?
(I really want the Cursor Position on that line, not 'column', per se)
A:
int line = textbox.GetLineFromCharIndex(textbox.SelectionStart);
int column = textbox.SelectionStart - textbox.GetFirstCharIndexFromLine(line);
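To tie it to a label the way Visual Studio's status bar works, a small usage sketch (assuming a WinForms form with a multiline textBox1 and a statusLabel -- the names are made up), wired to events that move the caret such as KeyUp and MouseUp:
private void UpdateCaretPosition(object sender, EventArgs e)
{
    int index = textBox1.SelectionStart;
    int line = textBox1.GetLineFromCharIndex(index);
    int column = index - textBox1.GetFirstCharIndexFromLine(line);
    // Show 1-based values, the way Visual Studio does
    statusLabel.Text = string.Format("Ln {0}, Col {1}", line + 1, column + 1);
}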
A:
textBox.SelectionStart -
textBox.GetFirstCharIndexFromLine(textBox.GetLineFromCharIndex(textBox.SelectionStart))
A:
Off the top of my head, I think you want the SelectionStart property.
|
How can I get Column number of the cursor in a TextBox in C#?
|
I've got a multiline textBox that I would like to have a label on the form displaying the current line and column position of, as Visual Studio does.
I know I can get the line # with GetLineFromCharIndex, but how can I get the column # on that line?
(I really want the Cursor Position on that line, not 'column', per se)
|
[
"int line = textbox.GetLineFromCharIndex(textbox.SelectionStart);\nint column = textbox.SelectionStart - textbox.GetFirstCharIndexFromLine(line);\n\n",
"textBox.SelectionStart -\ntextBox.GetFirstCharIndexFromLine(textBox.GetLineFromCharIndex(textBox.SelectionStart))\n\n",
"Off the top of my head, I think you want the SelectionStart property.\n"
] |
[
12,
2,
0
] |
[] |
[] |
[
"c#",
"user_interface",
"windows"
] |
stackoverflow_0000066475_c#_user_interface_windows.txt
|
Q:
List operations in Lisp
I have been searching everywhere for the following functionality in Lisp, and have gotten nowhere:
find the index of something in a list. example:
(index-of item InThisList)
replace something at a specific spot in a list. example:
(replace item InThisList AtThisIndex) ;i think this can be done with 'setf'?
return an item at a specific index. example:
(return InThisList ItemAtThisIndex)
Up until this point, I've been faking it with my own functions. I'm wondering if I'm just creating more work for myself.
This is how I've been faking number 1:
(defun my-index (findMe mylist)
(let ((counter 0) (found 1))
(dolist (item mylist)
(cond
((eq item findMe) ;this works because 'eq' checks place in memory,
;and as long as 'findMe' was from the original list, this will work.
(setq found nil)
(found (incf counter))))
counter))
A:
You can use setf and nth to replace and retrieve values by index.
(let ((myList '(1 2 3 4 5 6)))
(setf (nth 4 myList) 101); <----
myList)
(1 2 3 4 101 6)
To find by index you can use the position function.
(let ((myList '(1 2 3 4 5 6)))
(setf (nth 4 myList) 101)
(list myList (position 101 myList)))
((1 2 3 4 101 6) 4)
I found these all in this index of functions.
A:
find the index of something in a list.
In Emacs Lisp and Common Lisp, you have the position function:
> (setq numbers (list 1 2 3 4))
(1 2 3 4)
> (position 3 numbers)
2
In Scheme, here's a tail recursive implementation from DrScheme's doc:
(define list-position
(lambda (o l)
(let loop ((i 0) (l l))
(if (null? l) #f
(if (eqv? (car l) o) i
(loop (+ i 1) (cdr l)))))))
----------------------------------------------------
> (define numbers (list 1 2 3 4))
> (list-position 3 numbers)
2
>
But if you're using a list as a collection of slots to store structured data, maybe you should have a look at defstruct or even some kind of Lisp Object System like CLOS.
If you're learning Lisp, make sure you have a look at Practical Common Lisp and / or The Little Schemer.
Cheers!
A:
Answers:
(position item sequence &key from-end (start 0) end key test test-not)
http://lispdoc.com/?q=position&search=Basic+search
(setf (elt sequence index) value)
(elt sequence index)
http://lispdoc.com/?q=elt&search=Basic+search
NOTE: elt is preferable to nth because elt works on any sequence, not just lists
A:
Jeremy's answers should work; but that said, if you find yourself writing code like
(setf (nth i my-list) new-elt)
you're probably using the wrong datastructure. Lists are simply linked lists, so they're O(N) to access by index. You might be better off using arrays.
Or maybe you're using lists as tuples. In that case, they should be fine. But you probably want to name accessors so someone reading your code doesn't have to remember what "nth 4" is supposed to mean. Something like
(defun my-attr (list)
(nth 4 list))
(defun (setf my-attr) (new list)
(setf (nth 4 list) new))
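For comparison, here is a rough sketch of the same three operations on a vector instead of a list (standard Common Lisp; access by index is constant time):
(let ((v (make-array 6 :initial-contents '(1 2 3 4 5 6))))
  (setf (aref v 4) 101)       ; replace at index
  (list (aref v 4)            ; read at index
        (position 101 v)))    ; find index (position works on any sequence)
; => (101 4)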
A:
+2 for "Practical Common Lisp". It is a mixture of a Common Lisp Cookbook and a quality Teach Yourself Lisp book.
There's also "Successful Common Lisp" (http://www.psg.com/~dlamkins/sl/cover.html and http://www.psg.com/~dlamkins/sl/contents.html) which seemed to fill a few gaps / extend things in "Practical Common Lisp".
I've also read Paul Graham's "ANSI Common Lisp" which is more about the basics of the language, but a bit more of a reference manual.
A:
I have to agree with Thomas. If you use lists like arrays then that's just going to be slow (and possibly awkward). So you should either use arrays or stick with the functions you've written but move them "up" in a way so that you can easily replace the slow lists with arrays later.
|
List operations in Lisp
|
I have been searching everywhere for the following functionality in Lisp, and have gotten nowhere:
find the index of something in a list. example:
(index-of item InThisList)
replace something at a specific spot in a list. example:
(replace item InThisList AtThisIndex) ;i think this can be done with 'setf'?
return an item at a specific index. example:
(return InThisList ItemAtThisIndex)
Up until this point, I've been faking it with my own functions. I'm wondering if I'm just creating more work for myself.
This is how I've been faking number 1:
(defun my-index (findMe mylist)
(let ((counter 0) (found 1))
(dolist (item mylist)
(cond
((eq item findMe) ;this works because 'eq' checks place in memory,
;and as long as 'findMe' was from the original list, this will work.
(setq found nil)
(found (incf counter))))
counter))
|
[
"You can use setf and nth to replace and retrieve values by index.\n(let ((myList '(1 2 3 4 5 6)))\n (setf (nth 4 myList) 101); <----\n myList)\n\n(1 2 3 4 101 6)\n\nTo find by index you can use the position function.\n(let ((myList '(1 2 3 4 5 6)))\n (setf (nth 4 myList) 101)\n (list myList (position 101 myList)))\n\n((1 2 3 4 101 6) 4)\n\nI found these all in this index of functions.\n",
"\n\nfind the index of something in a list.\n\n\nIn Emacs Lisp and Common Lisp, you have the position function:\n> (setq numbers (list 1 2 3 4))\n(1 2 3 4)\n> (position 3 numbers)\n2\n\nIn Scheme, here's a tail recursive implementation from DrScheme's doc:\n(define list-position \n (lambda (o l)\n (let loop ((i 0) (l l))\n (if (null? l) #f\n (if (eqv? (car l) o) i\n (loop (+ i 1) (cdr l)))))))\n\n----------------------------------------------------\n\n> (define numbers (list 1 2 3 4))\n> (list-position 3 numbers)\n2\n> \n\nBut if you're using a list as a collection of slots to store structured data, maybe you should have a look at defstruct or even some kind of Lisp Object System like CLOS.\nIf you're learning Lisp, make sure you have a look at Practical Common Lisp and / or The Little Schemer.\nCheers!\n",
"Answers:\n\n(position item sequence &key from-end (start 0) end key test test-not)\nhttp://lispdoc.com/?q=position&search=Basic+search \n(setf (elt sequence index) value)\n(elt sequence index)\nhttp://lispdoc.com/?q=elt&search=Basic+search\nNOTE: elt is preferable to nth because elt works on any sequence, not just lists\n\n",
"Jeremy's answers should work; but that said, if you find yourself writing code like\n(setf (nth i my-list) new-elt)\nyou're probably using the wrong datastructure. Lists are simply linked lists, so they're O(N) to access by index. You might be better off using arrays.\nOr maybe you're using lists as tuples. In that case, they should be fine. But you probably want to name accessors so someone reading your code doesn't have to remember what \"nth 4\" is supposed to mean. Something like\n(defun my-attr (list)\n (nth 4 list))\n\n(defun (setf my-attr) (new list)\n (setf (nth 4 list) new))\n\n",
"+2 for \"Practical Common Lisp\". It is a mixture of a Common Lisp Cookbook and a quality Teach Yourself Lisp book.\nThere's also \"Successful Common Lisp\" (http://www.psg.com/~dlamkins/sl/cover.html and http://www.psg.com/~dlamkins/sl/contents.html) which seemed to fill a few gaps / extend things in \"Practical Common Lisp\".\nI've also read Paul Graham's \"ANSI Common Lisp\" which is more about the basics of the language, but a bit more of a reference manual.\n",
"I have to agree with Thomas. If you use lists like arrays then that's just going to be slow (and possibly awkward). So you should either use arrays or stick with the functions you've written but move them \"up\" in a way so that you can easily replace the slow lists with arrays later.\n"
] |
[
23,
11,
7,
4,
4,
0
] |
[] |
[] |
[
"functional_programming",
"lisp",
"list"
] |
stackoverflow_0000045227_functional_programming_lisp_list.txt
|
Q:
.NET Development on a Mac Tips
I have just got a MacBook Pro and have been using it (+Fusion) to develop on for about a month now. The purpose of this question is similar to Hidden Features of C#: to become a how-to of tips and tricks for Windows development on a Mac.
I should clarify that I am aware of Boot Camp but do not use it (nor do I have any interest to), hence my use of Steady State to make sure nothing happens to my OS partition without my knowledge. However, as Sara pointed out, Apple makes great hardware and I absolutely LOVE the form factor of my MBP, so for someone who is looking for a Windows-only laptop, a Mac with Boot Camp should not be overlooked as the hardware is amazing.
My environment is as follows
* MacBook Pro 15" 2.4Ghz 2GB RAM (Going to upgrade to 4GB soon)
* VMWare Fusion 2.0 Beta
* Windows XP Pro SP3 (Slipstreamed BEFORE install)
Tips:
* Use Windows Steady State to keep OS consistent
* Use svn+ssh to connect to the mac for small repositories then use time machine to backup.
* Use spaces.
A:
@Andrew - I'm exactly in your situation. I use a MBP while my company work is purely Microsoft based: i.e., .NET, COM etc. While nothing can beat running Vista natively in Boot Camp (I've never seen Vista run so fast), the niceties of having your Mac OS be the "main" OS, for internet, mail etc. has gotten me to the following configuration. Works like a charm:
Hardware
Load up your MBP with the max possible - 4GB. It's really worth every $.
Upgrade your hard drive (if not already) to 7200RPM. Major performance boost here.
Software
Parallels Desktop for Mac for virtualization. You can either have multiple VM, or use a boot camp partition. The latter is supposed to be faster, but I haven't really measured it (I use it for having the option to boot natively if I really need speed). The former allows you to have multiple OS. I gave my VM 1GB memory. I can do more if you want it more snappy.
Microsoft Visual Studio 2005/8 for .NET and C++. I have yet to see any IDE for .NET which beats this one. The IntelliSense is really amazing.
Code Gear (yes we have some Delphi)
For non development occasional need I also keep Microsoft Office 2007 installed. They do have MAC ports, but those don't always cut it.
A:
One more thing, there is a Deep Fried Bytes Podcast that is entirely about .NET development on Mac - you may find some nuggets in there too.
A:
The extra RAM is great for your OS X environment, but my experience has shown you shouldn't exceed VMWare's recommended RAM settings of 1G.
I was unsuccessful at getting a good experience running my VM(s) from an external drive. And it's a firewire 800. Keep your dev image pruned to as little space as possible and run directly from your internal drive.
If you're sticking with XP (good choice BTW), you might want to give VirtualBox a try. It's VERY zippy. However, it chokes on Vista.
If you have a thought about trying Parallels ... DON'T!!! It worked well enough for a while but eventually became very unstable, crashing often when host files were accessed and freezing 2 out of 3 times during startup. Also, their implementation of networking is convoluted and difficult to setup if, say, you wanted to browse an Apache site on your host from your guest.
If you need to resize your image, there's a good tutorial for Parallels using GParted and Partition Magic. I'm sure it would be simple to adapt it to VMWare.
Your use of SVN is almost exactly what I do (repo is on host, backed up with Time Machine). However, you could speed it up and remove the bloat of a server if you go with simply a file-based repository.
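If you go the file-based route, the setup is only a couple of commands (the paths here are placeholders):
svnadmin create /Users/you/repos/myproject
svn checkout file:///Users/you/repos/myproject myproject-wc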
A:
I develop in ASP.Net on my mac almost daily, and I have to question why you aren't interested in Boot Camp. Yeah, VMWare is nice, but for my money nothing beats the performance of running Windows by itself on the Mac.
A:
Just extending this out slightly from the original question, there are some of us doing Delphi Windows development work on virtual machines, too.
I've got a MacBook Pro (1st gen) with a couple of gigs of ram, and a recent iMac (with 4 gigs of ram). I've had more luck than xanadont with external drives, running a couple of different brands on Firewire 400 and finding them to be fine with 16-20Gb VMs. If I'm going to be in one place for a few days (either in the office on the iMac or on the road with the MBP) then I'll copy the VM to the local drive but as a rule it's worked fine for about 2 years now.
I started with Parallels, but there came a point when they started releasing versions that hadn't been regression tested, and sometimes basic stuff would suddenly be broken in the current release. Simple fix, stop downloading the new version and stay 3-6 months behind everyone else. Then I needed to give a VM to a colleague and had to go through a few hoops getting it out of Parallels and into VMware. At that point I tried the Fusion beta, had first-hand experience of moving a VM between Mac and Windows (with no real fuss at all) and that persuaded me to switch to Fusion. I have to say, Fusion is an excellent, stable, reliable tool.
I run WinXP Pro SP3, Delphi 7, Delphi 2007, SQL Express and various development tools on my VMs (I tend to have a VM for each of my clients).
And I agree with xanadont about the 1Gig ram thing - mine tend to have a gig and no more - I didn't see any real change in behaviour/performance with >1Gb in the vm, so it's better off given to the host operating system rather than the virtual one.
A:
I'm in the same boat; VMware on a MBP, doing .NET development (and a little Mono, but that's a different beast). I would recommend updating to the Fusion 2.0 betas if you haven't yet; they're faster and offer some great new features (multiple snapshots! application linking!) and, in my experience, are just as stable as the 1.x releases.
A:
I believe project mono has mac support.
This assumes you want to develop directly on the mac and that you are happy to forgo some of the MS specific features and tools (so no C#3.0, libraries like WPF and Visual Studio).
Of course, using paralles/vmware/virtualbox or any other virtual machine with a windows guest as you describe will also work fine.
A:
Oded, it depends on what type of .NET development one is trying to do, and for what platform. If you're targeting Windows and building something other than console apps, you're best off not using Mono, as Mono projects are not necessarily drop-in-to-Windows-and-go solutions.
A:
This is not purely .NET related but it is in the vein of the using Spaces item in the question.
Trackpad tips for a MacBook running Leopard (may not be supported in earlier OS X versions):
Set System Preferences, Keyboard & Mouse, Trackpad to use Two Finger Secondary Click. This allows you to use two finger taps instead of the Control + Click combo for the Secondary Click (better known as the context menu to us .NET developers).
Set System Preferences, Keyboard & Mouse, Trackpad to use Two Finger Screen Zoom for magnifying an area in the screen by holding the Control key and scrolling up or down. This is useful for quickly magnifying small fonts or image detail in any Mac application and in Windows running under VMware Fusion. You can pick either the Control, Option or Command keys for zooming by clicking the Options button along with other settings.
|
.NET Development on a Mac Tips
|
I have just got a MacBook Pro and have been using it (+Fusion) to develop on for about a month now. The purpose of this question is similar to Hidden Features of C#: to become a how-to of tips and tricks for Windows development on a Mac.
I should clarify that I am aware of Boot Camp but do not use it (nor do I have any interest to), hence my use of Steady State to make sure nothing happens to my OS partition without my knowledge. However, as Sara pointed out, Apple makes great hardware and I absolutely LOVE the form factor of my MBP, so for someone who is looking for a Windows-only laptop, a Mac with Boot Camp should not be overlooked as the hardware is amazing.
My environment is as follows
* MacBook Pro 15" 2.4Ghz 2GB RAM (Going to upgrade to 4GB soon)
* VMWare Fusion 2.0 Beta
* Windows XP Pro SP3 (Slipstreamed BEFORE install)
Tips:
* Use Windows Steady State to keep OS consistent
* Use svn+ssh to connect to the mac for small repositories then use time machine to backup.
* Use spaces.
|
[
"@Andrew - I'm exactly in your situation. I use a MBP while my company work is purely Microsoft based: i.e., .NET, COM etc. While nothing can beat running Vista natively in Boot Camp (I've never seen Vista run so fast), the niceties of having your Mac OS be the \"main\" OS, for internet, mail etc. has gotten me to the following configuration. Works like a charm:\nHardware\n\nLoad up your MBP with the max possible - 4GB. It's really worth every $.\nUpgrade your hard drive (if not already) to 7200RPM. Major performance boost here.\n\nSoftware\n\nParallels Desktop for Mac for virtualization. You can either have multiple VM, or use a boot camp partition. The latter is supposed to be faster, but I haven't really measured it (I use it for having the option to boot natively if I really need speed). The former allows you to have multiple OS. I gave my VM 1GB memory. I can do more if you want it more snappy.\nMicorsoft Visual Studio 2005/8 for .NET and C++. I have yet to see any IDE for .NET which beats this one. The intellisense is really amazing.\nCode Gear (yes we have some Delphi)\n\nFor non development occasional need I also keep Microsoft Office 2007 installed. They do have MAC ports, but those don't always cut it. \n",
"One more thing, there is a Deep Fried Bytes Podcast that is entirely about .NET development on Mac - you may find some nuggets in there too.\n",
"\nThe extra RAM is great for your OS X environment, but my experience has shown you shouldn't exceed VMWare's recommended RAM settings of 1G.\nI was unsuccessful at getting a good experience running my VM(s) from an external drive. And it's a firewire 800. Keep your dev image pruned to as little space as possible and run directly from your internal drive.\nIf you're sticking with XP (good choice BTW), you might want to give VirtualBox a try. It's VERY zippy. However, it chokes on Vista.\nIf you have a thought about trying Parallels ... DON'T!!! It worked well enough for a while but eventually became very unstable, crashing often when host files were accessed and freezing 2 out of 3 times during startup. Also, their implementation of networking is convoluted and difficult to setup if, say, you wanted to browse an Apache site on your host from your guest.\nIf you need to resize your image, there's a good tutorial for Parallels using GParted and Partition Magic. I'm sure it would be simple to adapt it to VMWare.\nYour use of SVN is almost exactly what I do (repo is on host, backed up with Time Machine). However, you could speed it up and remove the bloat of a server if you go with simply a file-based repository.\n\n",
"I develop in ASP.Net on my mac almost daily, and I have to question why you aren't interested in Boot Camp. Yeah, VMWare is nice, but for my money nothing beats the performance of running Windows by itself on the Mac.\n",
"Just extending this out slightly from the original question, there are some of us doing Delphi Windows development work on virtual machines, too. \nI've got a MacBook Pro (1st gen) with a couple of gigs of ram, and a recent iMac (with 4 gigs of ram). I've had more luck than xanadont with external drives, running a couple of different brands on Firewire 400 and finding them to be fine with 16-20Gb VMs. If I'm going to be in one place for a few days (either in the office on the iMac or on the road with the MBP) then I'll copy the VM to the local drive but as a rule it's worked fine for about 2 years now.\nI started with Parallels, but there came a point when they started releasing versions that hadn't been regression tested, and sometimes basic stuff would suddenly be broken in the current release. Simple fix, stop downloading the new version and stay 3-6 months behind everyone else. Then I needed to give a VM to a colleague and had to go through a few hoops getting it out of Parallels and into VMware. At that point I tried the Fusion beta, had first-hand experience of moving a VM between Mac and Windows (with no real fuss at all) and that persuaded me to switch to Fusion. I have to say, Fusion is an excellent, stable, reliable tool.\nI run WInXP Pro SP 3, Delphi 7, Delphi 2007, SQL Express and various development tools on my VMs (I tend to have a VM for each of my clients).\nAnd I agree with xanadont about the 1Gig ram thing - mine tend to have a gig and no more - I didn't see any real change in behaviour/performance with >1Gb in the vm, so it's better off given to the host operating system rather than the virtual one.\n",
"I'm in the same boat; VMware on a MBP, doing .NET development (and a little Mono, but that's a different beast). I would recommend updating to the Fusion 2.0 betas if you haven't yet; they're faster and offer some great new features (multiple snapshots! application linking!) and, in my experience, are just as stable as the 1.x releases.\n",
"I believe project mono has mac support.\nThis assumes you want to develop directly on the mac and that you are happy to forgo some of the MS specific features and tools (so no C#3.0, libraries like WPF and Visual Studio).\nOf course, using paralles/vmware/virtualbox or any other virtual machine with a windows guest as you describe will also work fine.\n",
"Oded, it depends on what type of .NET development one is trying to do, and for what platform. If you're targeting Windows and building something other than console apps, you're best off not using Mono, as Mono projects are not necessarily drop-in-to-Windows-and-go solutions.\n",
"This is not purely .NET related but it is in the vein of the using Spaces item in the question.\nTrackpad tips for a MacBook running Leopard (may not be supported in earlier OS X versions):\n\nSet System Preferences, Keyboard & Mouse, Trackpad to use Two Finger Secondary Click. This allows you to use two finger taps instead of the Control + Click combo for the Secondary Click (better know as the context menu to us .NET developers).\nSet System Preferences, Keyboard & Mouse, Trackpad to use Two Finger Screen Zoom for magnifying an area in the screen by holding the Control key and scrolling up or down. This is useful for quickly magnifying small fonts or image detail in any Mac application and in Windows running under VMware Fusion. You can pick either the Control, Option or Command keys for zooming by clicking the Options button along with other settings.\n\n"
] |
[
15,
7,
1,
1,
1,
1,
0,
0,
0
] |
[
"I use a Mac Book Pro as well but I run Vista. I set aside a little space so I could also run Leopard and just use Boot Camp. You can use Boot Camp to just boot from windows so you never have to deal with Leopard unless you want to. \nI would highly reccomend it because Apple makes great hardware while Microsoft makes great tools (and also great OSs, I love Vista) \ngo ahead and downmod me for being a fangirl, but I've found what works for me. \n"
] |
[
-1
] |
[
".net",
"macos",
"virtualization",
"vmware"
] |
stackoverflow_0000044601_.net_macos_virtualization_vmware.txt
|
Q:
Save registry values in WinCE using a C# app
I'm working on a WinCE 6.0 system with a touchscreen that stores its calibration data (x-y location, offset, etc.) in the system registry (HKLM\HARDWARE\TOUCH). Right now, I'm placing the cal values into registry keys that get put into the OS image at build time. That works fine for the monitor that I get the original cal values from, but when I load this image into another system with a different monitor, the touchscreen pointer location is (understandably) off, because the two monitors do not have the same cal values.
My problem is that I don't know how to properly store values into the registry so that they persist after a power cycle. See, I can recalibrate the screen on the second system, but the new values only exist in volatile memory. I suggested to my boss that we could just tell our customer to leave the power on the unit at all times -- that didn't go over well.
I need advice on how to save the new constants into the registry, so that we can calibrate the monitors once before shipping them out to our customer, and not have to make separate OS images for each unit we build.
A C# method that is known to work in CE6.0 would be helpful. Thanks.
-Odbasta
A:
Follow-up on this question:
Thanks DannySmurf, flushing the registry key was ultimately what needed to be done. However, there were a few steps that I was missing before reaching that stage. So, here's what came to light:
I was using a RAM-based registry, where by design the registry does not persist after a cold boot. I had to switch the registry to hive-based.
When switching to a hive-based registry structure, you need to make sure that the hive exists on a non-volatile medium. This is specified in the platform.reg file:
[HKEY_LOCAL_MACHINE\init\BootVars]
"SystemHive"="\\Hard Disk\\system.hv"
"ProfileDir"="\\Documents and Settings"
"RegistryFlags"=dword:1 ; Flush hive on every RegCloseKey call
"SystemHiveInitialSize"=dword:19000 ; Initial size for hive-registry file
"Start DevMgr"=dword:1
Once the system.hv file is on the hard disk (CF card in my case), the values in the registry will persist after a cold boot. Note that the system.hv file contains all the HKLM keys.
It's also important to note that any drivers that need to be initialized on boot have to be specified as such in the .reg files of the solution. For example, I had to make sure that the hard disk drivers (PCMCIA) were loaded before trying to read the system hive file from them. The way to do this is to add a directive in the following format around each driver init key:
;HIVE BOOT SECTION
[HKEY_LOCAL_MACHINE\Drivers\PCCARD\PCMCIA\TEMPLATE\PCMCIA]
"Dll"="pcmcia.dll"
"NoConfig"=dword:1
"IClass"=multi_sz:"{6BEAB08A-8914-42fd-B33F-61968B9AAB32}=PCMCIA Card Services"
"Flags"=dword:1000
;END HIVE BOOT SECTION
That, plus a lot of luck, is about it.
A:
I think what you're probably looking for is the Flush function of the RegistryKey class. This is normally not necessary (the registry is lazily-flushed by default), but if the power is turned off on the device before the system has a chance to do this, changes will be discarded:
http://msdn.microsoft.com/en-us/library/microsoft.win32.registrykey.flush.aspx
This function is available in .NET Compact Framework version 2.0 and better.
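A minimal sketch of the pattern, reusing the touch-calibration key from the question (the value name and the calibrationData variable are placeholders):
using Microsoft.Win32;

// calibrationData stands in for whatever value you are persisting.
RegistryKey key = Registry.LocalMachine.CreateSubKey(@"HARDWARE\TOUCH");
key.SetValue("CalibrationData", calibrationData);
key.Flush();   // force the write now rather than waiting for the lazy flush
key.Close();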
A:
As I understood it, you need to know how to set a value in the registry at runtime. I hope the code below can help you.
using Microsoft.Win32;
/// <summary>
/// store a key value in registry. if it don't exist it will be created.
/// </summary>
/// <param name="mainKey">the main key of key path</param>
/// <param name="subKey">the path below the main key</param>
/// <param name="keyName">the key name</param>
/// <param name="value">the value to be stored</param>
public static void SetRegistry(int mainKey, String subKey, String keyName, object value)
{
if (mainKey != CURRENT_USER && mainKey != LOCAL_MACHINE)
{
throw new ArgumentOutOfRangeException("mainKey", "\'mainKey\' argument can only be AppUtils.CURRENT_USER or AppUtils.LOCAL_MACHINE values");
}
if (subKey == null)
{
throw new ArgumentNullException("subKey", "\'subKey\' argument cannot be null");
}
if (keyName == null)
{
throw new ArgumentNullException("keyName", "\'keyName\' argument cannot be null");
}
const Boolean WRITABLE = true;
RegistryKey key = null;
if (mainKey == CURRENT_USER)
{
key = Registry.CurrentUser.OpenSubKey(subKey, WRITABLE);
if (key == null)
{
key = Registry.CurrentUser.CreateSubKey(subKey);
}
}
else if (mainKey == LOCAL_MACHINE)
{
key = Registry.LocalMachine.OpenSubKey(subKey, WRITABLE);
if (key == null)
{
key = Registry.LocalMachine.CreateSubKey(subKey);
}
}
key.SetValue(keyName, value);
}
/// <summary>
/// find a key value in registry. if it don't exist the default value will be returned.
/// </summary>
/// <param name="mainKey">the main key of key path</param>
/// <param name="subKey">the path below the main key</param>
/// <param name="keyName">the key name</param>
/// <param name="defaultValue">the value to be stored</param>
public static object GetRegistry(int mainKey, String subKey, String keyName, object defaultValue)
{
if (mainKey != CURRENT_USER && mainKey != LOCAL_MACHINE)
{
throw new ArgumentOutOfRangeException("mainKey", "\'mainKey\' argument can only be AppUtils.CURRENT_USER or AppUtils.LOCAL_MACHINE values");
}
if (subKey == null)
{
throw new ArgumentNullException("subKey", "\'subKey\' argument cannot be null");
}
if (keyName == null)
{
throw new ArgumentNullException("keyName", "\'keyName\' argument cannot be null");
}
RegistryKey key = Registry.CurrentUser.OpenSubKey(subKey);
if (mainKey == CURRENT_USER)
{
key = Registry.CurrentUser.OpenSubKey(subKey);
}
else if (mainKey == LOCAL_MACHINE)
{
key = Registry.LocalMachine.OpenSubKey(subKey);
}
object result = defaultValue;
if (key != null)
{
result = key.GetValue(keyName, defaultValue);
}
return result;
}
|
Save registry values in WinCE using a C# app
|
I'm working on a WinCE 6.0 system with a touchscreen that stores its calibration data (x-y location, offset, etc.) in the system registry (HKLM\HARDWARE\TOUCH). Right now, I'm placing the cal values into registry keys that get put into the OS image at build time. That works fine for the monitor that I get the original cal values from, but when I load this image into another system with a different monitor, the touchscreen pointer location is (understandably) off, because the two monitors do not have the same cal values.
My problem is that I don't know how to properly store values into the registry so that they persist after a power cycle. See, I can recalibrate the screen on the second system, but the new values only exist in volatile memory. I suggested to my boss that we could just tell our customer to leave the power on the unit at all times -- that didn't go over well.
I need advice on how to save the new constants into the registry, so that we can calibrate the monitors once before shipping them out to our customer, and not have to make separate OS images for each unit we build.
A C# method that is known to work in CE6.0 would be helpful. Thanks.
-Odbasta
|
[
"Follow-up on this question:\nThanks DannySmurf, flushing the registry key was ultimately what needed to be done. However, there were a few steps that I was missing before reaching that stage. So, here's what came to light:\n\nI was using a RAM-based registry, where by design the registry does not persist after a cold boot. I had to switch the registry to hive-based.\nWhen switching to a hive-based registry structure, you need to make sure that the hive exists on a non-volatile medium. This is specified in the platform.reg file:\n[HKEY_LOCAL_MACHINE\\init\\BootVars]\n\"SystemHive\"=\"\\\\Hard Disk\\\\system.hv\"\n\"ProfileDir\"=\"\\\\Documents and Settings\"\n\"RegistryFlags\"=dword:1 ; Flush hive on every RegCloseKey call\n\"SystemHiveInitialSize\"=dword:19000 ; Initial size for hive-registry file \n\"Start DevMgr\"=dword:1\n\nOnce the system.hv file is on the hard disk (CF card in my case), the values in the registry will persist after a cold boot. Note that the system.hv file contains all the HKLM keys.\nIt's also important to note that any drivers that need to be initialized on boot have to be specified as such in the .reg files of the solution. For example, I had to make sure that the hard disk drivers (PCMCIA) were loaded before trying to read the system hive file from them. The way to do this is to add a directive in the following format around each driver init key:\n;HIVE BOOT SECTION\n[HKEY_LOCAL_MACHINE\\Drivers\\PCCARD\\PCMCIA\\TEMPLATE\\PCMCIA]\n \"Dll\"=\"pcmcia.dll\"\n \"NoConfig\"=dword:1\n \"IClass\"=multi_sz:\"{6BEAB08A-8914-42fd-B33F-61968B9AAB32}=PCMCIA Card Services\"\n \"Flags\"=dword:1000\n;END HIVE BOOT SECTION\n\n\nThat, plus a lot of luck, is about it.\n",
"I think what you're probably looking for is the Flush function of the RegistryKey class. This is normally not necessary (the registry is lazily-flushed by default), but if the power is turned off on the device before the system has a chance to do this, changes will be discarded:\nhttp://msdn.microsoft.com/en-us/library/microsoft.win32.registrykey.flush.aspx\nThis function is available in .NET Compact Framework version 2.0 and better.\n",
"As I understood you need to know how to set a value to the registry during runtime. I hope the codes bellow can help you.\nusing Microsoft.Win32; \n /// <summary>\n /// store a key value in registry. if it don't exist it will be created. \n /// </summary>\n /// <param name=\"mainKey\">the main key of key path</param>\n /// <param name=\"subKey\">the path below the main key</param>\n /// <param name=\"keyName\">the key name</param>\n /// <param name=\"value\">the value to be stored</param>\n public static void SetRegistry(int mainKey, String subKey, String keyName, object value)\n {\n if (mainKey != CURRENT_USER && mainKey != LOCAL_MACHINE)\n {\n throw new ArgumentOutOfRangeException(\"mainKey\", \"\\'mainKey\\' argument can only be AppUtils.CURRENT_USER or AppUtils.LOCAL_MACHINE values\");\n }\n\n if (subKey == null)\n {\n throw new ArgumentNullException(\"subKey\", \"\\'subKey\\' argument cannot be null\");\n }\n\n if (keyName == null)\n {\n throw new ArgumentNullException(\"keyName\", \"\\'keyName\\' argument cannot be null\");\n }\n\n const Boolean WRITABLE = true;\n RegistryKey key = null;\n\n if (mainKey == CURRENT_USER)\n {\n key = Registry.CurrentUser.OpenSubKey(subKey, WRITABLE);\n\n if (key == null)\n {\n key = Registry.CurrentUser.CreateSubKey(subKey);\n }\n }\n else if (mainKey == LOCAL_MACHINE)\n {\n key = Registry.LocalMachine.OpenSubKey(subKey, WRITABLE);\n\n if (key == null)\n {\n key = Registry.LocalMachine.CreateSubKey(subKey);\n }\n }\n\n key.SetValue(keyName, value);\n\n }\n\n /// <summary>\n /// find a key value in registry. if it don't exist the default value will be returned.\n /// </summary>\n /// <param name=\"mainKey\">the main key of key path</param>\n /// <param name=\"subKey\">the path below the main key</param>\n /// <param name=\"keyName\">the key name</param>\n /// <param name=\"defaultValue\">the value to be stored</param>\n\n public static object GetRegistry(int mainKey, String subKey, String keyName, object defaultValue)\n {\n if (mainKey != CURRENT_USER && mainKey != LOCAL_MACHINE)\n {\n throw new ArgumentOutOfRangeException(\"mainKey\", \"\\'mainKey\\' argument can only be AppUtils.CURRENT_USER or AppUtils.LOCAL_MACHINE values\");\n }\n\n if (subKey == null)\n {\n throw new ArgumentNullException(\"subKey\", \"\\'subKey\\' argument cannot be null\");\n }\n\n if (keyName == null)\n {\n throw new ArgumentNullException(\"keyName\", \"\\'keyName\\' argument cannot be null\");\n }\n\n RegistryKey key = Registry.CurrentUser.OpenSubKey(subKey);\n\n if (mainKey == CURRENT_USER)\n {\n key = Registry.CurrentUser.OpenSubKey(subKey);\n }\n else if (mainKey == LOCAL_MACHINE)\n {\n key = Registry.LocalMachine.OpenSubKey(subKey);\n }\n\n object result = defaultValue;\n\n if (key != null)\n {\n result = key.GetValue(keyName, defaultValue);\n }\n\n return result;\n }\n\n"
] |
[
5,
3,
0
] |
[] |
[] |
[
"c#",
"registry",
"windows_ce"
] |
stackoverflow_0000057609_c#_registry_windows_ce.txt
|
Q:
Getting IIS Worker Process Crash dumps
I'm doing something bad in my ASP.NET app. It could be any number of the CTP libraries I'm using, or I'm just not disposing of something properly. But when I redeploy my ASP.NET app to my Vista IIS7 install or my server's IIS6 install, I crash an IIS worker process.
I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain.
When this crash happens, where can I find the crash dump for analysis?
A:
Download Debugging tools for Windows:
http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx
Debugging Tools for Windows has a script (ADPLUS) that allows you to create dumps when a process CRASHES:
http://support.microsoft.com/kb/286350
The command should be something like (if you are using IIS6):
cscript adplus.vbs -crash -pn w3wp.exe
This command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file).
You can open it in WinDBG (also included in the Debugging Tools for Windows). File > Open Crash dump...
By default, WinDBG will show you (next to the command line) the thread where the process crashed.
The first thing you need to do in WinDBG is to load the .NET Framework extensions:
.loadby sos mscorwks
then, you will display the managed callstack:
!clrstack
if the thread was not running managed code, then you'll need to check the native stack:
kpn 200
This should give you some ideas. To continue troubleshooting I recommend you read the following article:
http://msdn.microsoft.com/en-us/library/ee817663.aspx
A:
A quick search found IISState - it relies on the Windows debugging tools and needs to be running when a crash occurs, but given the circumstances you've described, this shouldn't be a problem.
|
Getting IIS Worker Process Crash dumps
|
I'm doing something bad in my ASP.NET app. It could be any number of the CTP libraries I'm using, or I'm just not disposing of something properly. But when I redeploy my ASP.NET app to my Vista IIS7 install or my server's IIS6 install, I crash an IIS worker process.
I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain.
When this crash happens, where can I find the crash dump for analysis?
|
[
"Download Debugging tools for Windows:\nhttp://www.microsoft.com/whdc/DevTools/Debugging/default.mspx\nDebugging Tools for Windows has has a script (ADPLUS) that allows you to create dumps when a process CRASHES:\nhttp://support.microsoft.com/kb/286350\nThe command should be something like (if you are using IIS6):\ncscript adplus.vbs -crash -pn w3wp.exe\n\nThis command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file).\nYou can open it in WinDBG (also included in the Debugging Tools for Windows). File > Open Crash dump...\nBy default, WinDBG will show you (next to the command line) the thread were the process crashed.\nThe first thing you need to do in WinDBG is to load the .NET Framework extensions:\n.loadby sos mscorwks\n\nthen, you will display the managed callstack:\n!clrstack\n\nif the thread was not running managed code, then you'll need to check the native stack:\nkpn 200\n\nThis should give you some ideas. To continue troubleshooting I recommend you read the following article:\nhttp://msdn.microsoft.com/en-us/library/ee817663.aspx\n",
"A quick search found IISState - it relies on the Windows debugging tools and needs to be running when a crash occurs, but given the circumstances you've described, this shouldn't be a problem,\n"
] |
[
16,
1
] |
[] |
[] |
[
"asp.net",
"c#",
"debugging",
"iis",
"multithreading"
] |
stackoverflow_0000053435_asp.net_c#_debugging_iis_multithreading.txt
|
Q:
How do I convert GMT to LocalTime in Win32 C?
I want to convert various location/date/times in history from GMT to local time. It seems that SystemTimeToTzSpecificLocalTime is better than FileTimeToLocalFileTime. When the date/time pairs also include various locations, the conversion gets hairy. I've found a data set at ftp://ftp.iana.org/tz/releases — was ftp://elsie.nci.nih.gov/pub/ — that seems to be nicely complete through history and space, but it appears to be designed to be compiled for one time zone instead of all of them.
GetDynamicTimeZoneInformation and GetTimeZoneInformationForYear functions are only available beginning in Vista / Server 2008 and I have machines back to NT 4.0. I will probably try to use them conditionally on the newer systems.
Is there a nice C package that will solve the problem for me for Windows XP and NT 4.0?
A:
SystemTimeToTzSpecificLocalTime is the right function for this, but you need a way to populate a complete database of TIMEZONE_INFO structures.
For details on how you can build a set of TIMEZONE_INFO structures out of the registry itself, see this thread on egghead cafe:
http://www.eggheadcafe.com/software/aspnet/31656478/get-current-time-in-diffe.aspx
To get things truly correct across legislative changes, you'll need GetDynamicTimeZoneInformation and GetTimeZoneInformationForYear. See http://msdn.microsoft.com/en-us/library/ms724421(VS.85).aspx.
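A minimal sketch of the conversion call itself (GmtToZoneLocal is a made-up wrapper name; the TIME_ZONE_INFORMATION is assumed to have been populated already, for example from the registry approach in that thread):
#include <windows.h>

/* Convert a GMT/UTC SYSTEMTIME into local time for one specific zone.
   tzi must already be filled in before this is called. */
BOOL GmtToZoneLocal(TIME_ZONE_INFORMATION *tzi, SYSTEMTIME *gmt, SYSTEMTIME *local)
{
    return SystemTimeToTzSpecificLocalTime(tzi, gmt, local);
}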
|
How do I convert GMT to LocalTime in Win32 C?
|
I want to convert various location/date/times in history from GMT to local time. It seems that SystemTimeToTzSpecificLocalTime is better than FileTimeToLocalFileTime. When the date/time pairs also include various locations, the conversion gets hairy. I've found a data set at ftp://ftp.iana.org/tz/releases — was ftp://elsie.nci.nih.gov/pub/ — that seems to be nicely complete through history and space, but it appears to be designed to be compiled for one time zone instead of all of them.
GetDynamicTimeZoneInformation and GetTimeZoneInformationForYear functions are only available beginning in Vista / Server 2008 and I have machines back to NT 4.0. I will probably try to use them conditionally on the newer systems.
Is there a nice C package that will solve the problem for me for Windows XP and NT 4.0?
|
[
"SystemTimeToTzSpecificLocalTime is the right function for this, but you need a way to populate a complete database of TIMEZONE_INFO structures. \nFor details on how you can build a set of TIMEZONE_INFO structures out of the registry itself, see this thread on egghead cafe:\nhttp://www.eggheadcafe.com/software/aspnet/31656478/get-current-time-in-diffe.aspx\nTo get things truly correct across legislative changes, you'll need GetDynamicTimeZoneInformation and GetTimeZoneInformationForYear. See http://msdn.microsoft.com/en-us/library/ms724421(VS.85).aspx.\n"
] |
[
1
] |
[] |
[] |
[
"c",
"timezone",
"winapi"
] |
stackoverflow_0000066457_c_timezone_winapi.txt
|
Q:
Handling empty values with ADO.NET and AddWithValue()
I have a control that, upon postback, saves form results back to the database. It populates the values to be saved by iterating through the querystring. So, for the following SQL statement (vastly simplified for the sake of discussion)...
UPDATE MyTable
SET MyVal1 = @val1,
MyVal2 = @val2
WHERE @id = @id
...it would cycle through the querystring keys thusly:
For Each Key As String In Request.QueryString.Keys
Command.Parameters.AddWithValue("@" & Key, Request.QueryString(Key))
Next
HOWEVER, I'm now running into a situation where, under certain circumstances, some of these variables may not be present in the querystring. If I don't pass along val2 in the querystring, I get an error: System.Data.SqlClient.SqlException: Must declare the scalar value "@val2".
Attempts to detect the missing value in the SQL statement...
IF @val2 IS NOT NULL
UPDATE MyTable
SET MyVal1 = @val1,
MyVal2 = @val2
WHERE @id = @id
... have failed.
What's the best way to attack this? Must I parse the SQL block with RegEx, scanning for variable names not present in the querystring? Or, is there a more elegant way to approach?
UPDATE: Detecting null values in the VB codebehind defeats the purpose of decoupling the code from its context. I'd rather not litter my function with conditions for every conceivable variable that might be passed, or not passed.
A:
First of all, I would suggest against adding every entry on the querystring as a parameter name; I'm not sure whether this is safe, but I wouldn't take that chance.
The problem is you're calling
Command.Parameters.AddWithValue("@val2", null)
Instead of this you should be calling:
If MyValue Is Nothing Then
Command.Parameters.AddWithValue("@val2", DBNull.Value)
Else
Command.Parameters.AddWithValue("@val2", MyValue)
End If
A:
Update: The solution I gave is based on the assumption that it is a stored proc.
Will giving a default value of Null to the SQL Stored proc parameters work?
If it is dynamic SQL, always pass the full set of parameters, whether each is null or an actual value, or specify default values.
A:
I like using the AddWithValue method.
I always specify default SQL parameters for the "optional" parameters. That way, if it is empty, ADO.NET will not include the parameter, and the stored procedure will use its default value.
I don't have to deal with checking/passing in DBNull.Value that way.
A:
After struggling to find a simpler solution, I gave up and wrote a routine to parse my SQL query for variable names:
Dim FieldRegEx As New Regex("@([A-Z_]+)", RegexOptions.IgnoreCase)
Dim Fields As Match = FieldRegEx.Match(Query)
Dim Processed As New ArrayList
While Fields.Success
Dim Key As String = Fields.Groups(1).Value
Dim Val As Object = Request.QueryString(Key)
If Val = "" Then Val = DBNull.Value
If Not Processed.Contains(Key) Then
Command.Parameters.AddWithValue("@" & Key, Val)
Processed.Add(Key)
End If
Fields = Fields.NextMatch()
End While
It's a bit of a hack, but it allows me to keep my code blissfully ignorant of the context of my SQL query.
|
Handling empty values with ADO.NET and AddWithValue()
|
I have a control that, upon postback, saves form results back to the database. It populates the values to be saved by iterating through the querystring. So, for the following SQL statement (vastly simplified for the sake of discussion)...
UPDATE MyTable
SET MyVal1 = @val1,
MyVal2 = @val2
WHERE @id = @id
...it would cycle through the querystring keys thusly:
For Each Key As String In Request.QueryString.Keys
Command.Parameters.AddWithValue("@" & Key, Request.QueryString(Key))
Next
HOWEVER, I'm now running into a situation where, under certain circumstances, some of these variables may not be present in the querystring. If I don't pass along val2 in the querystring, I get an error: System.Data.SqlClient.SqlException: Must declare the scalar value "@val2".
Attempts to detect the missing value in the SQL statement...
IF @val2 IS NOT NULL
UPDATE MyTable
SET MyVal1 = @val1,
MyVal2 = @val2
WHERE @id = @id
... have failed.
What's the best way to attack this? Must I parse the SQL block with RegEx, scanning for variable names not present in the querystring? Or, is there a more elegant way to approach?
UPDATE: Detecting null values in the VB codebehind defeats the purpose of decoupling the code from its context. I'd rather not litter my function with conditions for every conceivable variable that might be passed, or not passed.
|
[
"First of all, I would suggest against adding all entries on the querystring as parameter names, I'm not sure this is unsafe, but I wouldn't take that chance.\nThe problem is you're calling\nCommand.Parameters.AddWithValue(\"@val2\", null)\n\nInstead of this you should be calling:\nIf MyValue Is Nothing Then\n Command.Parameters.AddWithValue(\"@val2\", DBNull.Value)\nElse\n Command.Parameters.AddWithValue(\"@val2\", MyValue)\nEnd If\n\n",
"Update: The solution I gave is based on the assumption that it is a stored proc.\nWill giving a default value of Null to the SQL Stored proc parameters work?\nIf it is dynamic sql, always pass the correct number of params, whether it is null or the actual value or specify default values.\n",
"I like using the AddWithValue method.\nI always specify default SQL parameters for the \"optional\" parameters. That way, if it is empty, ADO.NET will not include the parameter, and the stored procedure will use it's default value.\nI don't have to deal with checking/passing in DBNull.Value that way.\n",
"After struggling to find a simpler solution, I gave up and wrote a routine to parse my SQL query for variable names:\nDim FieldRegEx As New Regex(\"@([A-Z_]+)\", RegexOptions.IgnoreCase)\nDim Fields As Match = FieldRegEx.Match(Query)\nDim Processed As New ArrayList\n\nWhile Fields.Success\n Dim Key As String = Fields.Groups(1).Value\n Dim Val As Object = Request.QueryString(Key)\n If Val = \"\" Then Val = DBNull.Value\n If Not Processed.Contains(Key) Then\n Command.Parameters.AddWithValue(\"@\" & Key, Val)\n Processed.Add(Key)\n End If\n Fields = Fields.NextMatch()\nEnd While\n\nIt's a bit of a hack, but it allows me to keep my code blissfully ignorant of the context of my SQL query.\n"
] |
[
9,
0,
0,
0
] |
[] |
[] |
[
"ado.net",
"sql",
"vb.net"
] |
stackoverflow_0000065566_ado.net_sql_vb.net.txt
|
Q:
Problem with unicode String literal in unit test
I have a JUnit test that tests adding Strings to a Dictionary custom type. Everything works fine for everyone else on Linux/Windows machines; however, I'm the first dev in my shop on a Mac, and this unit test fails for me. The offending lines are where unicode string literals are used:
dict.add( "Su字/会意pin", "Su字/会意pin" );
dict.add( "字/会意", "字/会意" );
Is there a platform-independent way to specify the unicode string? I've tried changing the encoding of the file in Eclipse to UTF-8 instead of the default MacRoman, but the test still fails.
A:
In the flags for the javac compiler, set the -encoding flag, so in your case you'd mark it as
javac -encoding UTF-8
|
Problem with unicode String literal in unit test
|
I have a JUnit test that tests adding Strings to a Dictionary custom type. Everything works fine for everyone else on Linux/Windows machines; however, I'm the first dev in my shop on a Mac, and this unit test fails for me. The offending lines are where unicode string literals are used:
dict.add( "Su字/会意pin", "Su字/会意pin" );
dict.add( "字/会意", "字/会意" );
Is there a platform-independent way to specify the unicode string? I've tried changing the encoding of the file in Eclipse to UTF-8 instead of the default MacRoman, but the test still fails.
|
[
"In the flags for the javac compiler, set the -encoding flag, so in your case you'd mark it as\njavac -encoding UTF-8\n\n"
] |
[
4
] |
[] |
[] |
[
"encoding",
"java",
"macos",
"unicode"
] |
stackoverflow_0000066668_encoding_java_macos_unicode.txt
|
Q:
Fail fast finally clause in Java
Is there a way to detect, from within the finally clause, that an exception is in the process of being thrown?
See the example below:
try {
// code that may or may not throw an exception
} finally {
SomeCleanupFunctionThatThrows();
// if currently executing an exception, exit the program,
// otherwise just let the exception thrown by the function
// above propagate
}
or is ignoring one of the exceptions the only thing you can do?
In C++ you can't even ignore one of the exceptions; the runtime just calls terminate(). Most other languages use the same rules as Java.
A:
Set a flag variable, then check for it in the finally clause, like so:
boolean exceptionThrown = true;
try {
mightThrowAnException();
exceptionThrown = false;
} finally {
if (exceptionThrown) {
// Whatever you want to do
}
}
A:
If you find yourself doing this, then you might have a problem with your design. The idea of a "finally" block is that you want something done regardless of how the method exits. Seems to me like you don't need a finally block at all, and should just use the try-catch blocks:
try {
doSomethingDangerous(); // can throw exception
onSuccess();
} catch (Exception ex) {
onFailure();
}
A:
If a function throws and you want to catch the exception, you'll have to wrap the function in a try block, it's the safest way. So in your example:
try {
// ...
} finally {
try {
SomeCleanupFunctionThatThrows();
} catch(Throwable t) { //or catch whatever you want here
// exception handling code, or just ignore it
}
}
A:
Do you mean you want the finally block to act differently depending on whether the try block completed successfully?
If so, you could always do something like:
boolean exceptionThrown = false;
try {
// ...
} catch(Throwable t) {
exceptionThrown = true;
// ...
} finally {
try {
SomeCleanupFunctionThatThrows();
} catch(Throwable t) {
if(exceptionThrown) ...
}
}
That's getting pretty convoluted, though... you might want to think of a way to restructure your code to make doing this unnecessary.
|
Fail fast finally clause in Java
|
Is there a way to detect, from within the finally clause, that an exception is in the process of being thrown?
See the example below:
try {
// code that may or may not throw an exception
} finally {
SomeCleanupFunctionThatThrows();
// if currently executing an exception, exit the program,
// otherwise just let the exception thrown by the function
// above propagate
}
or is ignoring one of the exceptions the only thing you can do?
In C++ it doesn't even let you ignore one of the exceptions and just calls terminate(). Most other languages use the same rules as java.
|
[
"Set a flag variable, then check for it in the finally clause, like so:\nboolean exceptionThrown = true;\ntry {\n mightThrowAnException();\n exceptionThrown = false;\n} finally {\n if (exceptionThrown) {\n // Whatever you want to do\n }\n}\n\n",
"If you find yourself doing this, then you might have a problem with your design. The idea of a \"finally\" block is that you want something done regardless of how the method exits. Seems to me like you don't need a finally block at all, and should just use the try-catch blocks:\ntry {\n doSomethingDangerous(); // can throw exception\n onSuccess();\n} catch (Exception ex) {\n onFailure();\n}\n\n",
"If a function throws and you want to catch the exception, you'll have to wrap the function in a try block, it's the safest way. So in your example:\ntry {\n // ...\n} finally {\n try {\n SomeCleanupFunctionThatThrows();\n } catch(Throwable t) { //or catch whatever you want here\n // exception handling code, or just ignore it\n }\n}\n\n",
"Do you mean you want the finally block to act differently depending on whether the try block completed successfully?\nIf so, you could always do something like:\nboolean exceptionThrown = false;\ntry {\n // ...\n} catch(Throwable t) {\n exceptionThrown = true;\n // ...\n} finally {\n try {\n SomeCleanupFunctionThatThrows();\n } catch(Throwable t) { \n if(exceptionThrown) ...\n }\n}\n\nThat's getting pretty convoluted, though... you might want to think of a way to restructure your code to make doing this unnecessary.\n"
] |
[
14,
10,
1,
0
] |
[
"No I do not believe so. The catch block will run to completion before the finally block.\ntry {\n // code that may or may not throw an exception\n} catch {\n// catch block must exist.\nfinally {\n SomeCleanupFunctionThatThrows();\n// this portion is ran after catch block finishes\n}\n\nOtherwise you can add a synchronize() object that the exception code will use, that you can check in the finally block, which would help you identify if in a seperate thread you are running an exception.\n"
] |
[
-1
] |
[
"exception",
"fault_tolerance",
"java"
] |
stackoverflow_0000066643_exception_fault_tolerance_java.txt
|
Q:
How should I structure a Java application, where do I put my classes?
First of all, I know how to build a Java application. But I have always been puzzled about where to put my classes. There are proponents for organizing the packages in a strictly domain oriented fashion, others separate by tier.
I myself have always had problems with
naming,
placing
So,
Where do you put your domain specific constants (and what is the best name for such a class)?
Where do you put classes for stuff which is both infrastructural and domain specific (for instance I have a FileStorageStrategy class, which stores the files either in the database, or alternatively on the file system)?
Where to put Exceptions?
Are there any standards to which I can refer?
A:
I've really come to like Maven's Standard Directory Layout.
One of the key ideas for me is to have two source roots - one for production code and one for test code like so:
MyProject/src/main/java/com/acme/Widget.java
MyProject/src/test/java/com/acme/WidgetTest.java
(here, both src/main/java and src/test/java are source roots).
Advantages:
Your tests have package (or "default") level access to your classes under test.
You can easily package only your production sources into a JAR by dropping src/test/java as a source root.
One rule of thumb about class placement and packages:
Generally speaking, well structured projects will be free of circular dependencies. Learn when they are bad (and when they are not), and consider a tool like JDepend or SonarJ that will help you eliminate them.
A:
I'm a huge fan of organized sources, so I always create the following directory structure:
/src - for your packages & classes
/test - for unit tests
/docs - for documentation, generated and manually edited
/lib - 3rd party libraries
/etc - unrelated stuff
/bin (or /classes) - compiled classes, output of your compile
/dist - for distribution packages, hopefully auto generated by a build system
In /src I'm using the default Java patterns: Package names starting with your domain (org.yourdomain.yourprojectname) and class names reflecting the OOP aspect you're creating with the class (see the other commenters). Common package names like util, model, view, events are useful, too.
I tend to put constants for a specific topic in a class of their own, like SessionConstants or ServiceConstants, in the same package as the domain classes.
A:
Where I'm working, we're using Maven 2 and we have a pretty nice archetype for our projects. The goal was to obtain a good separation of concerns, thus we defined a project structure using multiple modules (one for each application 'layer'):
- common: common code used by the other layers (e.g., i18n)
- entities: the domain entities
- repositories: this module contains the daos interfaces and implementations
- services-intf: interfaces for the services (e.g, UserService, ...)
- services-impl: implementations of the services (e.g, UserServiceImpl)
- web: everything regarding the web content (e.g., css, jsps, jsf pages, ...)
- ws: web services
Each module has its own dependencies (e.g., repositories could have jpa) and some are project wide (thus they belong in the common module). Dependencies between the different project modules clearly separate things (e.g., the web layer depends on the service layer but doesn't know about the repository layer).
Each module has its own base package, for example if the application package is "com.foo.bar", then we have:
com.foo.bar.common
com.foo.bar.entities
com.foo.bar.repositories
com.foo.bar.services
com.foo.bar.services.impl
...
Each module respects the standard maven project structure:
src\
..main\java
...\resources
..test\java
...\resources
Unit tests for a given layer easily find their place under \src\test... Everything that is domain specific has its place in the entities module. Now something like a FileStorageStrategy should go into the repositories module, since we don't need to know exactly what the implementation is. In the services layer, we only know the repository interface, we do not care what the specific implementation is (separation of concerns).
There are multiple advantages to this approach:
clear separation of concerns
each module is packageable as a jar (or a war in the case of the web module) and thus allows for easier code reuse (e.g., we could install the module in the maven repository and reuse it in another project)
maximum independence of each part of the project
I know this doesn't answer all your questions, but I think this could put you on the right path and could prove useful to others.
A:
Class names should always be descriptive and self-explanatory. If you have multiple domains of responsibility for your classes then they should probably be refactored.
Likewise for your packages. They should be grouped by domain of responsibility. Every domain has its own exceptions.
Generally don't sweat it until you get to a point where it is becoming overwhelming and bloated. Then sit down and don't code, just refactor the classes out, compiling regularly to make sure everything works. Then continue as you did before.
A:
Use packages to group related functionality together.
Usually the top of your package tree is your domain name reversed (com.domain.subdomain) to guarantee uniqueness, and then usually there will be a package for your application. Then subdivide that by related area, so your FileStorageStrategy might go in, say, com.domain.subdomain.myapp.storage, and then there might be specific implementations/subclasses/whatever in com.domain.subdomain.myapp.storage.file and com.domain.subdomain.myapp.storage.database. These names can get pretty long, but import keeps them all at the top of files and IDEs can help to manage that as well.
Exceptions usually go in the same package as the classes that throw them, so if you had, say, FileStorageException it would go in the same package as FileStorageStrategy. Likewise an interface defining constants would be in the same package.
There's not really any standard as such, just use common sense, and if it all gets too messy, refactor!
A:
One thing that I found very helpful for unit tests was to have both a myApp/src/ and a myApp/test_src/ directory. This way, I can place unit tests in the same packages as the classes they test, and yet I can easily exclude the test cases when I prepare my production installation.
A:
Short answer: draw your system architecture in terms of modules, drawn side-by-side, with each module sliced vertically into layers (e.g. view, model, persistence). Then use a structure like com.mycompany.myapp.somemodule.somelayer, e.g. com.mycompany.myapp.client.view or com.mycompany.myapp.server.model.
Using the top level of packages for application modules, in the old-fashioned computer-science sense of modular programming, ought to be obvious. However, on most of the projects I have worked on we end up forgetting to do that, and end up with a mess of packages without that top-level structure. This anti-pattern usually shows itself as a package for something like 'listeners' or 'actions' that groups otherwise unrelated classes simply because they happen to implement the same interface.
Within a module, or in a small application, use packages for the application layers. Likely packages include things like the following, depending on the architecture:
com.mycompany.myapp.view
com.mycompany.myapp.model
com.mycompany.myapp.services
com.mycompany.myapp.rules
com.mycompany.myapp.persistence (or 'dao' for data access layer)
com.mycompany.myapp.util (beware of this being used as if it were 'misc')
Within each of these layers, it is natural to group classes by type if there are a lot. A common anti-pattern here is to unnecessarily introduce too many packages and levels of sub-package so that there are only a few classes in each package.
A:
Keep it simple and don't overthink it. Don't over-abstract and layer too much. Just keep it neat, and as it grows, refactoring it is trivial. One of the best features of IDEs is refactoring, so why not make use of it and save your brain power for solving problems that are related to your app, rather than meta issues like code organisation.
A:
One thing I've done in the past - if I'm extending a class I'll try and follow their conventions. For example, when working with the Spring Framework, I'll have my MVC Controller classes in a package called com.mydomain.myapp.web.servlet.mvc
If I'm not extending something I just go with what is simplest. com.mydomain.domain for Domain Objects (although if you have a ton of domain objects this package could get a bit unwieldy).
For domain specific constants, I actually put them as public constants in the most related class. For example, if I have a "Member" class and have a maximum member name length constant, I put it in the Member class. Some shops make a separate Constants class but I don't see the value in lumping unrelated numbers and strings into a single class. I've seen some other shops try to solve this problem by creating SEPARATE Constants classes, but that just seems like a waste of time and the result is too confusing. Using this setup, a large project with multiple developers will be duplicating constants all over the place.
A:
I like to break my classes down into packages that are related to each other.
For example:
Model For database related calls
View Classes that deal with what you see
Control Core functionality classes
Util Any misc. classes that are used (typically static functions)
etc.
|
How should I structure a Java application, where do I put my classes?
|
First of all, I know how to build a Java application. But I have always been puzzled about where to put my classes. There are proponents for organizing the packages in a strictly domain oriented fashion, others separate by tier.
I myself have always had problems with
naming,
placing
So,
Where do you put your domain specific constants (and what is the best name for such a class)?
Where do you put classes for stuff which is both infrastructural and domain specific (for instance I have a FileStorageStrategy class, which stores the files either in the database, or alternatively on the file system)?
Where to put Exceptions?
Are there any standards to which I can refer?
|
[
"I've really come to like Maven's Standard Directory Layout.\nOne of the key ideas for me is to have two source roots - one for production code and one for test code like so:\nMyProject/src/main/java/com/acme/Widget.java\nMyProject/src/test/java/com/acme/WidgetTest.java\n\n(here, both src/main/java and src/test/java are source roots).\nAdvantages:\n\nYour tests have package (or \"default\") level access to your classes under test.\nYou can easily package only your production sources into a JAR by dropping src/test/java as a source root.\n\nOne rule of thumb about class placement and packages:\nGenerally speaking, well structured projects will be free of circular dependencies. Learn when they are bad (and when they are not), and consider a tool like JDepend or SonarJ that will help you eliminate them.\n",
"I'm a huge fan of organized sources, so I always create the following directory structure:\n/src - for your packages & classes\n/test - for unit tests\n/docs - for documentation, generated and manually edited\n/lib - 3rd party libraries\n/etc - unrelated stuff\n/bin (or /classes) - compiled classes, output of your compile\n/dist - for distribution packages, hopefully auto generated by a build system\n\nIn /src I'm using the default Java patterns: Package names starting with your domain (org.yourdomain.yourprojectname) and class names reflecting the OOP aspect you're creating with the class (see the other commenters). Common package names like util, model, view, events are useful, too.\nI tend to put constants for a specific topic in an own class, like SessionConstants or ServiceConstants in the same package of the domain classes.\n",
"Where I'm working, we're using Maven 2 and we have a pretty nice archetype for our projects. The goal was to obtain a good separation of concerns, thus we defined a project structure using multiple modules (one for each application 'layer'):\n - common: common code used by the other layers (e.g., i18n)\n - entities: the domain entities\n - repositories: this module contains the daos interfaces and implementations\n - services-intf: interfaces for the services (e.g, UserService, ...) \n - services-impl: implementations of the services (e.g, UserServiceImpl) \n - web: everything regarding the web content (e.g., css, jsps, jsf pages, ...)\n - ws: web services\nEach module has its own dependencies (e.g., repositories could have jpa) and some are project wide (thus they belong in the common module). Dependencies between the different project modules clearly separate things (e.g., the web layer depends on the service layer but doesn't know about the repository layer).\nEach module has its own base package, for example if the application package is \"com.foo.bar\", then we have:\ncom.foo.bar.common\ncom.foo.bar.entities\ncom.foo.bar.repositories\ncom.foo.bar.services\ncom.foo.bar.services.impl\n...\n\nEach module respects the standard maven project structure:\n src\\\n ..main\\java\n ...\\resources\n ..test\\java\n ...\\resources\n\nUnit tests for a given layer easily find their place under \\src\\test... Everything that is domain specific has it's place in the entities module. Now something like a FileStorageStrategy should go into the repositories module, since we don't need to know exactly what the implementation is. In the services layer, we only know the repository interface, we do not care what the specific implementation is (separation of concerns).\nThere are multiple advantages to this approach:\n\nclear separation of concerns\neach module is packageable as a jar (or a war in the case of the web module) and thus allows for easier code reuse (e.g., we could install the module in the maven repository and reuse it in another project)\nmaximum independence of each part of the project\n\nI know this doesn't answer all your questions, but I think this could put you on the right path and could prove useful to others.\n",
"Class names should always be descriptive and self-explanatory. If you have multiple domains of responsibility for your classes then they should probably be refactored.\nLikewise for you packages. They should be grouped by domain of responsibility. Every domain has it's own exceptions.\nGenerally don't sweat it until you get to a point where it is becoming overwhelming and bloated. Then sit down and don't code, just refactor the classes out, compiling regularly to make sure everything works. Then continue as you did before.\n",
"Use packages to group related functionality together.\nUsually the top of your package tree is your domain name reversed (com.domain.subdomain) to guarantee uniqueness, and then usually there will be a package for your application. Then subdivide that by related area, so your FileStorageStrategy might go in, say, com.domain.subdomain.myapp.storage, and then there might be specific implementations/subclasses/whatever in com.domain.subdomain.myapp.storage.file and com.domain.subdomain.myapp.storage.database. These names can get pretty long, but import keeps them all at the top of files and IDEs can help to manage that as well.\nExceptions usually go in the same package as the classes that throw them, so if you had, say, FileStorageException it would go in the same package as FileStorageStrategy. Likewise an interface defining constants would be in the same package.\nThere's not really any standard as such, just use common sense, and if it all gets too messy, refactor!\n",
"One thing that I found very helpful for unit tests was to have a myApp/src/ and also myApp/test_src/ directories. This way, I can place unit tests in the same packages as the classes they test, and yet I can easily exclude the test cases when I prepare my production installation.\n",
"Short answer: draw your system architecture in terms of modules, drawn side-by-side, with each module sliced vertically into layers (e.g. view, model, persistence). Then use a structure like com.mycompany.myapp.somemodule.somelayer, e.g. com.mycompany.myapp.client.view or com.mycompany.myapp.server.model.\nUsing the top level of packages for application modules, in the old-fashioned computer-science sense of modular programming, ought to be obvious. However, on most of the projects I have worked on we end up forgetting to do that, and end up with a mess of packages without that top-level structure. This anti-pattern usually shows itself as a package for something like 'listeners' or 'actions' that groups otherwise unrelated classes simply because they happen to implement the same interface.\nWithin a module, or in a small application, use packages for the application layers. Likely packages include things like the following, depending on the architecture:\n\ncom.mycompany.myapp.view\ncom.mycompany.myapp.model\ncom.mycompany.myapp.services\ncom.mycompany.myapp.rules\ncom.mycompany.myapp.persistence (or 'dao' for data access layer)\ncom.mycompany.myapp.util (beware of this being used as if it were 'misc')\n\nWithin each of these layers, it is natural to group classes by type if there are a lot. A common anti-pattern here is to unnecessarily introduce too many packages and levels of sub-package so that there are only a few classes in each package.\n",
"I think keep it simple and don't over think it. Don't over abstract and layer too much. Just keep it neat, and as it grows, refactoring it is trivial. One of the best features of IDEs is refactoring, so why not make use of it and save you brain power for solving problems that are related to your app, rather then meta issues like code organisation.\n",
"One thing I've done in the past - if I'm extending a class I'll try and follow their conventions. For example, when working with the Spring Framework, I'll have my MVC Controller classes in a package called com.mydomain.myapp.web.servlet.mvc\nIf I'm not extending something I just go with what is simplest. com.mydomain.domain for Domain Objects (although if you have a ton of domain objects this package could get a bit unwieldy).\nFor domain specific constants, I actually put them as public constants in the most related class. For example, if I have a \"Member\" class and have a maximum member name length constant, I put it in the Member class. Some shops make a separate Constants class but I don't see the value in lumping unrelated numbers and strings into a single class. I've seen some other shops try to solve this problem by creating SEPARATE Constants classes, but that just seems like a waste of time and the result is too confusing. Using this setup, a large project with multiple developers will be duplicating constants all over the place.\n",
"I like break my classes down into packages that are related to each other.\nFor example:\nModel For database related calls\nView Classes that deal with what you see\nControl Core functionality classes\nUtil Any misc. classes that are used (typically static functions)\netc.\n"
] |
[
28,
12,
11,
4,
3,
3,
3,
2,
0,
0
] |
[] |
[] |
[
"architecture",
"java"
] |
stackoverflow_0000007596_architecture_java.txt
|
Q:
Using network services when disconnected in Mac OS X
From time to time I work in a completely disconnected environment with a MacBook Pro. For testing purposes I need to run a local DNS server in a VMware session. I've configured the lookup system to use the DNS server (/etc/resolv.conf and through the network configuration panel, which uses configd underneath), and commands like "dig" and "nslookup" work. For example, my DNS server is configured to resolve www.example.com to 127.0.0.1; this is the output of "dig www.example.com":
; <<>> DiG 9.3.5-P1 <<>> www.example.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64859
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 86400 IN A 127.0.0.1
;; Query time: 2 msec
;; SERVER: 172.16.35.131#53(172.16.35.131)
;; WHEN: Mon Sep 15 21:13:15 2008
;; MSG SIZE rcvd: 49
Unfortunately, if I try to ping or setup a connection in a browser, the DNS name is not resolved. This is the output of "ping www.example.com":
ping: cannot resolve www.example.com: Unknown host
It seems that the tools that are more tightly integrated with Mac OS X 10.4 (and up) no longer use the "/etc/resolv.conf" system. Configuring them through scutil is no help, because if the wireless or the built-in ethernet interface is inactive, basic network functions don't seem to work.
In Linux (for example Ubuntu), it is possible to turn off the wireless adapter without turning off the network capabilities. So in Linux it seems that I can work completely disconnected.
A solution could be using an ethernet loopback connector, but I would rather like a software solution, as both Windows and Linux don't have this problem.
A:
On OS X starting in 10.4, /etc/resolv.conf is no longer the canonical location for DNS IP addresses. Some Unix tools such as dig and nslookup will use it directly, but anything that uses Unix or Mac APIs to do DNS lookups will not. Instead, configd maintains a database which provides many more options, like using different nameservers for different domains. (A subset of this information is mirrored to /etc/resolv.conf for compatibility.)
You can edit the nameserver info from code with SCDynamicStore, or use scutil interactively or from a script. I posted some links to sample scripts for both methods here. This thread from when I was trying to figure this stuff out may also be of some use.
A:
I run into this from time to time on different notebooks, and I have found the simplest fix is a low-tech, non-software solution - create an ethernet loopback connector. You can make one in 2 minutes with an old network cable: just cut the end off and join the send and receive pairs just above the RJ45 connector. (Obviously your interface needs a static IP.)
Old school, but completely software independent and good for working in a dev environment on long flights... :)
there is a simple diagram here
|
Using network services when disconnected in Mac OS X
|
From time to time I work in a completely disconnected environment with a MacBook Pro. For testing purposes I need to run a local DNS server in a VMware session. I've configured the lookup system to use the DNS server (/etc/resolv.conf and through the network configuration panel, which uses configd underneath), and commands like "dig" and "nslookup" work. For example, my DNS server is configured to resolve www.example.com to 127.0.0.1; this is the output of "dig www.example.com":
; <<>> DiG 9.3.5-P1 <<>> www.example.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64859
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 86400 IN A 127.0.0.1
;; Query time: 2 msec
;; SERVER: 172.16.35.131#53(172.16.35.131)
;; WHEN: Mon Sep 15 21:13:15 2008
;; MSG SIZE rcvd: 49
Unfortunately, if I try to ping or setup a connection in a browser, the DNS name is not resolved. This is the output of "ping www.example.com":
ping: cannot resolve www.example.com: Unknown host
It seems that the tools that are more tightly integrated with Mac OS X 10.4 (and up) no longer use the "/etc/resolv.conf" system. Configuring them through scutil is no help, because if the wireless or the built-in ethernet interface is inactive, basic network functions don't seem to work.
In Linux (for example Ubuntu), it is possible to turn off the wireless adapter without turning off the network capabilities. So in Linux it seems that I can work completely disconnected.
A solution could be using an ethernet loopback connector, but I would rather like a software solution, as both Windows and Linux don't have this problem.
|
[
"On OS X starting in 10.4, /etc/resolv.conf is no longer the canonical location for DNS IP addresses. Some Unix tools such as dig and nslookup will use it directly, but anything that uses Unix or Mac APIs to do DNS lookups will not. Instead, configd maintains a database which provides many more options, like using different nameservers for different domains. (A subset of this information is mirrored to /etc/resolv.conf for compatibility.)\nYou can edit the nameserver info from code with SCDynamicStore, or use scutil interactively or from a script. I posted some links to sample scripts for both methods here. This thread from when I was trying to figure this stuff out may also be of some use. \n",
"I run into this from time to time on different notebooks, and I have found the simplest is a low-tech, non software solution - create an ethernet loopback connecter. You can do it in 2 minutes with an old network cable, just cut the end off and join the send and receive pair just above the RJ45 connector. (obviously your interface needs a static IP)\nOld school, but completely software independent and good for working in a dev environment on long flights... :)\nthere is a simple diagram here\n"
] |
[
1,
0
] |
[] |
[] |
[
"macos",
"networking"
] |
stackoverflow_0000065925_macos_networking.txt
|
Q:
What are the first tasks for implementing Unit Testing in Brownfield Applications?
Do you refactor your SQL first? Your architecture? or your code base?
Do you change languages? Do you throw everything away and start from scratch? [Not refactoring]
A:
I'm adding unit testing to a large, legacy spaghetti codebase.
My approach is, when asked to solve a problem, to create a new wrapper around the part of the code-base which is relevant to my current task. This new wrapper is developed using TDD (writing the test first). Some of the time it calls into the non-unit-tested legacy code. At other times I make a new copy of an existing module and start to do serious violence to it. Sometimes I rewrite functionality from scratch.
But as I'm keeping it fairly well tested I feel pretty in control.
What I find with this code-base, which has been developed with far too much copy and pasting, is that once I get an understanding of a particular part and extract some functions from it (which are done test-first) ... these functions often turn out to be usable in many other places, and so the rate of replacing the legacy code with my own unit-tested libraries increases.
I don't (and have no authority to) try to rewrite or add tests to parts of the code that are not touched by my current problem (usually a bug I'm trying to fix) but I do have a fairly aggressive proactive stance on anything that is touched and might be relevant.
Update : Penguinix asked : "What languages do you work in? Is there a specific Testing Harness you recommend?"
Right now I'm working in ... er ... Mumps! But the same principle works anywhere.
Something that transformed my understanding of UT was MinUnit : http://www.jera.com/techinfo/jtns/jtn002.html
When I saw MinUnit, that was kind of a "zen" moment of enlightenment for me. It stripped away the misunderstandings I had about unit testing being something complicated requiring sophisticated OO frameworks etc. I understood that UT was just about writing a bunch of tests. The "harness" you can write yourself, in about 3 minutes, in any language you like. Just get on and do it.
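For illustration, a MinUnit-style harness in C is roughly just the following - the add() function and the test below are hypothetical examples, not part of the original note:
#include <stdio.h>

/* The whole "framework": two macros and a counter. */
#define mu_assert(message, test) do { if (!(test)) return message; } while (0)
#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

static int tests_run = 0;

/* Hypothetical code under test. */
static int add(int a, int b) { return a + b; }

static char *test_add(void) {
    mu_assert("add(2, 2) should be 4", add(2, 2) == 4);
    return 0;
}

static char *all_tests(void) {
    mu_run_test(test_add);
    return 0;
}

int main(void) {
    char *result = all_tests();
    if (result)
        printf("FAIL: %s\n", result);
    else
        printf("ALL TESTS PASSED\n");
    printf("Tests run: %d\n", tests_run);
    return result != 0;
}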
A:
This really depends on the state of the codebase... are there massive classes? one class with mega-methods? Are the classes tightly coupled? is configuration a burden?
Considering this, I suggest reading Working Effectively with Legacy Code, picking out your problems, and applying the recommendations.
|
What are the first tasks for implementing Unit Testing in Brownfield Applications?
|
Do you refactor your SQL first? Your architecture? or your code base?
Do you change languages? Do you throw everything away and start from scratch? [Not refactoring]
|
[
"I'm adding unit testing to a large, legacy spaghetti codebase. \nMy approach is, when asked to solve a problem, I try to create a new wrapper around the part of the code-base which is relevant to my current task. This new wrapper is developed using TTD (writing the test first). Some of the time calling into the non-unit tested legacy code. At other times I make a new copy of an existing module and start to do serious violence to it. Sometimes I rewrite functionality from scratch. \nBut as I'm keeping it fairly well tested I feel pretty in control.\nWhat I find with this code-base, which has been developed with far too much copy and pasting, is that once I get an understanding a particular part, and extract some functions from it (which are done test-first) ... these functions often turn out to be usable in many other places and so the rate of replacing the legacy code with my own, unit tested libraries increases.\nI don't (and have no authority to) try to rewrite or add tests to parts of the code that are not touched by my current problem (usually a bug I'm trying to fix) but I do have a fairly aggressive proactive stance on anything that is touched and might be relevant.\nUpdate : Penguinix asked : \"What languages do you work in? Is there a specific Testing Harness you recommend?\" \nRight now I'm working in ... er ... Mumps! But the same principle works anywhere. \nSomething that transformed my understanding of UT was MinUnit : http://www.jera.com/techinfo/jtns/jtn002.html \nWhen I saw MinUnit, that was kind of a \"zen\" moment of enlightenment for me. It stripped away the misunderstandings I had about unit testing being something complicated requiring sophisticated OO frameworks etc. I understood that UT was just about writing a bunch of tests. The \"harness\" you can write yourself, in about 3 minutes, in any language you like. Just get on and do it.\n",
"This really depends on the state of the codebase... are there massive classes? one class with mega-methods? Are the classes tightly coupled? is configuration a burden?\nConsidering this, I suggest reading Working Effectively with Legacy Code, picking out your problems, and applying the recommendations.\n"
] |
[
5,
1
] |
[] |
[] |
[
"language_agnostic",
"programming_languages",
"sql",
"testing_strategies"
] |
stackoverflow_0000066361_language_agnostic_programming_languages_sql_testing_strategies.txt
|
Q:
Flip an Image horizontally
I need to flip an image so that a character faces in the right direction. This needs to be done "on the fly", as they say.
The issue I am having is that with Gif images, I seem to lose the transparency. (The background goes white)
Below is the code: (Alternatively someone could send me to a good example)
$img = imagecreatefromgif("./unit.gif");
$size_x = imagesx($img);
$size_y = imagesy($img);
$temp = imagecreatetruecolor($size_x, $size_y);
imagecolortransparent($img, imagecolorallocate($img, 0, 0, 0));
imagealphablending($img, false);
imagesavealpha($img, true);
$x = imagecopyresampled($temp, $img, 0, 0, ($size_x-1), 0, $size_x, $size_y, 0-$size_x, $size_y);
if ($x) {
$img = $temp;
}
else {
die("Unable to flip image");
}
header("Content-type: image/gif");
imagegif($img);
imagedestroy($img);
A:
Shouldn't this:
imagecolortransparent($img, imagecolorallocate($img, 0, 0, 0));
imagealphablending($img, false);
imagesavealpha($img, true);
...be this:
imagecolortransparent($temp, imagecolorallocate($img, 0, 0, 0));
imagealphablending($temp, false);
imagesavealpha($temp, true);
Note you should be calling these functions for the $temp image you have created, not the source image.
A:
Final Results:
$img = imagecreatefromgif("./unit.gif"); // load the source GIF, as in the original code
$size_x = imagesx($img);
$size_y = imagesy($img);
$temp = imagecreatetruecolor($size_x, $size_y);
imagecolortransparent($temp, imagecolorallocate($temp, 0, 0, 0));
imagealphablending($temp, false);
imagesavealpha($temp, true);
$x = imagecopyresampled($temp, $img, 0, 0, ($size_x-1), 0, $size_x, $size_y, 0-$size_x, $size_y);
if ($x) {
$img = $temp;
}
else {
die("Unable to flip image");
}
header("Content-type: image/gif");
imagegif($img);
imagedestroy($img);
A:
If you can guarantee the presence of ImageMagick, you can use their mogrify -flop command. It preserves transparency.
|
Flip an Image horizontally
|
I need to flip an image so that a character faces in the right direction. This needs to be done "on the fly", as they say.
The issue I am having is that with Gif images, I seem to lose the transparency. (The background goes white)
Below is the code: (Alternatively someone could send me to a good example)
$img = imagecreatefromgif("./unit.gif");
$size_x = imagesx($img);
$size_y = imagesy($img);
$temp = imagecreatetruecolor($size_x, $size_y);
imagecolortransparent($img, imagecolorallocate($img, 0, 0, 0));
imagealphablending($img, false);
imagesavealpha($img, true);
$x = imagecopyresampled($temp, $img, 0, 0, ($size_x-1), 0, $size_x, $size_y, 0-$size_x, $size_y);
if ($x) {
$img = $temp;
}
else {
die("Unable to flip image");
}
header("Content-type: image/gif");
imagegif($img);
imagedestroy($img);
|
[
"Shouldn't this:\nimagecolortransparent($img, imagecolorallocate($img, 0, 0, 0));\nimagealphablending($img, false);\nimagesavealpha($img, true);\n\n...be this:\nimagecolortransparent($temp, imagecolorallocate($img, 0, 0, 0));\nimagealphablending($temp, false);\nimagesavealpha($temp, true);\n\nNote you should be calling these functions for the $temp image you have created, not the source image.\n",
"Final Results:\n$size_x = imagesx($img);\n$size_y = imagesy($img);\n\n$temp = imagecreatetruecolor($size_x, $size_y);\n\nimagecolortransparent($temp, imagecolorallocate($temp, 0, 0, 0));\nimagealphablending($temp, false);\nimagesavealpha($temp, true);\n$x = imagecopyresampled($temp, $img, 0, 0, ($size_x-1), 0, $size_x, $size_y, 0-$size_x, $size_y);\nif ($x) {\n $img = $temp;\n}\nelse {\n die(\"Unable to flip image\");\n}\n\nheader(\"Content-type: image/gif\");\nimagegif($img);\nimagedestroy($img);\n\n",
"If you can guarantee the presence of ImageMagick, you can use their mogrify -flop command. It preserves transparency.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"php"
] |
stackoverflow_0000066518_php.txt
|